Dataset schema: id — int64, range 393k to 2.82B; repo — string, 68 classes; title — string, length 1 to 936; body — string, length 0 to 256k; labels — string, length 2 to 508; priority — string, 3 classes; severity — string, 3 classes
2,505,384,239
vscode
Underscores not visible in search in Ubuntu 24.04.1 LTS
Does this issue occur when all extensions are disabled?: Yes Version: 1.92.2 Commit: fee1edb8d6d72a0ddff41e5f71a671c23ed924b9 Date: 2024-08-14T17:29:30.058Z Electron: 30.1.2 ElectronBuildId: 9870757 Chromium: 124.0.6367.243 Node.js: 20.14.0 V8: 12.4.254.20-electron.0 OS: Linux x64 6.8.0-40-generic snap Steps to Reproduce: 1. Use underscores in search in the lower fields 2. See screencast [Screencast from 2024-09-04 16-24-34.webm](https://github.com/user-attachments/assets/5b0a92fb-1f79-4fca-ac5b-b07e068c7321)
upstream,upstream-issue-pending
medium
Critical
2,505,497,573
ui
Add Success Variant to Toast Component
### Feature description This contribution introduces a new success variant to the toast notification component. The success variant is designed to provide visual feedback for successful actions by displaying a green background with white text. This enhancement allows developers to easily use a predefined style for success messages within the toast component, improving the consistency and usability of notifications across the application. ### Affected component/components Toast ### Additional Context Additional details here... ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues and PRs
area: request
low
Minor
2,505,498,237
go
runtime: build fails when run via QEMU for linux/amd64 running on linux/arm64
### Go version go version go1.23.0 linux/arm64 ### Output of `go env` in your module/workspace: ```shell $ go env GO111MODULE='' GOARCH='arm64' GOBIN='' GOCACHE='/home/myitcv/.cache/go-build' GOENV='/home/myitcv/.config/go/env' GOEXE='' GOEXPERIMENT='' GOFLAGS='' GOHOSTARCH='arm64' GOHOSTOS='linux' GOINSECURE='' GOMODCACHE='/home/myitcv/gostuff/pkg/mod' GONOPROXY='' GONOSUMDB='' GOOS='linux' GOPATH='/home/myitcv/gostuff' GOPRIVATE='' GOPROXY='https://proxy.golang.org,direct' GOROOT='/home/myitcv/gos' GOSUMDB='sum.golang.org' GOTMPDIR='' GOTOOLCHAIN='local' GOTOOLDIR='/home/myitcv/gos/pkg/tool/linux_arm64' GOVCS='' GOVERSION='go1.23.0' GODEBUG='' GOTELEMETRY='on' GOTELEMETRYDIR='/home/myitcv/.config/go/telemetry' GCCGO='gccgo' GOARM64='v8.0' AR='ar' CC='gcc' CXX='g++' CGO_ENABLED='1' GOMOD='/home/myitcv/tmp/dockertests/go.mod' GOWORK='' CGO_CFLAGS='-O2 -g' CGO_CPPFLAGS='' CGO_CXXFLAGS='-O2 -g' CGO_FFLAGS='-O2 -g' CGO_LDFLAGS='-O2 -g' PKG_CONFIG='pkg-config' GOGCCFLAGS='-fPIC -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build810191502=/tmp/go-build -gno-record-gcc-switches' ``` ### What did you do? Given: ``` -- Dockerfile -- FROM golang:1.23.0 WORKDIR /app COPY . ./ RUN go build -o asdf ./blah -- blah/main.go -- package main func main() { } -- go.mod -- module mod.example go 1.23.0 ``` Running: ``` docker buildx build --platform linux/amd64 . ``` ### What did you see happen? 
``` [+] Building 0.8s (8/8) FINISHED docker-container:container-builder => [internal] load build definition from Dockerfile 0.0s => => transferring dockerfile: 110B 0.0s => [internal] load metadata for docker.io/library/golang:1.23.0 0.4s => [internal] load .dockerignore 0.0s => => transferring context: 2B 0.0s => [internal] load build context 0.0s => => transferring context: 271B 0.0s => CACHED [1/4] FROM docker.io/library/golang:1.23.0@sha256:613a108a4a4b1dfb6923305db791a19d088f77632317cfc3446825c54fb862cd 0.0s => => resolve docker.io/library/golang:1.23.0@sha256:613a108a4a4b1dfb6923305db791a19d088f77632317cfc3446825c54fb862cd 0.0s => [2/4] WORKDIR /app 0.0s => [3/4] COPY . ./ 0.0s => ERROR [4/4] RUN go build -o asdf ./blah 0.3s ------ > [4/4] RUN go build -o asdf ./blah: 0.268 runtime: lfstack.push invalid packing: node=0xffffa45142c0 cnt=0x1 packed=0xffffa45142c00001 -> node=0xffffffffa45142c0 0.268 fatal error: lfstack.push 0.270 0.270 runtime stack: 0.270 runtime.throw({0xaf644d?, 0x0?}) 0.271 runtime/panic.go:1067 +0x48 fp=0xc000231f08 sp=0xc000231ed8 pc=0x471228 0.271 runtime.(*lfstack).push(0xffffa45040b8?, 0xc0005841c0?) 0.271 runtime/lfstack.go:29 +0x125 fp=0xc000231f48 sp=0xc000231f08 pc=0x40ef65 0.271 runtime.(*spanSetBlockAlloc).free(...) 0.271 runtime/mspanset.go:322 0.271 runtime.(*spanSet).reset(0xfe7680) 0.271 runtime/mspanset.go:264 +0x79 fp=0xc000231f78 sp=0xc000231f48 pc=0x433559 0.271 runtime.finishsweep_m() 0.272 runtime/mgcsweep.go:257 +0x8d fp=0xc000231fb8 sp=0xc000231f78 pc=0x4263ad 0.272 runtime.gcStart.func2() 0.272 runtime/mgc.go:702 +0xf fp=0xc000231fc8 sp=0xc000231fb8 pc=0x46996f 0.272 runtime.systemstack(0x0) 0.272 runtime/asm_amd64.s:514 +0x4a fp=0xc000231fd8 sp=0xc000231fc8 pc=0x4773ca ... ``` My setup here is my host machine is `linux/arm64`, Qemu installed, following the approach described at https://docs.docker.com/build/building/multi-platform/#qemu, to build for `linux/amd64`. 
This has definitely worked in the past, which leads me to suggest that something other than Go has changed or been broken here. However, I note the virtually identical call stack reported in https://github.com/golang/go/issues/54104, hence I am raising it here in the first instance. ### What did you expect to see? Successful run of `docker build`.
NeedsInvestigation,GoCommand,compiler/runtime
medium
Critical
2,505,504,527
opencv
Make proper layers implementations to support the new dnn engine in 5.x
See https://github.com/opencv/opencv/pull/26056 The layers are the following: - [x] Concat - [x] ConstantOfShape - [x] [WIP] Einsum (done by @Abdurrahheem) - [x] Expand - [x] Gather - [ ] GatherND - [ ] GatherElements - [ ] GRU - [ ] [WIP] Flatten - [x] Hardmax - [ ] [WIP by Abdurrahheem] LSTM - [x] Mish - [x] Neg - [ ] Normalize - [x] Pad - [x] Range - [x] Reshape - [x] Resize (implementation has just been fixed to pass all tests, but it can be significantly improved) - [ ] Scatter - [ ] ScatterND - [x] Shape - [x] Slice - [x] Split - [x] Squeeze - [x] Transpose (a.k.a. Permute) - [x] Tile - [ ] TopK - [x] Unsqueeze - [ ] [WIP] Deconvolution (mostly done, there are just 2 remaining failures) - [x] DequantizeLinear - [x] QuantizeLinear - [ ] QLinearConv The new implementations must: 1. conform to the ONNX specification (https://onnx.ai/onnx/operators/index.html), across different opsets 2. not rely on the shape inference being done in the ONNX importer 3. preferably, be 'stateless'. In particular, forget about Layer::finalize(). Don't rely on it, leave it empty. 4. support different data types. All the layers that just move tensor elements around without arithmetic operations on them may simply have branches for 1-byte, 2-byte, 4-byte and 8-byte elements; then they will support all possible data types, except for the int4/uint4/float4 types recently added to the ONNX specs, which OpenCV does not support anyway (yet)
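Requirement 4 above can be illustrated outside OpenCV: a pure data-movement operation never inspects element values, so it only needs one branch per element byte-width, not one per dtype. This is a hedged pure-Python sketch of the idea, not the actual C++ layer code:

```python
import struct

def reverse_elements(buf: bytes, itemsize: int) -> bytes:
    """Reverse the element order of a packed tensor buffer.

    A data-movement op treats elements as opaque byte groups, so
    branching on the element size (1/2/4/8 bytes) covers every dtype
    of that width: int8/uint8, float16/int16, float32/int32, ...
    """
    if itemsize not in (1, 2, 4, 8):
        raise ValueError(f"unsupported element size: {itemsize}")
    # Split the buffer into itemsize-byte chunks and reverse their order.
    elems = [buf[i:i + itemsize] for i in range(0, len(buf), itemsize)]
    return b"".join(reversed(elems))

# The same function handles float32 and int16 data alike.
f32 = struct.pack("<3f", 1.0, 2.0, 3.0)
assert struct.unpack("<3f", reverse_elements(f32, 4)) == (3.0, 2.0, 1.0)
```

The point of the sketch is that adding a new same-width dtype costs nothing: only ops that do arithmetic need per-type code.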
feature
low
Critical
2,505,522,625
next.js
Catch all route within dynamic segment breaks dynamic params
### Link to the code that reproduces this issue https://codesandbox.io/p/devbox/optimistic-brahmagupta-qcqflx ### To Reproduce 1. Start dev server 2. Change locale to be anything but en 3. Page is being rendered / no 404 even though there is dynamicParams = false on the locale's segment level ### Current vs. Expected behavior Following the steps from the previous section, I expected /de to render a 404, as dynamicParams is false and de is not listed within the segment's static params. Somehow the nested catch-all route breaks this behaviour and makes it fall back to dynamic behaviour, trying to render the given locale and running into errors because the locale does not exist. ### Provide environment information ```bash Binaries: Node: 20.9.0 npm: 9.8.1 Yarn: 1.22.19 pnpm: 8.10.2 Relevant Packages: next: 15.0.0-canary.140 // Latest available version is detected (15.0.0-canary.140). eslint-config-next: N/A react: 19.0.0-rc-7771d3a7-20240827 react-dom: 19.0.0-rc-7771d3a7-20240827 typescript: 5.3.3 Next.js Config: output: N/A ``` ### Which area(s) are affected? (Select all that apply) Navigation ### Which stage(s) are affected? (Select all that apply) next dev (local), next start (local), Vercel (Deployed) ### Additional context _No response_
bug,Navigation
low
Critical
2,505,541,899
go
runtime/race:race: TestOutput failures
``` #!watchflakes default <- pkg == "runtime/race:race" && test == "TestOutput" ``` Issue created automatically to collect these failures. Example ([log](https://ci.chromium.org/b/8737753588363053073)): === RUN TestOutput output_test.go:83: failed test case wrappersym, expect: ================== WARNING: DATA RACE Write at 0x[0-9,a-f]+ by goroutine [0-9]: main\.T\.f\(\) .*/main.go:15 \+0x[0-9,a-f]+ main\.\(\*T\)\.f\(\) <autogenerated>:1 \+0x[0-9,a-f]+ main\.main\.gowrap1\(\) ... main.main.gowrap1() /home/swarming/.swarming/w/ir/x/t/TestOutput26943986/013/main.go:9 +0x68 Goroutine 6 (finished) created at: main.main() /home/swarming/.swarming/w/ir/x/t/TestOutput26943986/013/main.go:9 +0x98 ================== Found 1 data race(s) exit status 66 --- FAIL: TestOutput (8.52s) — [watchflakes](https://go.dev/wiki/Watchflakes)
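The expected output in the failing test case above is a regex template. As a hedged illustration (outside the actual Go test harness), one line of that template can be matched against a race-report-style line like so:

```python
import re

# One line of the regex template from the expected output above.
pattern = re.compile(r"Write at 0x[0-9,a-f]+ by goroutine [0-9]")

# A line in the style of an actual race-detector report (illustrative data).
report_line = "Write at 0x00c000124000 by goroutine 6:"

assert pattern.search(report_line) is not None
```

The test failure means the detector's actual report no longer lines up with this template, not that no race was detected.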
NeedsInvestigation,compiler/runtime
low
Critical
2,505,557,864
godot
Movie Writer Colors differ significantly from Rendered Output
### Tested versions Reproducible in 4.3 stable on Windows 11 Pro, with a reasonably recent version of VLC. The color discrepancy exists in VLC Media Player even if all preferences are reset to default. The color discrepancy is visible in the HandBrake preview, but is gone in the H.264 output. ### System information Windows 11 - Godot v4.3.stable.mono.official [77dcf97d8] ### Issue description Recording a movie results in a movie file whose color grading is quite different from the output shown, both while recording and while playing the scene normally. ![explorer_BOuhoWV6LM](https://github.com/user-attachments/assets/5ba8bcdc-4672-4469-bb6c-b3ff6178e120) ### Steps to reproduce 1. Run the attached Reproduction Project, enabling the movie writer 2. Run it again without recording, and bring the video up in a player on the same screen to compare 3. Observe: Differences in colors and brightness even with a world environment derived from the default (color differences are somewhat subtle, but usually noticeable; you can use an image capture program to compare) ![image](https://github.com/user-attachments/assets/c74e46e1-5e84-4ddd-b55d-e6a2c09bf4b9) (Screenshot with locally sampled color values in an image editor, annotated) ### Minimal reproduction project (MRP) ## Reference Project [movie-color-grading-bug-repro.zip](https://github.com/user-attachments/files/16872372/movie-color-grading-bug-repro.zip) ## Reference Project Movie File: [movie.avi.zip](https://github.com/user-attachments/files/16872297/movie.avi.zip) ## "Repaired" Movie after Re-Encoding to H.264 with HandBrake 1.8.1 https://github.com/user-attachments/assets/b29fea98-6fab-4af9-b01d-cf800bd8a55f
topic:rendering,documentation,needs testing
low
Critical
2,505,567,998
langchain
pgvector - (psycopg.DataError) PostgreSQL text fields cannot contain NUL (0x00) bytes
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code The following code failed: ```python vertexai.init(project=PROJECT_ID, location=REGION) embedding_length = 768 # split the documents into chunks text_splitter = RecursiveCharacterTextSplitter( chunk_size = 500, chunk_overlap = 100, separators = ["\n\n", "\n", ".", "!", "?", ",", " ", ""], ) def get_connection_string() -> str: """ Construct the database connection string for the PGVector vector database. returns: The PGSQL connection string. """ CONNECTION_STRING = PGVector.connection_string_from_db_params( driver = "psycopg", host = PGVECTOR_DB_HOST, port = PGVECTOR_DB_PORT, database = PGVECTOR_DATABASE, user = PGVECTOR_DB_USER, password = PASSWORD, ) return CONNECTION_STRING def get_embeddings() -> ce.CustomVertexAIEmbeddings: """ This is an embedding function which is called to reference GCP's embedding model. It passes the arguments to run it in batch mode with a pause, so that the API will not run into an error. returns: CustomEmbedding instance. """ # Embeddings API integrated with langChain EMBEDDING_QPM = 100 EMBEDDING_NUM_BATCH = 5 embeddings = ce.CustomVertexAIEmbeddings( requests_per_minute=EMBEDDING_QPM, num_instances_per_batch=EMBEDDING_NUM_BATCH, model_name="text-multilingual-embedding-002" ) return embeddings def get_pgvector(collection_name: str) -> PGVector: """ The PGVector instance is returned from this function. The instance is dependent on the collection name. arg: collection_name: a string variable, which designates a supplier. return: PGVector instance for the supplier based on collection_name. 
""" vector_store = PGVector( embeddings = get_embeddings(), collection_name = collection_name, connection = get_connection_string(), use_jsonb=True, ) return vector_store def delete_embeddings_from_vectordb(collection_name): print(f"Deleting embeddings from collection-{collection_name}") logging.info(f"Deleting embeddings from collection-{collection_name}") vector_store = .get_pgvector(collection_name) # Delete the collection from pgvector vector_store.delete_collection() logging.info("Embedding deleted.") def add_embeddings_to_vectordb(document_splits, collection_name): print(f"Collection name-{collection_name}") logging.info(f"Collection name-{collection_name}") vector_store = get_pgvector(collection_name) vector_store.add_documents(documents=document_splits) print("Embedding added.") logging.info("Embedding added.") def embed_document(collection_name: str, document_uri: str): """ args: collection_name: a string represents the supplier name which is stored as a PGVector collection in the database. document_uri: a string which is a storage path in a GCS bucket where the supplier documents are stored. is_full_embedding_needed: the embedding process for entire supplier prefix (folder) in GCS to be done or only the selected documents to be embedded. 
document_list: list of individual documents to be embedded, if the above flag is_full_embedding_needed is False """ logging.info(f"Processing documents from {GCS_BUCKET} in a path {document_uri}/to-be-processed") loader = GCSDirectoryLoader(project_name=PROJECT_ID, bucket=GCS_BUCKET, \ prefix=f"{document_uri}/to-be-processed") documents = loader.load() doc_splits = text_splitter.split_documents(documents) # Add chunk number to metadata for idx, split in enumerate(doc_splits): split.metadata["chunk"] = idx split.metadata["id"] = idx logging.info(f"# of documents after the document split = {len(doc_splits)}") if len(doc_splits) > 0: add_embeddings_to_vectordb(document_splits=doc_splits, \ collection_name=collection_name) # Please ignore this, it is to move files between different prefixes in a blob move_files_in_gcs(source_folder=f"{document_uri}/to-be-processed", \ destination_folder=f"{document_uri}/processed") return True, "OK" else: return False, "No documents found in the supplier folder" ``` ### Error Message and Stack Trace (if applicable) 2024-09-04 14:23:04,307 (psycopg.DataError) PostgreSQL text fields cannot contain NUL (0x00) bytes [SQL: INSERT INTO langchain_pg_embedding (id, collection_id, embedding, document, cmetadata) VALUES (%(id_m0)s::VARCHAR, %(collection_id_m0)s::UUID, %(embedding_m0)s, %(document_m0)s::VARCHAR, %(cmetadata_m0)s::JSONB), (%(id_m1)s::VARCHAR, %(collection_id_m1)s::UUID, %(embedding_m1)s, %(document_m1)s::VARCHAR, %(cmetadata_m1)s::JSONB), (%(id_m2)s::VARCHAR, %(collection_id_m2)s::UUID, %(embedding_m2)s, %(document_m2)s::VARCHAR, %(cmetadata_m2)s::JSONB), (%(id_m3)s::VARCHAR, %(collection_id_m3)s::UUID, %(embedding_m3)s, %(document_m3)s::VARCHAR, %(cmetadata_m3)s::JSONB), (%(id_m4)s::VARCHAR, %(collection_id_m4)s::UUID, %(embedding_m4)s, %(document_m4)s::VARCHAR, %(cmetadata_m4)s::JSONB), (%(id_m5)s::VARCHAR, %(collection_id_m5)s::UUID, %(embedding_m5)s, %(document_m5)s::VARCHAR, %(cmetadata_m5)s::JSONB), (%(id_m6)s::VARCHAR, 
%(collection_id_m6)s::UUID, %(embedding_m6)s, %(document_m6)s::VARCHAR, %(cmetadata_m6)s::JSONB), (%(id_m7)s::VARCHAR, %(collection_id_m7)s::UUID, %(embedding_m7)s, %(document_m7)s::VARCHAR, %(cmetadata_m7)s::JSONB), (%(id_m8)s::VARCHAR, %(collection_id_m8)s::UUID, %(embedding_m8)s, %(document_m8)s::VARCHAR, %(cmetadata_m8)s::JSONB), (%(id_m9)s::VARCHAR, %(collection_id_m9)s::UUID, %(embedding_m9)s, %(document_m9)s::VARCHAR, %(cmetadata_m9)s::JSONB), (%(id_m10)s::VARCHAR, %(collection_id_m10)s::UUID, %(embedding_m10)s, %(document_m10)s::VARCHAR, %(cmetadata_m10)s::JSONB), (%(id_m11)s::VARCHAR, %(collection_id_m11)s::UUID, %(embedding_m11)s, %(document_m11)s::VARCHAR, %(cmetadata_m11)s::JSONB), (%(id_m12)s::VARCHAR, %(collection_id_m12)s::UUID, %(embedding_m12)s, %(document_m12)s::VARCHAR, %(cmetadata_m12)s::JSONB), (%(id_m13)s::VARCHAR, %(collection_id_m13)s::UUID, %(embedding_m13)s, %(document_m13)s::VARCHAR, %(cmetadata_m13)s::JSONB), (%(id_m14)s::VARCHAR, %(collection_id_m14)s::UUID, %(embedding_m14)s, %(document_m14)s::VARCHAR, %(cmetadata_m14)s::JSONB) ON CONFLICT (id) DO UPDATE SET embedding = excluded.embedding, document = excluded.document, cmetadata = excluded.cmetadata] [parameters: {'id_m0': '9c794529-2534-457d-b09e-120564e0203b', 'collection_id_m0': UUID('e9acec67-6afd-45dd-9999-509381ee1e22'), 'embedding_m0': '[-0.023259006440639496,-0.026278316974639893,0.010832197964191437,0.02110976167023182,0.03138941153883934,0.010138949379324913,0.042534541338682175,0 ... (15922 characters truncated) ... 137787,-0.0432162843644619,0.0278224665671587,0.07601999491453171,-0.02350415289402008,0.01278616115450859,-0.022451436147093773,0.01470776554197073]', 'document_m0': 'Startup School: Gen AI - list of recommended labs and notebooks\n\nClass\n\nLabs covered\n\nNotebooks covered\n\nLabs can be completed in Cloud Skills Boost pla\x00orm, more instructions here', 'cmetadata_m0': Jsonb({'source': 'gs://my-prod-bucket-s ... 
(135 chars)), 'id_m1': '21c03067-1e96-4e0e-a61e-548b0c3c4c3b', 'collection_id_m1': UUID('e9acec67-6afd-45dd-9999-509381ee1e22'), 'embedding_m1': '[-0.06288693845272064,-0.017322123050689697,0.00843889731913805,0.012165053747594357,0.059546373784542084,0.04053519293665886,0.0543924979865551,0.04 ... (15926 characters truncated) ... -0.08310684561729431,-0.004060924984514713,0.0043006762862205505,0.004421140532940626,0.03354359790682793,-0.05268661677837372,-0.009564831852912903]', 'document_m1': 'Notebooks can only be run using your own Cloud environment, more instructions here\n\n1 Current state of Generative AI Ge\x00ing Started with the Vertex AI Gemini API and Python SDK\n\nn/a\n\nMultimodality with Gemini', 'cmetadata_m1': Jsonb({'source': 'gs://my-prod-bucket-s ... (135 chars)), 'id_m2': '60f291bb-a829-47d5-b5b8-23029b5926cf', 'collection_id_m2': UUID('e9acec67-6afd-45dd-9999-509381ee1e22'), 'embedding_m2': '[-0.015607654117047787,0.0012050960212945938,0.04723281413316727,0.02288687787950039,0.08967798203229904,0.04483583942055702,0.047026704996824265,0.0 ... (15890 characters truncated) ... .09412500262260437,-0.010505995713174343,0.043766316026449203,-0.0004982317914254963,0.020516254007816315,-0.01597626507282257,-0.009379896335303783]', 'document_m2': 'n/a\n\nMultimodality with Gemini\n\nApplications of Generative AI for your business\n\nIntroduction to Generative AI Learning Path\n\nn/a\n\n2 Exploring prompt engineering\n\nn/a', 'cmetadata_m2': Jsonb({'source': 'gs://my-prod-bucket-s ... (135 chars)), 'id_m3': 'fd9271fe-9f2d-4416-a050-b01dfcfa7d40', 'collection_id_m3': UUID('e9acec67-6afd-45dd-9999-509381ee1e22'), 'embedding_m3': '[0.01780903898179531,-0.004708666820079088,0.014872205443680286,-0.04326470196247101,0.10042023658752441,0.02812151610851288,0.06731176376342773,0.01 ... (15939 characters truncated) ... 
-0.09371116012334824,-0.039024095982313156,0.03582580015063286,-0.027630938217043877,0.016092879697680473,0.0015013794181868434,0.002231260761618614]', 'document_m3': 'n/a\n\n2 Exploring prompt engineering\n\nn/a\n\nNotebook: Intro Gemini Notebook: Chain of Thought & React Notebook: Safety ratings & thresholds\n\nImage generation, editing, custom styling and beyond\n\nn/a', 'cmetadata_m3': Jsonb({'source': 'gs://my-prod-bucket-s ... (135 chars)), 'id_m4': 'e2e529a5-1fae-4388-9c97-fa13dd3098e1', 'collection_id_m4': UUID('e9acec67-6afd-45dd-9999-509381ee1e22'), 'embedding_m4': '[-0.01722259446978569,0.039151113480329514,0.042904406785964966,-0.00693162064999342,0.01672654040157795,0.056566305458545685,0.06780373305082321,0.0 ... (15941 characters truncated) ... 07887300848960876,-0.0015596754383295774,-0.012516054324805737,-0.003459786996245384,0.0001272123772650957,0.002018331317231059,0.007937485352158546]', 'document_m4': 'n/a\n\nNotebook: Create High Quality Visual Assets with Imagen and Gemini\n\n3 Embeddings, vector databases\n\nn/a\n\nNotebook: Ge\x00ing Started with Text Embeddings + Vertex AI Vector Search\n\nCode generation, completion, chat\n\nn/a', 'cmetadata_m4': Jsonb({'source': 'gs://my-prod-bucket-s ... (135 chars)), 'id_m5': '76bb710f-a128-4ceb-adbb-cc63db6d6df8', 'collection_id_m5': UUID('e9acec67-6afd-45dd-9999-509381ee1e22'), 'embedding_m5': '[-0.01236398983746767,0.004715809132903814,0.01416818704456091,-0.004159413278102875,0.007086075376719236,0.05418580397963524,0.07686776667833328,0.0 ... (15892 characters truncated) ... ,-0.04131390154361725,0.03247188776731491,-0.007179936859756708,-0.011165671981871128,0.029996506869792938,-0.024639440700411797,0.02816704846918583]', 'document_m5': 'Code generation, completion, chat\n\nn/a\n\nNotebooks: Code Generation\n\n4 Intro to RAG architectures, including Vertex AI Search\n\nIntegrate Search in Applications using Vertex AI Search', 'cmetadata_m5': Jsonb({'source': 'gs://my-prod-bucket-s ... 
(135 chars)), 'id_m6': '14394728-ac6d-4660-bb6f-6eb49b534411', 'collection_id_m6': UUID('e9acec67-6afd-45dd-9999-509381ee1e22'), 'embedding_m6': '[-0.027659673243761063,0.0006991660920903087,0.025508370250463486,0.01155843771994114,0.029794376343488693,0.07038003206253052,0.10488440841436386,0. ... (15913 characters truncated) ... -0.05645660310983658,0.03617023304104805,0.006507235113531351,-0.004400421399623156,0.019326118752360344,-0.026365745812654495,0.0001150920579675585]', 'document_m6': 'Notebook: Multimodal Retrieval Augmented Generation (RAG) using Vertex AI Gemini API\n\nBuilding enterprise chat apps using GenAI\n\nn/a', 'cmetadata_m6': Jsonb({'source': 'gs://my-prod-bucket-s ... (135 chars)), 'id_m7': '4bdd26b1-7890-45df-afb4-be77f4a06af2', 'collection_id_m7': UUID('e9acec67-6afd-45dd-9999-509381ee1e22'), 'embedding_m7': '[-0.010382091626524925,-0.026589123532176018,0.005429613869637251,-0.035534538328647614,-0.04687759280204773,0.05812354385852814,0.023599425330758095 ... (15865 characters truncated) ... 13,-0.030380338430404663,0.0339222326874733,0.021654268726706505,-0.01042112335562706,0.024006597697734833,-0.011356944218277931,0.05818868428468704]', 'document_m7': 'n/a\n\nCodelab: Create a Generative Chat App with Vertex AI Conversation Codelab: Increase intent coverage and handle errors gracefully with generative fallback Codelab: Informed decision making using Dialog\x00ow CX generators and data stores', 'cmetadata_m7': Jsonb({'source': 'gs://my-prod-bucket-s ... (135 chars)), 'id_m8': 'b917689d-0235-4023-8d59-fec37bfc0deb', 'collection_id_m8': UUID('e9acec67-6afd-45dd-9999-509381ee1e22'), 'embedding_m8': '[-0.016646429896354675,-0.035836346447467804,-0.0057340324856340885,0.028716551139950752,0.025854725390672684,0.033178601413965225,0.0703235790133476 ... (15958 characters truncated) ... 
-0.09207435697317123,0.0035694115795195103,-0.017135247588157654,0.002087848959490657,0.04181625321507454,-0.04720042273402214,-0.003726722439751029]', 'document_m8': '5 Deploying and hosting apps in\n\nthe cloud\n\nn/a\n\nDemo App GitHub Repository - sample applications\n\nTuning & RLHF\n\nNotebook: Tuning and deploy a foundation model Notebook: Vertex AI LLM Reinforcement Learning from Human Feedback\n\n6 MLOps for Gen AI', 'cmetadata_m8': Jsonb({'source': 'gs://my-prod-bucket-s ... (135 chars)), 'id_m9': 'd85dfdfa-e30a-40f3-9b1f-725616174e1b', 'collection_id_m9': UUID('e9acec67-6afd-45dd-9999-509381ee1e22'), 'embedding_m9': '[-0.05191882327198982,-0.024967215955257416,0.024289220571517944,0.02554929070174694,-0.0067266556434333324,0.015424377284944057,0.044294703751802444 ... (15864 characters truncated) ... 4,-0.08642246574163437,0.014378088526427746,0.037835948169231415,-0.02229861356317997,0.022061794996261597,-0.03275573253631592,0.010878358036279678]', 'document_m9': '6 MLOps for Gen AI\n\nn/a\n\nBlogpost Notebook: Evaluate LLMs with AutoSxS Model Eval\n\nApplication Development with Duet AI\n\nVisit this doc at goo.gle/GenAI-Labs each week to discover new recommended labs\n\nNotebook Instructions', 'cmetadata_m9': Jsonb({'source': 'gs://my-prod-bucket-s ... (135 chars)), 'id_m10': '7e016f25-1390-4818-9f63-3ef0a0622554', 'collection_id_m10': UUID('e9acec67-6afd-45dd-9999-509381ee1e22'), 'embedding_m10': '[-0.005956327077001333,-0.03804914280772209,-0.007826329208910465,0.005197994410991669,0.0162068922072649,0.039616815745830536,0.04923781752586365,0. ... (15909 characters truncated) ... 
-0.055082451552152634,0.004437738563865423,0.026748552918434143,-0.001225739368237555,0.035878125578165054,-0.04693932458758354,0.004400981590151787]', 'document_m10': 'If our speakers cover a Notebook in class, you’ll need to use Google Colab or AI Vertex Workbench to run these, which will require you to use your own Cloud Console', 'cmetadata_m10': Jsonb({'source': 'gs://my-prod-bucket-s ... (137 chars)), 'id_m11': 'd0fa5c3c-307e-4e97-ab8b-9fd773df2e95', 'collection_id_m11': UUID('e9acec67-6afd-45dd-9999-509381ee1e22'), 'embedding_m11': '[-0.01988862454891205,-0.028791887685656548,0.007235800847411156,0.022267987951636314,0.04042388126254082,0.045118965208530426,0.04622763767838478,0. ... (15919 characters truncated) ... 7,-0.08161881566047668,-0.03470646217465401,0.05456053465604782,-0.0017264139605686069,0.013938860036432743,-0.03372414410114288,0.03529150411486626]', 'document_m11': '. This may have billable components, however we have a Free Trial with $300 in credits or our Cloud Program for startups which both o\x00er Cloud credits that you can use to run these Notebooks.', 'cmetadata_m11': Jsonb({'source': 'gs://my-prod-bucket-s ... (137 chars)), 'id_m12': '4ff49daf-cd41-430e-a7d3-7e23f4f93b65', 'collection_id_m12': UUID('e9acec67-6afd-45dd-9999-509381ee1e22'), 'embedding_m12': '[-0.004883579909801483,-0.028143372386693954,0.010797201655805111,0.04843907803297043,0.09141760319471359,0.012636261060833931,0.051852189004421234,0 ... (15910 characters truncated) ... 595,-0.05188891664147377,-0.01267238799482584,0.03661508113145828,-0.02903173863887787,0.02557116188108921,-0.06374701857566833,0.014439685270190239]', 'document_m12': 'This link should help you set up your \x00rst Google Cloud Project and set up an environment for Notebook.\n\nOur GitHub repository for GenAI notebooks is available here.\n\nLabs Instructions', 'cmetadata_m12': Jsonb({'source': 'gs://my-prod-bucket-s ... 
(137 chars)), 'id_m13': '296e44ee-4062-4fa2-806d-76a22600072b', 'collection_id_m13': UUID('e9acec67-6afd-45dd-9999-509381ee1e22'), 'embedding_m13': '[0.016365613788366318,0.012480733916163445,0.008040445856750011,0.045174866914749146,0.03872095048427582,0.046330638229846954,-0.02483428083360195,0. ... (15916 characters truncated) ... 0.07008633762598038,-0.032697923481464386,0.04701327532529831,0.009774026460945606,-0.010585346259176731,-0.014868056401610374,0.0033125909976661205]', 'document_m13': 'Remember to follow these steps to redeem your credits in Cloud Skills Boost. Paste this link when you are prompted for using a speci\x00c URL (and remember about Incognito Mode): h\x00ps://www.cloudskillsboost', 'cmetadata_m13': Jsonb({'source': 'gs://my-prod-bucket-s ... (137 chars)), 'id_m14': '8d4144f7-7bf6-49a3-a02b-70b4f5d0a1fc', 'collection_id_m14': UUID('e9acec67-6afd-45dd-9999-509381ee1e22'), 'embedding_m14': '[0.001917202607728541,-0.03328728675842285,0.04176468402147293,0.022659817710518837,0.00808507390320301,0.01894487254321575,0.0740545243024826,0.0296 ... (15886 characters truncated) ... 0112,-0.046015415340662,0.009823916479945183,0.06310032308101654,0.02141757868230343,0.0055993665009737015,-0.03736981377005577,0.058463599532842636]', 'document_m14': '.cloudskillsboost.google/catalog_lab/1281?qlcampaign=1b-strsc-90', 'cmetadata_m14': Jsonb({'source': 'gs://my-prod-bucket-s ... (137 chars))}] (Background on this error at: https://sqlalche.me/e/20/9h9h) ### Description * it works fine for most of the cases. But for some documents it throws error. As you can see embedding generation happened properly but it failed to insert to the database. 
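The truncated `document_m0` parameter above shows the culprit: `\x00` bytes in text extracted from the source documents (e.g. `pla\x00orm`). A common workaround for this class of failure, sketched here as a hypothetical helper rather than anything LangChain provides, is to strip NUL bytes from page content and string metadata before calling `add_documents`:

```python
def strip_nul(text: str) -> str:
    """PostgreSQL TEXT columns reject NUL (0x00) bytes, so drop them.

    NUL bytes often show up in text extracted from PDFs (broken
    ligatures such as 'pla\x00orm' for 'platform').
    """
    return text.replace("\x00", "")

def sanitize_documents(doc_splits):
    """Clean page content and string metadata values in place.

    `doc_splits` is assumed to be a list of objects with LangChain
    Document-style `page_content` and `metadata` attributes.
    """
    for doc in doc_splits:
        doc.page_content = strip_nul(doc.page_content)
        doc.metadata = {
            k: strip_nul(v) if isinstance(v, str) else v
            for k, v in doc.metadata.items()
        }
    return doc_splits
```

Calling `sanitize_documents(doc_splits)` just before `add_embeddings_to_vectordb` would avoid the insert error, at the cost of silently altering the stored text.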
### System Info Libraries used: ``` aiohappyeyeballs==2.4.0 aiohttp==3.10.5 aiosignal==1.3.1 annotated-types==0.7.0 antlr4-python3-runtime==4.9.3 anyio==4.4.0 attrs==24.2.0 backoff==2.2.1 beautifulsoup4==4.12.3 CacheControl==0.14.0 cachetools==5.5.0 certifi==2024.7.4 cffi==1.17.0 chardet==5.2.0 charset-normalizer==3.3.2 click==8.1.7 coloredlogs==15.0.1 contourpy==1.3.0 cryptography==43.0.0 cycler==0.12.1 dataclasses-json==0.6.7 deepdiff==8.0.0 Deprecated==1.2.14 docstring_parser==0.16 effdet==0.4.1 emoji==2.12.1 et-xmlfile==1.1.0 filelock==3.15.4 filetype==1.2.0 firebase-admin==6.5.0 flatbuffers==24.3.25 fonttools==4.53.1 frozenlist==1.4.1 fsspec==2024.6.1 google-api-core==2.19.2 google-api-python-client==2.142.0 google-auth==2.34.0 google-auth-httplib2==0.2.0 google-cloud-aiplatform==1.64.0 google-cloud-bigquery==3.25.0 google-cloud-core==2.4.1 google-cloud-firestore==2.18.0 google-cloud-pubsub==2.23.0 google-cloud-resource-manager==1.12.5 google-cloud-secret-manager==2.20.2 google-cloud-storage==2.18.2 google-cloud-vision==3.7.4 google-crc32c==1.5.0 google-resumable-media==2.7.2 googleapis-common-protos==1.65.0 greenlet==3.0.3 grpc-google-iam-v1==0.13.1 grpcio==1.66.0 grpcio-status==1.66.0 h11==0.14.0 httpcore==1.0.5 httplib2==0.22.0 httpx==0.27.2 httpx-sse==0.4.0 huggingface-hub==0.24.6 humanfriendly==10.0 idna==3.8 iopath==0.1.10 Jinja2==3.1.4 joblib==1.4.2 jsonpatch==1.33 jsonpath-python==1.0.6 jsonpointer==3.0.0 kiwisolver==1.4.5 langchain==0.2.15 langchain-community==0.2.13 langchain-core==0.2.35 langchain-google-community==1.0.8 langchain-google-vertexai==1.0.10 langchain-postgres==0.0.9 langchain-text-splitters==0.2.2 langdetect==1.0.9 langsmith==0.1.106 layoutparser==0.3.4 lxml==5.3.0 MarkupSafe==2.1.5 marshmallow==3.22.0 matplotlib==3.9.2 mpmath==1.3.0 msgpack==1.0.8 multidict==6.0.5 mypy-extensions==1.0.0 nest-asyncio==1.6.0 networkx==3.3 nltk==3.9.1 numpy==1.26.4 nvidia-cublas-cu12==12.1.3.1 nvidia-cuda-cupti-cu12==12.1.105 
nvidia-cuda-nvrtc-cu12==12.1.105 nvidia-cuda-runtime-cu12==12.1.105 nvidia-cudnn-cu12==9.1.0.70 nvidia-cufft-cu12==11.0.2.54 nvidia-curand-cu12==10.3.2.106 nvidia-cusolver-cu12==11.4.5.107 nvidia-cusparse-cu12==12.1.0.106 nvidia-nccl-cu12==2.20.5 nvidia-nvjitlink-cu12==12.6.20 nvidia-nvtx-cu12==12.1.105 omegaconf==2.3.0 onnx==1.16.2 onnxruntime==1.19.0 opencv-contrib-python==4.10.0.84 opencv-python==4.10.0.84 openpyxl==3.1.5 orderly-set==5.2.1 orjson==3.10.7 packaging==24.1 pandas==2.2.2 pdf2image==1.17.0 pdfminer.six==20231228 pdfplumber==0.11.4 pgvector==0.2.5 pi_heif==0.18.0 pikepdf==9.2.0 pillow==10.4.0 poppler-utils==0.1.0 portalocker==2.10.1 proto-plus==1.24.0 protobuf==5.27.4 psutil==6.0.0 psycopg==3.2.1 psycopg-binary==3.2.1 psycopg-pool==3.2.2 pyasn1==0.6.0 pyasn1_modules==0.4.0 pycocotools==2.0.8 pycparser==2.22 pydantic==2.8.2 pydantic_core==2.20.1 PyJWT==2.9.0 pyparsing==3.1.4 pypdf==4.3.1 PyPDF2==3.0.1 pypdfium2==4.30.0 pytesseract==0.3.13 python-dateutil==2.9.0.post0 python-docx==1.1.2 python-iso639==2024.4.27 python-magic==0.4.27 python-multipart==0.0.9 pytz==2024.1 PyYAML==6.0.2 rapidfuzz==3.9.6 regex==2024.7.24 requests==2.32.3 requests-toolbelt==1.0.0 rsa==4.9 safetensors==0.4.4 scipy==1.14.1 setuptools==74.0.0 shapely==2.0.6 six==1.16.0 sniffio==1.3.1 soupsieve==2.6 SQLAlchemy==2.0.32 sympy==1.13.2 tabulate==0.9.0 tenacity==8.3.0 timm==1.0.9 tokenizers==0.19.1 torch==2.4.0 torchvision==0.19.0 tqdm==4.66.5 transformers==4.44.2 triton==3.0.0 typing-inspect==0.9.0 typing_extensions==4.12.2 tzdata==2024.1 unstructured==0.15.8 unstructured-client==0.25.5 unstructured-inference==0.7.36 unstructured.pytesseract==0.3.13 uritemplate==4.1.1 urllib3==2.2.2 wrapt==1.16.0 yarl==1.9.4 ``` Ubuntu machine following packages installed: ```bash apt-get install python3-opencv apt-get reinstall pkgconf-bin apt-get install pkg-config apt-get install -y poppler-utils ```
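One plausible reading of the failure (a hypothesis, not confirmed by the traceback): the quoted `document_m13` text contains NUL bytes (`\x00`, e.g. in `speci\x00c` and `h\x00ps`, likely ligature-extraction artifacts from the PDF), and PostgreSQL `text` columns cannot store NUL bytes, so only the documents containing them fail to insert. A minimal sketch of a pre-insert workaround — the `sanitize` helper below is illustrative, not part of langchain:

```python
def sanitize(text: str) -> str:
    """Strip NUL bytes, which PostgreSQL text columns cannot store."""
    return text.replace("\x00", "")

# Example mirroring the garbled document text from the error above.
docs = [
    "Paste this link when you are prompted for using a speci\x00c URL",
    "a clean document",
]
cleaned = [sanitize(d) for d in docs]
print(all("\x00" not in d for d in cleaned))  # True
```

Running extracted page content through such a filter before calling `add_documents` would test whether NUL bytes are indeed the trigger.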
investigate
low
Critical
2,505,585,205
rust
bootstrap panic: overflow when subtracting durations
I tried to bootstrap rustc 1.81.0 pre-release (2024-09-03) with 1.80.1 using Gentoo portage. I expected to see this happen: it should build. Instead, this happened: `bootstrap` panicked here https://github.com/rust-lang/rust/blob/842d6fc32e3d0d26bb11fbe6a2f6ae2afccc06cb/src/bootstrap/src/core/builder.rs#L2266 Note: this is not always reproducible. ### Meta <!-- If you're using the stable version of the compiler, you should also check if the bug also exists in the beta or nightly versions. --> `rustc --version --verbose`: ``` rustc 1.80.1 (3f5fd8dd4 2024-08-06) (gentoo) binary: rustc commit-hash: 3f5fd8dd41153bc5fdca9427e9e05be2c767ba23 commit-date: 2024-08-06 host: x86_64-unknown-linux-musl release: 1.80.1 LLVM version: 18.1.8 ``` <details><summary>Backtrace</summary> <p> ``` Copy/Link "/tmp/portage/dev-lang/rust-1.81.0_rc20240903/work/rustc-1.81.0-src/build/x86_64-unknown-linux-musl/stage2-tools/x86_64-unknown-linux-musl/release/clippy-driver" to "/tmp/portage/dev-lang/rust-1.81.0_rc20240903/work/rustc-1.81.0-src/build/x86_64-unknown-linux-musl/stage2/bin/clippy-driver" Copy/Link "/tmp/portage/dev-lang/rust-1.81.0_rc20240903/work/rustc-1.81.0-src/build/x86_64-unknown-linux-musl/stage2-tools/x86_64-unknown-linux-musl/release/cargo-clippy" to "/tmp/portage/dev-lang/rust-1.81.0_rc20240903/work/rustc-1.81.0-src/build/x86_64-unknown-linux-musl/stage2/bin/cargo-clippy" thread 'main' panicked at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/time.rs:1150:31: overflow when subtracting durations stack backtrace: 0: rust_begin_unwind 1: core::panicking::panic_fmt 2: core::option::expect_failed 3: core::option::Option<T>::expect at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/option.rs:898:21 4: <core::time::Duration as core::ops::arith::Sub>::sub at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/time.rs:1150:31 5: bootstrap::core::builder::Builder::ensure at ./src/bootstrap/src/core/builder.rs:2252:19 6: 
<bootstrap::core::build_steps::tool::Clippy as bootstrap::core::builder::Step>::make_run at ./src/bootstrap/src/core/build_steps/tool.rs:1042:17 7: bootstrap::core::builder::StepDescription::maybe_run at ./src/bootstrap/src/core/builder.rs:392:13 8: bootstrap::core::builder::StepDescription::run at ./src/bootstrap/src/core/builder.rs:433:21 9: bootstrap::core::builder::Builder::run_step_descriptions at ./src/bootstrap/src/core/builder.rs:1098:9 10: bootstrap::core::builder::Builder::execute_cli at ./src/bootstrap/src/core/builder.rs:1078:9 11: bootstrap::Build::build at ./src/bootstrap/src/lib.rs:667:13 12: bootstrap::main at ./src/bootstrap/src/bin/main.rs:79:5 13: core::ops::function::FnOnce::call_once at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/ops/function.rs:250:5 note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace. Traceback (most recent call last): File "/tmp/portage/dev-lang/rust-1.81.0_rc20240903/work/rustc-1.81.0-src/./x.py", line 50, in <module> bootstrap.main() File "/tmp/portage/dev-lang/rust-1.81.0_rc20240903/work/rustc-1.81.0-src/src/bootstrap/bootstrap.py", line 1191, in main bootstrap(args) File "/tmp/portage/dev-lang/rust-1.81.0_rc20240903/work/rustc-1.81.0-src/src/bootstrap/bootstrap.py", line 1167, in bootstrap run(args, env=env, verbose=build.verbose, is_bootstrap=True) File "/tmp/portage/dev-lang/rust-1.81.0_rc20240903/work/rustc-1.81.0-src/src/bootstrap/bootstrap.py", line 186, in run raise RuntimeError(err) RuntimeError: failed to run: /tmp/portage/dev-lang/rust-1.81.0_rc20240903/work/rustc-1.81.0-src/build/bootstrap/debug/bootstrap build -vvv --config=/tmp/portage/dev-lang/rust-1.81.0_rc20240903/work/rustc-1.81.0-src/config.toml -j30 ``` </p> </details>
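The panicking frame is the `Sub` impl for `core::time::Duration`, which aborts on underflow; if the two measured durations in `Builder::ensure` can intermittently go backwards, a `checked_sub` with a zero fallback avoids the panic. A minimal sketch of the difference (not the actual bootstrap code):

```rust
use std::time::Duration;

fn main() {
    let shorter = Duration::from_secs(3);
    let longer = Duration::from_secs(5);

    // `longer - shorter` is fine, but `shorter - longer` would panic with
    // "overflow when subtracting durations", matching the backtrace above.
    assert_eq!(longer - shorter, Duration::from_secs(2));

    // `checked_sub` returns None on underflow instead of panicking,
    // so a zero fallback keeps the build going:
    let delta = shorter.checked_sub(longer).unwrap_or(Duration::ZERO);
    assert_eq!(delta, Duration::ZERO);
}
```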
T-bootstrap,C-bug,T-libs,A-time
low
Critical
2,505,624,886
next.js
Next.js App Router: Server Component with Context Provider and Tailwind causes rendering issues
### Link to the code that reproduces this issue https://github.com/jmderby/min-repro-next-render-issue ### To Reproduce 1. run `pnpm i` 2. run `pnpm dev` 3. visit localhost:3000 and observe that the console logs do not appear in the client browser console. Caveat: the issue reproduces intermittently; to repro reliably, restart the Next.js server. ### Current vs. Expected behavior - Expected: App mounts and re-renders, allowing the `TestProvider`'s console log to print client-side. - Actual: Server hangs after the initial render, blocking further updates, and does not print `TestProvider`'s console log client-side. There is sometimes an error that prints, which is: `Uncaught SyntaxError: Invalid or unexpected token (at layout.js)` ### Provide environment information ```bash Operating System: Platform: darwin Arch: arm64 Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000 Available memory (MB): 16384 Available CPU cores: 8 Binaries: Node: 20.16.0 npm: 10.8.1 Yarn: N/A pnpm: 9.7.1 Relevant Packages: next: 14.2.7 // Latest available version is detected (14.2.7). eslint-config-next: 14.2.7 react: 18.3.1 react-dom: 18.3.1 typescript: 5.5.4 tailwindcss: 3.4.1, Next.js Config: output: N/A ``` ### Which area(s) are affected? (Select all that apply) create-next-app, Developer Experience, Runtime ### Which stage(s) are affected? (Select all that apply) next dev (local) ### Additional context This issue seems to be related to the interaction between Server Components, Client Components with Context Providers, and importing `globals.css` (which contains the Tailwind directives) from the `layout.tsx` Server Component. It persists even when following Next.js best practices for mixing Server Components, Client Components, and Context Providers. The issue is most noticeable immediately after starting the development server. If you refresh the page after the initial load, the app typically functions as expected. 
The problem primarily affects the first render following server startup. A minimal reproduction repository has been created to demonstrate this issue. The reproduction was inspired by this Vercel guide on using React Context with Next.js: https://vercel.com/guides/react-context-state-management-nextjs
create-next-app,bug,Runtime
low
Critical
2,505,663,663
vscode
[json] Intellisense doesn't suggest objects as a possibility when next to string enums
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions --> <!-- 🔎 Search existing issues to avoid creating duplicates. --> <!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ --> <!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. --> <!-- 🔧 Launch with `code --disable-extensions` to check. --> Does this issue occur when all extensions are disabled?: Yes (**issue even present in the bare monaco editor**) <!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. --> <!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. --> - VS Code Version: ``` Version: 1.83.1 (Universal) Commit: f1b07bd25dfad64b0167beb15359ae573aecd2cc Date: 2023-10-10T23:46:55.789Z Electron: 25.8.4 ElectronBuildId: 24154031 Chromium: 114.0.5735.289 Node.js: 18.15.0 V8: 11.4.183.29-electron.0 OS: Darwin arm64 23.4.0 ``` I have JSON schema for which an example object like this is valid: ```json { "spells": ["lumos", { "accio": "thing" }] } ``` A simple JSON schema that demonstrates the issue: `schema.json` ```json { "$schema": "http://json-schema.org/draft-07/schema#", "type": "object", "properties": { "spells": { "type": "array", "items": { "anyOf": [ { "type": "string", "enum": ["nox", "lumos"] }, { "type": "object", "properties": { "accio": { "type": "string" } } } ] } } } } ``` Note that the array item can be one of two strings, or an object. 
When I select the schema to be used in VSCode like so: `.vscode/settings.json` ``` { "json.schemas": [ { "fileMatch": ["*.spells.json"], "url": "./schema.json" } ] } ``` Intellisense always, and only, suggests the two string options: `"nox"`, `"lumos"` and not the object option `{"accio": "thing"}`. ![illustration of the bug](https://github.com/user-attachments/assets/fcf0a53f-6cf4-416f-ab57-dd89ec6449ee) Of course, once I get started on typing an object (`{`) it shows `accio`: ![illustration of it being recognised](https://github.com/user-attachments/assets/97ed241f-b0f6-47ca-9054-b78064d4be11) ## Expected behavior Other editors like JetBrains and Visual Studio will indicate the possibility of the choice also being an object by having a `{ ... }` entry in the list that can be selected, and which gives further information. Something like this: <img width="200" alt="Screenshot 2024-09-04 at 20 58 09" src="https://github.com/user-attachments/assets/d20dcace-b88f-436b-bdb4-9f0901f7ca49"> --- ## Steps to Reproduce: 1. Save the schema in a workspace as `schema.json`. 2. Save the VS Code settings. 3. Create a new file `test.spells.json` 4. Get Intellisense started by typing `{"spells": [ <ctrl> + <space>`
feature-request,json
low
Critical
2,505,678,608
bitcoin
cmake: passing options from depends build to main build
_Originally posted by @hebasto in https://github.com/bitcoin/bitcoin/issues/30800#issuecomment-2326899886_ >I think we should first prioritize coordinating the optimization and debugging flags between the depends subsystem and the main build system: >1. Should the optimization and debugging flags coincide, or are they allowed to differ? >2. If the optimization and debugging can differ between the depends subsystem and in the main build system, which should take precedence?
Brainstorming,Build system
low
Critical
2,505,727,285
pytorch
DISABLED test_streams (__main__.TestCuda)
Platforms: linux This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_streams&suite=TestCuda&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29667386345). Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 18 failures and 6 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_streams` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. <details><summary>Sample error message</summary> ``` Traceback (most recent call last): File "/var/lib/jenkins/workspace/test/test_cuda.py", line 616, in test_streams tensor1 = torch.ByteTensor(5).pin_memory() RuntimeError: Host and device pointer dont match with cudaHostRegister. Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default) To execute this test, run the following from the base repo dir: python test/test_cuda.py TestCuda.test_streams This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` </details> Test file path: `test_cuda.py` cc @ptrblck @msaroufim @clee2000
module: cuda,triaged,module: flaky-tests,skipped
low
Critical
2,505,727,287
pytorch
DISABLED test_set_per_process_memory_fraction (__main__.TestCuda)
Platforms: linux This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_set_per_process_memory_fraction&suite=TestCuda&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29658230012). Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 9 failures and 3 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_set_per_process_memory_fraction` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. <details><summary>Sample error message</summary> ``` Traceback (most recent call last): File "/var/lib/jenkins/workspace/test/test_cuda.py", line 307, in test_set_per_process_memory_fraction self.assertTrue((tensor == 1).all()) File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 685, in assertTrue if not expr: RuntimeError: Host and device pointer dont match with cudaHostRegister. Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default) To execute this test, run the following from the base repo dir: PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/test_cuda.py TestCuda.test_set_per_process_memory_fraction This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` </details> Test file path: `test_cuda.py` cc @ptrblck @msaroufim @clee2000
module: cuda,triaged,module: flaky-tests,skipped
low
Critical
2,505,764,350
node
Source map working in Chrome, but not in Node
### Version 22.5.1 ### Platform ```text Microsoft Windows NT 10.0.19045.0 x64 ``` ### Subsystem _No response_ ### What steps will reproduce the bug? ```js eval(`'use strict'; function get(o, p) { return apply(weakMapGet, p, [o]); } function set(o, p, v) { apply(weakMapSet, p, [o, v]); return v; } function call(o, p, args) { return apply(get(o, p), o, args); } function mark(o, p) { apply(weakSetAdd, p, [o]); } function has(o, p) { return apply(weakSetHas, p, [o]); } function getMisc(k) { return apply(mapGet, misc, [k]); } function setMisc(k, v) { apply(mapSet, misc, [k, v]); } const apply = Function.prototype.call.bind(Function.prototype.apply); const { get: weakMapGet, set: weakMapSet, delete: weakMapDelete } = WeakMap.prototype; const { get: mapGet, set: mapSet } = Map.prototype; const { add: weakSetAdd, has: weakSetHas } = WeakSet.prototype; const misc = new Map(); const internal23839 = new WeakMap(); const internal23840 = new WeakMap(); const internal23831 = new WeakSet(); let { console: console_1, Array: Array_1, TypeError: TypeError_1, Symbol: Symbol_1, Error: Error_1 } = globalThis, { log: log_1 } = console_1, { isArray: isArray_1 } = Array_1; let { push: push_1 } = Array_1.prototype; const { get: get_length_1 = function () { return this.length; }, set: set_length_1 = function (value) { return this.length = value; } } = Reflect.getOwnPropertyDescriptor(Array_1.prototype) ?? 
{}; apply(log_1, console_1, ["initialize"]); class Test { constructor() { if (arguments[0] !== getMisc(23831)) throw new TypeError_1("Illegal constructor"); mark(this, internal23831); set(this, internal23839, function callStaticMethod() { apply(log_1, console_1, [apply(isArray_1, Array_1, [[]]), isArray_1]); equal(apply(isArray_1, Array_1, [[]]), true); }); set(this, internal23840, function callMethod() { const array = []; apply(push_1, array, [123]); equal(apply(get_length_1, array), 1); }); } callStaticMethod() { if (!has(this, internal23831)) throw new TypeError_1("Illegal invocation"); if (arguments.length < 0) throw new TypeError_1("Failed to execute 'callStaticMethod' on 'Test': 0 arguments required, but only " + arguments.length + " present."); return call(this, internal23839, []); } callMethod() { if (!has(this, internal23831)) throw new TypeError_1("Illegal invocation"); if (arguments.length < 0) throw new TypeError_1("Failed to execute 'callMethod' on 'Test': 0 arguments required, but only " + arguments.length + " present."); return call(this, internal23840, []); } } setMisc(23831, Symbol_1()); const object = new Test(getMisc(23831)); throw new Error_1('hello'); exports.object = object; ${"//#"} sourceURL=index.ts ${"//#"} 
sourceMappingURL=data:application/javascript;base64,eyJ2ZXJzaW9uIjozLCJzb3VyY2VzIjpbImluZGV4LnRzIl0sIm5hbWVzIjpbXSwibWFwcGluZ3MiOiI7Ozs7OztBQUNZLFNBQUEsR0FBQSxDQUFBLENBQUEsRUFBQSxDQUFBLEVBQUEsQ0FBQSxFQUFBO0FBR0EsSUFBQSxLQUFNLENBQUksVUFBQSxFQUFBLENBQUEsRUFBQSxDQUFBLENBQUEsRUFBQSxDQUFBLENBQUEsQ0FBQSxDQUFBOzs7O1NBSUwsSUFBQSxDQUFBLENBQUEsRUFBQSxDQUFBLEVBQUEsSUFBQSxFQUFBOzs7O2FBS0csQ0FBQSxDQUFBLEVBQUssQ0FBQSxFQUFBO1NBQ1IsQ0FBQSxVQUFBLEVBQUEsQ0FBQSxFQUFBLENBQUEsQ0FBQSxDQUFBLENBQUEsQ0FBQTs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7QUFiTCxLQUFBLENBQUEsS0FBQSxFQUFBLFNBQUEsRUFBQSxDQUFZLFlBQVksQ0FBQSxDQUFBLENBQUU7QUFHMUIsTUFBTSxJQUFJLENBQUE7Ozs7OztZQUVGLEtBQUEsQ0FBQSxLQUFBLEVBQUEsU0FBQSxFQUFBLENBQUEsS0FBQSxDQUFBLFNBQUEsRUFBQSxPQUFBLEVBQUEsQ0FBMEIsRUFBRSxDQUFBLENBQUEsRUFBQSxTQUFBLENBQUEsQ0FBQSxDQUFrQjtZQUM5QyxLQUFLLENBQUEsS0FBQSxDQUFBLFNBQUEsRUFBQSxPQUFBLEVBQUEsQ0FBZSxFQUFFLENBQUEsQ0FBQSxFQUFHLElBQUksQ0FBQyxDQUFDO1NBQ2xDLENBQUEsQ0FBQTs7WUFHRyxNQUFNLEtBQUssR0FBVSxFQUFFLENBQUM7WUFDeEIsS0FBQSxDQUFBLE1BQUEsRUFBQSxLQUFLLEVBQUEsQ0FBTSxHQUFHLENBQUEsQ0FBQSxDQUFFO1lBQ2hCLEtBQUssQ0FBQSxLQUFBLENBQUEsWUFBQSxFQUFDLEtBQUssQ0FBQSxFQUFTLENBQUMsQ0FBQyxDQUFDO1NBQzFCLENBQUEsQ0FBQTs7SUFURCxnQkFBZ0IsR0FBQTs7Ozs7OztJQUtoQixVQUFVLEdBQUE7Ozs7Ozs7Q0FLYjs7QUFFTSxNQUFNLE1BQU0sR0FBRyxJQUFJLElBQUksQ0FBQSxPQUFBLENBQUEsS0FBQSxDQUFBLEVBQUc7QUFFakMsTUFBTSxJQUFBLE9BQUEsQ0FBVSxPQUFPLENBQUMiLCJzb3VyY2VzQ29udGVudCI6W251bGxdfQ==`); ``` ### How often does it reproduce? Is there a required condition? Always. ### What is the expected behavior? Why is that the expected behavior? Traceback from Chrome: ``` Uncaught Error: hello at eval (index.ts:20:37) at <anonymous>:1:1 ``` ### What do you see instead? Traceback from Node: ``` Uncaught Error: hello at eval (index.ts:80:7) ``` ### Additional information _No response_
source maps
low
Critical
2,505,767,008
neovim
:checkhealth should detect if `g:loaded_python3_provider=1` was wrongly set
### Problem When I try to run a Python expression, I get that there is no python provider: ``` :lua vim.fn.py3eval("2+2") E5108: Error executing lua Vim:E319: No "python3" provider found. Run ":checkhealth provider" stack traceback: [C]: in function 'py3eval' [string ":lua"]:1: in main chunk ``` However, there is. Here is the output of `:checkhealth provider.python`: ``` provider.python: require("provider.python.health").check() Python 3 provider (optional) - pyenv: Path: /usr/share/pyenv/libexec/pyenv - pyenv: Root: /home/USERNAME/.pyenv - Using: g:python3_host_prog = "/home/USERNAME/.pyenv/versions/neovim/bin/python3" - Executable: /home/USERNAME/.pyenv/versions/neovim/bin/python3 - Python version: 3.11.7 - pynvim version: 0.5.0 - OK Latest pynvim is installed. Python virtualenv - OK no $VIRTUAL_ENV ``` ### Steps to reproduce 1. Create a virtual environment with pyenv: `pyenv virtualenv 3.12 neovim` 2. Activate it: `pyenv local neovim` 3. Install pynvim on it: `python -m pip install pynvim` 4. Set the virtual environment as python provider in init.lua: ``` -- WARNING: A python 3 provider is needed for zotcite vim.g.loaded_python3_provider = 1 -- WARNING: Path must be set to a `python3` executable file, -- NOT to a `python` executable file. -- That won't work. vim.g.python3_host_prog = MYHOME .. "/.pyenv/versions/neovim/bin/python3" ``` 5. Run `:lua vim.fn.py3eval("2+2")` ### Expected behavior neovim should not error out, as it says the provider is present ### Neovim version (nvim -v) v0.10.1 ### Vim (not Nvim) behaves the same? no ### Operating system/version Arch Linux ### Terminal name/version kitty 0.36.1 ### $TERM environment variable xterm-kitty ### Installation Arch Linux official package
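As the issue title suggests, the config in step 4 is the wrong way round: the provider autoload script skips itself whenever `g:loaded_python3_provider` is set *at all*, so assigning `1` marks the provider as "already loaded" and `py3eval()` then finds no provider. A hypothetical corrected init.lua fragment (path shown for illustration):

```
-- Enable the provider by setting only the host program; do NOT set
-- g:loaded_python3_provider = 1 -- that skips provider setup entirely.
vim.g.python3_host_prog = vim.fn.expand("~/.pyenv/versions/neovim/bin/python3")
-- vim.g.loaded_python3_provider = 0  -- set 0 only to *disable* the provider
```

`:checkhealth provider.python` still reports OK in this state because it checks the executable and pynvim, not the loaded-guard — which is presumably what this issue asks it to detect.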
bug,provider,complexity:low,checkhealth
low
Critical
2,505,789,373
PowerToys
PowerToys Run URI Handler without https /enhancement
### Description of the new feature / enhancement The URI Handler plugin for PowerToys Run is great, except that I can't use it to get to my local devices like my router, because it forces https. Would it be possible to add an option (a check box?) to allow http instead? Alternatively, when you input the URI it could give two results: one with HTTPS and a second with HTTP. ### Scenario when this would be used? When logging on to my router, local file server, etc... ### Supporting information _No response_
Needs-Triage
low
Minor
2,505,799,775
yt-dlp
[TikTok] "Unable to extract webpage video data"
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting that yt-dlp is broken on a **supported** site - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region _No response_ ### Provide a description that is worded well enough to be understood There are many TikTok videos that are not downloaded with this error showing up each time: ERROR: [TikTok] 7311698908986477856: Unable to extract webpage video data ### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell [debug] Command-line config: ['-vU', '-o', 
'%(uploader)s/%(title)s.%(ext)s', 'https://www.tiktok.com/@juliaxgri1'] [debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (pip) [debug] Python 3.12.4 (CPython AMD64 64bit) - Windows-11-10.0.22631-SP0 (OpenSSL 3.0.13 30 Jan 2024) [debug] exe versions: ffmpeg 7.0.2-essentials_build-www.gyan.dev (setts), ffprobe 7.0.2-essentials_build-www.gyan.dev [debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.2, websockets-13.0.1 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets [debug] Loaded 1830 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest Latest version: stable@2024.08.06 from yt-dlp/yt-dlp yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp) [tiktok:user] Extracting URL: https://www.tiktok.com/@juliaxgri1 [tiktok:user] juliaxgri1: Downloading user webpage [download] Downloading playlist: juliaxgri1 [tiktok:user] juliaxgri1: Downloading page 1 [tiktok:user] juliaxgri1: Downloading page 2 [tiktok:user] juliaxgri1: Downloading page 3 [tiktok:user] Playlist juliaxgri1: Downloading 35 items of 35 [download] Downloading item 1 of 35 [TikTok] Extracting URL: https://www.tiktok.com/@juliaxgri1/video/7409225096545504545 [TikTok] 7409225096545504545: Downloading webpage [debug] [TikTok] Found universal data for rehydration [debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id [debug] Default format spec: bestvideo*+bestaudio/best [info] 7409225096545504545: Downloading 1 format(s): bytevc1_1080p_723913-2 [debug] Invoking http downloader on 
"https://api16-normal-c-useast2a.tiktokv.com/aweme/v1/play/?video_id=v0f044gc0000cr9dlnvog65l2hg92ot0&line=0&is_play_url=1&file_id=2f9316a2a28d411eb214715644f31b11&item_id=7409225096545504545&signaturev3=dmlkZW9faWQ7ZmlsZV9pZDtpdGVtX2lkLmY5YWIzMGRjNDM4NDRlYmM4NWU0NTdjNzYyMmQ2MTUw&shp=9e36835a&shcp=280c9438" [debug] File locking is not supported. Proceeding without locking [download] Destination: juliaxgri1\#foryou #blowthisup #goviral #foryoupage .mp4 [download] 100% of 1.29MiB in 00:00:00 at 3.43MiB/s [download] Downloading item 2 of 35 [TikTok] Extracting URL: https://www.tiktok.com/@juliaxgri1/video/7407832624456043809 [TikTok] 7407832624456043809: Downloading webpage [debug] [TikTok] Found universal data for rehydration [debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id [debug] Default format spec: bestvideo*+bestaudio/best [info] 7407832624456043809: Downloading 1 format(s): bytevc1_720p_544082-2 [debug] Invoking http downloader on "https://api16-normal-c-useast2a.tiktokv.com/aweme/v1/play/?video_id=v0f044gc0000cr6uegfog65rded56sb0&line=0&is_play_url=1&file_id=9dc34f0e5ae4453b8089458fec948fc6&item_id=7407832624456043809&signaturev3=dmlkZW9faWQ7ZmlsZV9pZDtpdGVtX2lkLmIzYTViZWZiOTQ0NDkxOTcxODQxMmRhYThlYjJkNTk1&shp=9e36835a&shcp=280c9438" [download] juliaxgri1\#foryou #blowthisup #goviral #photodump #foryoupage #fypシ゚viral #xyzbca .mp4 has already been downloaded [download] 100% of 761.79KiB [download] Downloading item 3 of 35 [TikTok] Extracting URL: https://www.tiktok.com/@juliaxgri1/video/7406986592151194913 [TikTok] 7406986592151194913: Downloading webpage [debug] [TikTok] Found universal data for rehydration [debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id [debug] Default format spec: bestvideo*+bestaudio/best [info] 
7406986592151194913: Downloading 1 format(s): bytevc1_1080p_877871-2 [debug] Invoking http downloader on "https://api16-normal-c-useast2a.tiktokv.com/aweme/v1/play/?video_id=v0f044gc0000cr5ec2fog65og53he86g&line=0&is_play_url=1&file_id=bc3f0fd78d58469e9bbbfeb99306344b&item_id=7406986592151194913&signaturev3=dmlkZW9faWQ7ZmlsZV9pZDtpdGVtX2lkLmMzODIyZDczNDU1ZDNjYzVkNjBjMWY3Y2MzOTIzNzNk&shp=9e36835a&shcp=280c9438" [download] juliaxgri1\#foryou #blowthisup #goviral #photodump #foryoupage #gymtok #fypシ゚viral #xyzbca #dance #teamwork .mp4 has already been downloaded [download] 100% of 1.19MiB [download] Downloading item 4 of 35 [TikTok] Extracting URL: https://www.tiktok.com/@juliaxgri1/video/7405936959169776929 [TikTok] 7405936959169776929: Downloading webpage ERROR: [TikTok] 7405936959169776929: Unable to extract webpage video data; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U File "C:\Users\joshs\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\extractor\common.py", line 740, in extract ie_result = self._real_extract(url) ^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\joshs\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\extractor\tiktok.py", line 892, in _real_extract video_data, status = self._extract_web_data_and_status(url, video_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\joshs\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\extractor\tiktok.py", line 250, in _extract_web_data_and_status raise ExtractorError('Unable to extract webpage video data') ```
site-bug,triage
low
Critical
2,505,806,311
PowerToys
RegistryPreview - Create backups of modified keys before writing to registry
### Description of the new feature / enhancement Prompt the user to optionally create a backup of changed registry values prior to writing the changes, which can be used for a rollback. ### Scenario when this would be used? When a registry modification causes issues and needs to be undone. Modifying the registry can be dangerous. This would mitigate much of the risk. ### Supporting information Here is an example of a similar backup feature being used by CCleaner, which allows the user to easily revert registry modifications in case they cause issues. ![registry backup example](https://github.com/user-attachments/assets/575fb407-deba-480b-8baa-14eea67a9bc4)
Needs-Triage
low
Minor
2,505,819,425
ollama
Moondream2 needs an update
moondream2 is an amazing tiny little VLM. The owner (https://github.com/vikhyat) releases updates quite frequently. I'm not sure which version ollama currently has, but there was a new release last week (2024-08-26) which is not in ollama. https://huggingface.co/vikhyatk/moondream2
model request
low
Minor
2,505,837,600
node
performance.getEntries and performance.getEntriesByName cause call stack size to be exceeded
### Version v18.14.0 - v22.7.0 ### Platform ```text Darwin x86_64 ``` ### Subsystem perf_hooks ### What steps will reproduce the bug? Executing this script: ```javascript // Established warning threshold for number of performance entries // https://github.com/nodejs/node/blob/v22.x/lib/internal/perf/observe.js#L105 const performanceEntryBufferWarnSizeThreshold = 1e6; for (let numEntries = 1e4; numEntries <= performanceEntryBufferWarnSizeThreshold; numEntries += 1e4) { console.log(`Testing ${numEntries} entries`); for (let i = 0; i < numEntries; i++) { performance.mark(`mark-${i}`); } performance.getEntriesByName('mark-0') performance.clearMarks(); } console.log('Done'); ``` ### How often does it reproduce? Is there a required condition? 100% of the time ### What is the expected behavior? Why is that the expected behavior? The expected behaviour is that the script completes successfully given the number of performance entries at any time is below the [established warning threshold](https://github.com/nodejs/node/blob/v22.x/lib/internal/perf/observe.js#L105): ``` ... Testing 980000 entries Testing 990000 entries Testing 1000000 entries Done ``` ### What do you see instead? With the default stack size of 984kB the script consistently fails at an order of magnitude lower than the [established warning threshold](https://github.com/nodejs/node/blob/v22.x/lib/internal/perf/observe.js#L105): ``` ... 
Testing 110000 entries Testing 120000 entries Testing 130000 entries node:internal/perf/observe:517 ArrayPrototypePushApply(bufferList, markEntryBuffer); ^ RangeError: Maximum call stack size exceeded at filterBufferMapByNameAndType (node:internal/perf/observe:517:5) at Performance.getEntriesByName (node:internal/perf/performance:106:12) at Object.<anonymous> (test_file.js:12:15) at Module._compile (node:internal/modules/cjs/loader:1546:14) at Module._extensions..js (node:internal/modules/cjs/loader:1691:10) at Module.load (node:internal/modules/cjs/loader:1317:32) at Module._load (node:internal/modules/cjs/loader:1127:12) at TracingChannel.traceSync (node:diagnostics_channel:315:14) at wrapModuleLoad (node:internal/modules/cjs/loader:217:24) at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:166:5) Node.js v22.7.0 ``` ### Additional information This bug appears to have been introduced in [v18.14.0](https://nodejs.org/en/blog/release/v18.14.0#:~:text=%5B1e32520f72%5D%20%2D%20tools%3A%20add%20ArrayPrototypeConcat%20to%20the%20list%20of%20primordials%20to%20avoid%20(Antoine%20du%20Hamel)%20%2344445) when #44445 added a [usage of the ArrayPrototypePushApply primordial in filterBufferMapByNameAndType](https://github.com/nodejs/node/blob/v22.x/lib/internal/perf/observe.js#L517-L519) as part of an effort to eliminate usages of `ArrayPrototypeConcat`. `ArrayPrototypePushApply` is a variadic function, which appears to be problematic in this instance because it’s being called with each element in the buffer of performance entries - each passed as individual arguments. The size of the performance entries buffer is effectively unbounded, so at scale the underlying `Array.prototype.push` method is being called with an unbounded number of arguments resulting in massive stack frames which cause the maximum call stack size to be exceeded. 
If this is, as it appears, a general risk of using variadic primordials with unbounded argument lists, it may at least be worth a callout in the [primordials documentation on variadic functions](https://github.com/nodejs/node/blob/main/doc/contributing/primordials.md#variadic-functions).
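The hazard described above — a variadic primordial receiving every buffer element as a separate argument — can be avoided by pushing in bounded chunks. A minimal sketch (illustrative only, not the actual Node.js internals; the function name and chunk size are assumptions):

```javascript
// Pushing an unbounded array through a single variadic call, e.g.
// target.push(...source), passes one argument per element and can exceed
// the maximum call stack size for large buffers. Chunking bounds each frame.
function pushApplyChunked(target, source, chunkSize = 32768) {
  for (let i = 0; i < source.length; i += chunkSize) {
    // Each call passes at most chunkSize arguments, keeping stack usage bounded.
    target.push(...source.slice(i, i + chunkSize));
  }
  return target;
}

// A million elements, well above the size where a single spread call fails.
const big = new Array(1e6).fill(0);
const out = pushApplyChunked([], big);
console.log(out.length); // 1000000
```

The trade-off is a few extra `slice` allocations per call, in exchange for stack usage that no longer scales with the size of the performance-entry buffer.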
perf_hooks
low
Critical
2,505,852,269
godot
draw_polyline display intermittently breaks when two points jump too far apart, but draw_line does not
### Tested versions Godot Engine v4.3.stable. ### System information Windows 10 godot-cpp4.3 ### Issue description ``` // draw_line: no breaks appear even when two points jump far apart // for (const auto &array : points) // { // if (array.size() >= 2) // ensure at least 2 points // { // for (size_t i = 1; i < array.size(); ++i) // { // draw_line(array[i - 1], array[i], lineColor, line_thickness); // } // } // } // draw_polyline: the display intermittently breaks when two points jump too far apart for (const auto &array : points) { if (array.size() >= 2) // ensure at least 2 points { PackedVector2Array packed_array; for (const auto &point : array) { packed_array.push_back(point); } draw_polyline(packed_array, lineColor, line_thickness, antialiased); } } ``` mark time: 01:13:51 ![c7323af0f94d75cc31b9e37b5682db8](https://github.com/user-attachments/assets/735252fb-d23b-42e9-8b37-f09a38b75a92) ![9526a93140a96751f0d61ad2daaaa40](https://github.com/user-attachments/assets/647e9eae-b71a-4d92-b44e-c49eb6268f90) ### Steps to reproduce ``` void waveformRendering::_draw() { if (mapped_view_to_ecgrawDat == NULL) { UtilityFunctions::print("File not mapped"); return; } char *current_position = mapped_view_to_ecgrawDat + m_drawline_start_index * 36; points.clear(); points.resize(12); // pre-allocate space float y_Translate_item = 100.0; std::vector<float> lead_translates(12); for (int i = 0; i < 12; ++i) { lead_translates[i] = 100.0 + i * y_Translate_item; } float m_x_line_standard_gain = 1.0; float m_y_line_standard_gain = 1.0; float InputScalar_y_line_standard_gain = 0.100; if (current_position != NULL) { for (int i = 0; i < sampling_number; i++) { float x = i * m_x_line_standard_gain * 0.25; for (int j = 0; j < 12; j++) { unsigned char low4 = static_cast<unsigned char>(current_position[0]) & 0x0F; unsigned short low16 = (static_cast<unsigned char>(current_position[1]) << 8) | static_cast<unsigned char>(current_position[2]); short tmp1 = (low4 << 16) | low16; float y = tmp1 * m_y_line_standard_gain * (-1) * InputScalar_y_line_standard_gain + lead_translates[j]; points[j].emplace_back(x, y); current_position += 3; } if ((i + m_drawline_start_index) % 500 == 0) { Vector2 component_size = get_size(); int timestamp = (i + m_drawline_start_index) * 2; String timestamp_text = convert_from_timestamp_hh_mm_ss(timestamp); float text_width = default_font->get_string_size(timestamp_text).x; Vector2 text_position = Vector2(x - text_width / 2 - 8, component_size.y - 20); draw_string(default_font, text_position, U"" + (timestamp_text), HORIZONTAL_ALIGNMENT_CENTER, -1, 20.0, godot::Color(0, 0, 0)); draw_line(Vector2(x, component_size.y - 20), Vector2(x, component_size.y), Color(0, 0, 0), 2.0); } } } Color lineColor(0, 0, 0); float line_thickness = 1.0; bool antialiased = false; // draw_line: no breaks appear even when two points jump far apart // for (const auto &array : points) // { // if (array.size() >= 2) // ensure at least 2 points // { // for (size_t i = 1; i < array.size(); ++i) // { // draw_line(array[i - 1], array[i], lineColor, line_thickness); // } // } // } // draw_polyline: the display intermittently breaks when two points jump too far apart for (const auto &array : points) { if (array.size() >= 2) // ensure at least 2 points { PackedVector2Array packed_array; for (const auto &point : array) { packed_array.push_back(point); } draw_polyline(packed_array, lineColor, line_thickness, antialiased); } } } ``` ### Minimal reproduction project (MRP) [src.zip](https://github.com/user-attachments/files/16874157/src.zip)
topic:rendering,needs testing,topic:2d
low
Major
2,505,854,934
pytorch
InternalTorchDynamoError: 'FunctionCtx' object has no attribute 'saved_tensors'
### 🐛 Describe the bug Accessing a `FunctionCtx` but not using it hits an `InternalTorchDynamoError`. ### Error logs ``` Traceback (most recent call last): File "/tmp/smaller-dynbug.py", line 9, in <module> f() File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 469, in _fn return fn(*args, **kwargs) File "/tmp/smaller-dynbug.py", line 7, in f _ = torch.autograd.function.FunctionCtx() File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1238, in __call__ return self._torchdynamo_orig_callable( File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1039, in __call__ result = self._inner_convert( File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 514, in __call__ return _compile( File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 929, in _compile raise InternalTorchDynamoError(str(e)).with_traceback( File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 902, in _compile guarded_code = compile_inner(code, one_graph, hooks, transform) File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 653, in compile_inner return _compile_inner(code, one_graph, hooks, transform) File "/usr/local/lib/python3.10/dist-packages/torch/_utils_internal.py", line 85, in wrapper_function return StrobelightCompileTimeProfiler.profile_compile_time( File "/usr/local/lib/python3.10/dist-packages/torch/_strobelight/compile_time_profiler.py", line 129, in p rofile_compile_time return func(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 686, in _compile_inner out_code = transform_code_object(code, transform) File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object transformations(instructions, code_options) File 
"/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 208, in _fn return fn(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 622, in transform tracer.run() File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2731, in run super().run() File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 958, in run while self.step(): File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 870, in step self.dispatch_table[inst.opcode](self, inst) File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1048, in STORE_FAST self._store_fast(inst.argval) File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1044, in _store_fast loaded_vt.set_name_hint(name) File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/lazy.py", line 156, in realize_and_forward return getattr(self.realize(), name)(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/lazy.py", line 63, in realize self._cache.realize() File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/lazy.py", line 29, in realize self.vt = VariableBuilder(tx, self.source)(self.value) File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builder.py", line 333, in __call__ vt = self._wrap(value) File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/builder.py", line 699, in _wrap actual_saved_tensors = value.saved_tensors torch._dynamo.exc.InternalTorchDynamoError: 'FunctionCtx' object has no attribute 'saved_tensors' from user code: File "/tmp/smaller-dynbug.py", line 7, in torch_dynamo_resume_in_f_at_7 _ = torch.autograd.function.FunctionCtx() Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information You can suppress this exception and fall back to eager by setting: import torch._dynamo 
torch._dynamo.config.suppress_errors = True ``` ### Minified repro ```python import torch @torch.compile(backend="eager") def f(): _ = torch.autograd.function.FunctionCtx() return None f() ``` ### Versions ``` python3 -c "import torch; print(torch.__version__)" 2.5.0a0+git4bae7ae ``` cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec
module: autograd,triaged,oncall: pt2,module: dynamo,dynamo-autograd-function
low
Critical
2,505,881,440
TypeScript
Error type on spreading array with additional props
### 🔎 Search Terms array intersection spread any ### 🕗 Version & Regression Information - This is the behavior in every version I tried ### ⏯ Playground Link https://tsplay.dev/wOdoMN ### 💻 Code ```ts type withExtraProps = extractArray<{ name: string } & string[]>; // ^? any[] type extractArray<t extends readonly unknown[]> = [...{ [i in keyof t]: t[i] }]; ``` ### 🙁 Actual behavior Inferred as `any[]` due to an internal error type ### 🙂 Expected behavior Inferred as `string[]` ### Additional information about the issue @Andarist mentioned this could be related to https://github.com/microsoft/TypeScript/issues/59260
Bug,Help Wanted
low
Critical
2,505,891,485
pytorch
benchmarks/dynamo/timm_models.py starts installing binary distribution of PyTorch if torchvision is not installed
### 🐛 Describe the bug The reason is that we pip install timm_models, which has a dependency on torchvision; if there isn't a working install of torchvision, pip fetches the binary distribution, which in turn pulls in PyTorch. Maybe we should make the pip command skip dependency installation (e.g. pip's `--no-deps`) or something similar. ### Versions main cc @seemethere @malfet @pytorch/pytorch-dev-infra @chauhang @penguinwu
module: ci,triaged,oncall: pt2
low
Critical
2,505,896,806
yt-dlp
[fptplay.vn] Unable to download JSON metadata: HTTP Error 403: Forbidden
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting that yt-dlp is broken on a **supported** site - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) - [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required ### Region vietnam ### Provide a description that is worded well enough to be understood The fptplay.vn website support download section has been damaged and cannot be used. 
### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [X] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell C:\downloader>yt-dlp -vU https://fptplay.vn/xem-video/hoa-no-ve-dem-65a1147926aa11fc56a7e75e [debug] Command-line config: ['-vU', 'https://fptplay.vn/xem-video/hoa-no-ve-dem-65a1147926aa11fc56a7e75e'] [debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version nightly@2024.08.26.232811 from yt-dlp/yt-dlp-nightly-builds [41be32e78] (win_exe) [debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1k 25 Mar 2021) [debug] exe versions: ffmpeg N-116778-g7e4784e40c-20240827 (setts), ffprobe N-116778-g7e4784e40c-20240827, phantomjs 2.1.1 [debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-13.0 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets, curl_cffi [debug] Loaded 1831 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest [debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/latest/download/_update_spec [debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/download/2024.09.02.232855/SHA2-256SUMS Current version: nightly@2024.08.26.232811 from yt-dlp/yt-dlp-nightly-builds Latest version: nightly@2024.09.02.232855 from yt-dlp/yt-dlp-nightly-builds Current Build Hash: e47ada7bd73c123c93436bb53664d58a5923818de3ecd1f6d4301e5a8ae5b166 Updating to nightly@2024.09.02.232855 from yt-dlp/yt-dlp-nightly-builds ... 
[debug] Downloading yt-dlp.exe from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/download/2024.09.02.232855/yt-dlp.exe Updated yt-dlp to nightly@2024.09.02.232855 from yt-dlp/yt-dlp-nightly-builds [debug] Restarting: yt-dlp -vU https://fptplay.vn/xem-video/hoa-no-ve-dem-65a1147926aa11fc56a7e75e [debug] Command-line config: ['-vU', 'https://fptplay.vn/xem-video/hoa-no-ve-dem-65a1147926aa11fc56a7e75e'] [debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version nightly@2024.09.02.232855 from yt-dlp/yt-dlp-nightly-builds [e8e6a982a] (win_exe) [debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1k 25 Mar 2021) [debug] exe versions: ffmpeg N-116778-g7e4784e40c-20240827 (setts), ffprobe N-116778-g7e4784e40c-20240827, phantomjs 2.1.1 [debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-13.0.1 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets, curl_cffi [debug] Loaded 1832 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest Latest version: nightly@2024.09.02.232855 from yt-dlp/yt-dlp-nightly-builds yt-dlp is up to date (nightly@2024.09.02.232855 from yt-dlp/yt-dlp-nightly-builds) [debug] Using fake IP 14.161.147.157 (VN) as X-Forwarded-For [fptplay] Extracting URL: https://fptplay.vn/xem-video/hoa-no-ve-dem-65a1147926aa11fc56a7e75e [fptplay] 65a1147926aa11fc56a7e75e: Downloading webpage WARNING: [fptplay] unable to extract title; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U [fptplay] 65a1147926aa11fc56a7e75e: Downloading JSON metadata ERROR: [fptplay] 65a1147926aa11fc56a7e75e: Unable to download JSON metadata: HTTP Error 403: Forbidden (caused by <HTTPError 403: Forbidden>) File "yt_dlp\extractor\common.py", line 740, in extract File "yt_dlp\extractor\fptplay.py", line 59, in _real_extract File "yt_dlp\extractor\common.py", line 1139, in download_content File "yt_dlp\extractor\common.py", line 1099, in download_handle File "yt_dlp\extractor\common.py", line 960, in _download_webpage_handle File "yt_dlp\extractor\common.py", line 909, in _request_webpage File "yt_dlp\extractor\common.py", line 896, in _request_webpage File "yt_dlp\YoutubeDL.py", line 4165, in urlopen File "yt_dlp\networking\common.py", line 117, in send File "yt_dlp\networking\_helper.py", line 208, in wrapper File "yt_dlp\networking\common.py", line 340, in send File "yt_dlp\networking\_requests.py", line 365, in _send yt_dlp.networking.exceptions.HTTPError: HTTP Error 403: Forbidden ```
site-bug,triage
low
Critical
2,505,916,621
ollama
cuda device unavailable error results in failed memory update leading to concurrent model load when no space actually available
### What is the issue? ``` ollama run llama3.1 (is ok) ``` switch to a different terminal ``` ollama run yi-coder Error: llama runner process has terminated: CUDA error ollama run llama3.1 (is ok) ``` ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.3.9
bug,nvidia
low
Critical
2,505,921,033
deno
Bug: patch feature silently fails on invalid `exports` map
Took me a couple of hours to narrow this down. When you use `patch` to point to a local JSR dependency, it will always be ignored when the `exports` map contains an error. This is not obvious to the user, as no error message is shown, not even in the internal logs. ## Steps to reproduce 1. Clone https://github.com/marvinhagemeister/deno-patch-bug-report 2. Run `cd bar` 3. Run `deno run -A mod.ts` Output: ```sh error: JSR package not found: @deno/i-dont-exist-yet at file:///Users/marvinh/dev/test/deno-patch-pre/bar/mod.ts:1:21 ``` It fails because of an invalid entry key in the `exports` map in `foo/deno.json`: ```diff { "name": "@deno/i-dont-exist-yet", "version": "0.0.1", "exports": { ".": "./mod.ts", - "types": "./types.ts" + "./types": "./types.ts" } } ``` After this change it works as expected. We should show a big fat error when the exports mapping is wrong and abort, rather than silently falling back to fetching the package from the JSR registry. Version: deno 2.0.0-rc.0+c58a628
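A sketch of the kind of up-front validation the report asks for, based on the constraint the diff above implies (a key must be `"."` or start with `"./"`). This is an illustrative assumption, not Deno's actual implementation:

```javascript
// Hypothetical validator for a deno.json "exports" map: the report shows
// that a bare key like "types" is invalid while "./types" is accepted.
function validateExports(exportsMap) {
  const bad = Object.keys(exportsMap).filter(
    (key) => key !== "." && !key.startsWith("./"),
  );
  if (bad.length > 0) {
    // Fail loudly instead of silently ignoring the patched package.
    throw new Error(`invalid exports key(s): ${bad.join(", ")}`);
  }
  return true;
}

console.log(validateExports({ ".": "./mod.ts", "./types": "./types.ts" })); // true
```

Running this check before resolving a patched package would surface the broken `exports` map immediately, rather than letting resolution fall through to the registry.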
bug,jsr,patch
low
Critical
2,505,949,969
flutter
Migrate flutter packages artifact hub usage to prevent using non google hosted dependencies.
In https://github.com/flutter/flutter/issues/120119 we added a Google-provided repository. If artifacts are not found in this repo, we fall back to Maven or the public Google repo. We should consider using Maven profiles or limiting fallback so that if an artifact is not found in artifact hub (known internally as airlock), then the build fails. There is an availability risk to balance against the security risk of airlock having removed a known-dangerous dependency while we continue to build anyway. https://blog.gradle.org/maven-pom-profiles
P2,infra: security,team-android,triaged-android
low
Minor
2,505,954,963
react-native
(iOS) - NativeCommands fail in ref functions if batchRenderingUpdatesInEventLoop is active
### Description ### Problem Starting in version `0.74.1` and above, due to this [change](https://github.com/facebook/react-native/pull/43396/files), the `batchRenderingUpdatesInEventLoop` feature flag is turned ON. This causes NativeCommands called in `ref` functions to fail. ``` <RTNCenteredText {...props} ref={element => { if (element) { Commands.trigger(element); // <-- trigger will not be called natively if batchRenderingUpdatesInEventLoop is turned ON } }} /> ``` The corresponding NativeCommand: ``` // RTNCenteredText.mm - (void)trigger { NSLog(@"*** Fabric component trigger method called directly"); } - (void)handleCommand:(const NSString *)commandName args:(const NSArray *)args { NSString *TRIGGER = @"trigger"; if([commandName isEqual:TRIGGER]) { [self trigger]; } } ``` ### Diagnosis The NativeCommand `trigger` fails because when [synchronouslyDispatchCommandOnUIThread](https://github.com/facebook/react-native/blob/2b11131247f09ab41c053625d70f65881d20f19b/packages/react-native/React/Fabric/Mounting/RCTMountingManager.mm#L323C9-L323C47) gets called, `findComponentViewWithTag` returns `nil` because the [_registry](https://github.com/facebook/react-native/blob/2b11131247f09ab41c053625d70f65881d20f19b/packages/react-native/React/Fabric/Mounting/RCTComponentViewRegistry.mm#L83) does not contain the element. When `batchRenderingUpdatesInEventLoop` is off, the `_registry` is correctly populated with all elements on the screen, and the NativeCommand `trigger` functions correctly. If `Commands.trigger` is wrapped in a `setTimeout`, it gets called successfully. ``` <RTNCenteredText {...props} ref={element => { if (element) { setTimeout(() => { Commands.trigger(element); // <-- trigger gets called successfully }, 0); } }} /> ``` ### Steps to reproduce See the [reproducer](https://github.com/RyanCommits/RN74-issue-reproducer) provided. 1. Use `codegenNativeComponent` and `codegenNativeCommands` to create a `NativeCommand` 2. 
Call the created `NativeCommand` in a `ref` function 3. See that the `NativeCommand` does NOT get called in versions `>=0.74.1` ### React Native Version 0.74.5 ### Affected Platforms Runtime - iOS ### Areas JSI - Javascript Interface, Bridgeless - The New Initialization Flow ### Output of `npx react-native info` ```text N/A ``` ### Stacktrace or Logs ```text N/A ``` ### Reproducer https://github.com/RyanCommits/RN74-issue-reproducer ### Screenshots and Videos _No response_
Platform: iOS,Issue: Author Provided Repro,Newer Patch Available,Type: New Architecture
low
Major
2,505,972,397
pytorch
"RuntimeError: CUDA error: operation not supported" fixed by downgrading toolkit version
### 🐛 Describe the bug After #134373 I started getting the error "RuntimeError: CUDA error: operation not supported" when trying to run pytorch. Fresh build from source succeeds before #134373 and fails on/after. Error: ``` $ python test/inductor/test_triton_kernels.py -k test_triton_kernel_native ETEST SUITE EARLY TERMINATION due to torch.cuda.synchronize() failure CUDA error: operation not supported CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ====================================================================== ERROR: test_triton_kernel_native_grad_False_dynamic_False_backend_aot_eager (__main__.KernelTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/aorenste/local/pytorch/torch/testing/_internal/common_utils.py", line 2979, in wrapper method(*args, **kwargs) File "/home/aorenste/local/pytorch/torch/testing/_internal/common_utils.py", line 532, in instantiated_test test(self, **param_kwargs) File "/data/users/aorenste/miniconda3/envs/py39/lib/python3.9/unittest/mock.py", line 1336, in patched return func(*newargs, **newkeywargs) File "/data/users/aorenste/pytorch/test/inductor/test_triton_kernels.py", line 888, in test_triton_kernel_native t1 = torch.rand(5, device=GPU_TYPE, requires_grad=grad) RuntimeError: CUDA error: operation not supported CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. 
To execute this test, run the following from the base repo dir: python test/inductor/test_triton_kernels.py KernelTests.test_triton_kernel_native_grad_False_dynamic_False_backend_aot_eager This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ---------------------------------------------------------------------- Ran 1 test in 0.698s FAILED (errors=1) ``` I'm not sure what version of the toolkit I started on - I think it was 12.2. I definitely tried 12.4 and 12.6 and they also failed. Switching to 12.0 succeeded. ### Versions Collecting environment information... PyTorch version: 2.5.0a0+gitae3aa8f Is debug build: False CUDA used to build PyTorch: 12.0 ROCM used to build PyTorch: N/A OS: CentOS Stream 9 (x86_64) GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3) Clang version: Could not collect CMake version: version 3.30.2 Libc version: glibc-2.34 Python version: 3.8.19 | packaged by conda-forge | (default, Mar 20 2024, 12:47:35) [GCC 12.3.0] (64-bit runtime) Python platform: Linux-5.19.0-0_fbk12_hardened_11583_g0bef9520ca2b-x86_64-with-glibc2.10 Is CUDA available: True CUDA runtime version: 12.0.140 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA PG509-210 Nvidia driver version: 525.105.17 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: False CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 22 On-line CPU(s) list: 0-21 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz CPU family: 6 Model: 85 Thread(s) per core: 1 Core(s) per socket: 22 Socket(s): 1 Stepping: 11 BogoMIPS: 3591.73 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon $ Virtualization: VT-x Hypervisor vendor: KVM Virtualization type: full L1d cache: 704 KiB 
(22 instances) L1i cache: 704 KiB (22 instances) L2 cache: 88 MiB (22 instances) L3 cache: 16 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-21 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Vulnerable Vulnerability Retbleed: Vulnerable Vulnerability Spec store bypass: Vulnerable Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Mitigation; TSX disabled Versions of relevant libraries: [pip3] flake8==6.1.0 [pip3] flake8-bugbear==23.3.23 [pip3] flake8-comprehensions==3.15.0 [pip3] flake8-executable==2.1.3 [pip3] flake8-logging-format==0.9.0 [pip3] flake8-pyi==23.3.1 [pip3] flake8-simplify==0.19.3 [pip3] mypy==1.10.0 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.24.3 [pip3] optree==0.12.1 [pip3] pytorch-triton==3.0.0+dedb7bdf33 [pip3] torch==2.5.0a0+gitae3aa8f [pip3] torchvision==0.18.0a0 [conda] mkl 2023.2.0 h84fe81f_50496 conda-forge [conda] mkl-include 2024.2.0 ha957f24_665 conda-forge [conda] numpy 1.24.3 pypi_0 pypi [conda] optree 0.12.1 pypi_0 pypi [conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi [conda] torch 2.5.0a0+gitae3aa8f dev_0 <develop> [conda] torchvision 0.18.0a0 dev_0 <develop> cc @malfet @seemethere @ptrblck @msaroufim @ezyang @chauhang @penguinwu
module: build,module: cuda,triaged,oncall: pt2
low
Critical
2,505,977,839
go
x/telemetry: download the upload config lazily
This is a follow-up to #68946. To fix that issue for Go 1.23.1, we made the absolute narrowest fix: suppress the telemetry config download if the telemetry mode is not `"on"`. However, we should also make the download lazy, so that we download the config once a week when there is actual work to upload, rather than once a day when the upload process runs. We could also consider backporting this fix to 1.23.2, as I think it will be a safe change.
telemetry
low
Minor
2,506,010,165
kubernetes
Optional secret mounts taint pod directories on host
### What happened? 1. Create a pod with a volume mount of an optional secret 2. Create the secret 3. Trigger kubelet trying to recreate the container _but not the pod_ - For the repro case I rebooted the VM, but there's probably an easier way to do this 4. Pod now has `CreateContainerConfigError` status and doesn't come up It looks like what happens is that kubelet creates a directory where it later expects a file: ``` Sep 04 18:05:24 node kubelet[1300]: E0904 18:05:24.448350 1300 kubelet_pods.go:349] "Failed to prepare subPath for volumeMount of the container" err="error creating file /var/lib/kubelet/pods/e32797bd-b956-482c-af31-bffa78ba3ded/volume-subpaths/vol/test/0: open /var/lib/kubelet/pods/e32797bd-b956-482c-af31-bffa78ba3ded/volume-subpaths/vol/test/0: is a directory" containerName="test" volumeMountName="vol" ``` The pod's description says: ``` Error: failed to prepare subPath for volumeMount "vol" of container "test" ``` Restarting the pod (e.g. recreating it, or deleting it if it's in a deployment) solves the problem, because it's a whole new kubelet directory on the host and there is no conflict. ### What did you expect to happen? I'm not entirely sure if it's reasonable, but I would expect kubelet to detect this provisional directory and delete it first before recreating it now that the secret exists. If not, I would at least expect better documentation around this. ### How can we reproduce it (as minimally and precisely as possible)? Apply this yaml: ``` apiVersion: v1 kind: Pod metadata: name: test spec: tolerations: - key: node-role.kubernetes.io/control-plane operator: Equal effect: NoSchedule containers: - name: test image: busybox:latest command: [ "sh", "-c", "sleep infinity" ] volumeMounts: - mountPath: /run/map/foo name: vol subPath: foo volumes: - name: vol secret: secretName: secret optional: true ``` Observe the pod comes up without issue. 
You can verify the subpath directory is a directory: ``` $ ls -l /var/lib/kubelet/pods/e32797bd-b956-482c-af31-bffa78ba3ded/volume-subpaths/vol/test/ total 4 drwxr-x--- 2 root root 4096 Sep 4 17:59 0 ``` Create the secret: ``` $ kubectl create secret generic secret --from-literal=foo=hello ``` Reboot the node/VM. The pod is now in the error state. You can also delete and recreate the pod and observe the same directory above is now a file with the expected contents. ### Anything else we need to know? _No response_ ### Kubernetes version <details> ```console $ kubectl version Client Version: v1.31.0 Kustomize Version: v5.4.2 Server Version: v1.31.0 ``` </details> ### Cloud provider N/A ### OS version <details> ```console NAME="Ubuntu" VERSION="20.04.6 LTS (Focal Fossa)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 20.04.6 LTS" VERSION_ID="20.04" HOME_URL="https://www.ubuntu.com/" SUPPORT_URL="https://help.ubuntu.com/" BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" VERSION_CODENAME=focal UBUNTU_CODENAME=focal ``` </details> ### Install tools kubeadm ### Container runtime (CRI) and version (if applicable) containerd github.com/containerd/containerd 1.7.12 ### Related plugins (CNI, CSI, ...) and versions (if applicable) N/A
kind/bug,sig/storage,lifecycle/stale,needs-triage
low
Critical
2,506,029,496
bitcoin
Test Framework - test_framework.test_node.FailedToStartError: No RPC credentials
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current behaviour On execution, bitcoind fails to start and the framework logs a FailedToStartError (logs pasted in the log output section below). It looks as though some RPC config is required for this to run, but I have not found a way to specify what's required. When I hard-code a username/password in utils.py, it fails with Connection Refused. **Example Command** ` python functional/feature_rbf.py --loglevel=DEBUG --tracerpc ` **Config.ini** ` [environment] PACKAGE_NAME=Bitcoin Core PACKAGE_BUGREPORT=https://github.com/bitcoin/bitcoin/issues SRCDIR=/usr/src/bitcoin BUILDDIR=/usr/src/bitcoin EXEEXT=.exe RPCAUTH=/usr/src/bitcoin/share/rpcauth/rpcauth.py [components] ENABLE_WALLET=true USE_SQLITE=true USE_BDB=true ENABLE_CLI=true ENABLE_BITCOIN_UTIL=true ENABLE_WALLET_TOOL=true ENABLE_BITCOIND=true ENABLE_FUZZ_BINARY=true ENABLE_ZMQ=true #ENABLE_EXTERNAL_SIGNER=true #ENABLE_USDT_TRACEPOINTS=true ` **Bitcoin.conf (autogenerated)** ` regtest=1 [regtest] port=12201 rpcport=17201 rpcservertimeout=99000 rpcdoccheck=1 fallbackfee=0.0002 server=1 keypool=1 discover=0 dnsseed=0 fixedseeds=0 listenonion=0 peertimeout=999999999 printtoconsole=0 upnp=0 natpmp=0 shrinkdebugfile=0 deprecatedrpc=create_bdb unsafesqlitesync=1 connect=0 bind=127.0.0.1 ` ### Expected behaviour The test should run to completion. ### Steps to reproduce Build 27.* on Ubuntu WSL Execute the python test with the example command. 
python functional/feature_rbf.py --loglevel=DEBUG --tracerpc ### Relevant log output ` 2024-09-04T18:07:10.779000Z TestFramework (INFO): PRNG seed is: 4851958720779390906 2024-09-04T18:07:10.779000Z TestFramework (DEBUG): Setting up network thread 2024-09-04T18:07:10.780000Z TestFramework (INFO): Initializing test directory /tmp/bitcoin_func_test_u9zpt_2g 2024-09-04T18:07:10.780000Z TestFramework (DEBUG): Copy cache directory /usr/src/bitcoin/test/cache/node0 to node 0 2024-09-04T18:07:10.781000Z TestFramework (DEBUG): Copy cache directory /usr/src/bitcoin/test/cache/node0 to node 1 2024-09-04T18:07:10.783000Z TestFramework.node0 (DEBUG): ['/usr/src/bitcoin/src/bitcoind.exe', '-datadir=/tmp/bitcoin_func_test_u9zpt_2g/node0', '-logtimemicros', '-debug', '-debugexclude=libevent', '-debugexclude=leveldb', '-debugexclude=rand', '-uacomment=testnode0', '-logthreadnames', '-logsourcelocations', '-loglevel=trace', '-v2transport=0'] 2024-09-04T18:07:10.783000Z TestFramework.node0 (DEBUG): ['-maxorphantx=1000', '-limitancestorcount=50', '-limitancestorsize=101', '-limitdescendantcount=200', '-limitdescendantsize=101'] 2024-09-04T18:07:10.783000Z TestFramework.node0 (DEBUG): bitcoind started, waiting for RPC to come up 2024-09-04T18:07:10.784000Z TestFramework.node1 (DEBUG): ['/usr/src/bitcoin/src/bitcoind.exe', '-datadir=/tmp/bitcoin_func_test_u9zpt_2g/node1', '-logtimemicros', '-debug', '-debugexclude=libevent', '-debugexclude=leveldb', '-debugexclude=rand', '-uacomment=testnode1', '-logthreadnames', '-logsourcelocations', '-loglevel=trace', '-v2transport=0'] 2024-09-04T18:07:10.784000Z TestFramework.node1 (DEBUG): [] 2024-09-04T18:07:10.784000Z TestFramework.node1 (DEBUG): bitcoind started, waiting for RPC to come up 2024-09-04T18:07:10.784000Z TestFramework.utils (DEBUG): /tmp/bitcoin_func_test_u9zpt_2g/node0/bitcoin.conf 2024-09-04T18:07:10.784000Z TestFramework.node0 (DEBUG): Value Error No RPC credentials .... 
2024-09-04T18:07:14.552000Z TestFramework.utils (DEBUG): /tmp/bitcoin_func_test_u9zpt_2g/node0/bitcoin.conf 2024-09-04T18:07:14.552000Z TestFramework.node0 (DEBUG): Value Error No RPC credentials 2024-09-04T18:07:14.803000Z TestFramework.node0 (DEBUG): Stopping node 2024-09-04T18:07:14.804000Z TestFramework (ERROR): Assertion failed Traceback (most recent call last): File "/usr/src/bitcoin/test/functional/test_framework/test_framework.py", line 556, in start_nodes node.wait_for_rpc_connection() File "/usr/src/bitcoin/test/functional/test_framework/test_node.py", line 260, in wait_for_rpc_connection raise FailedToStartError(self._node_msg( **test_framework.test_node.FailedToStartError: [node 0] bitcoind exited with status 1 during initialization. Error: Specified data directory "/tmp/bitcoin_func_test_u9zpt_2g/node0" does not exist.** ************************ During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/src/bitcoin/test/functional/test_framework/test_framework.py", line 130, in main self.setup() File "/usr/src/bitcoin/test/functional/test_framework/test_framework.py", line 297, in setup self.setup_network() File "/usr/src/bitcoin/test/functional/test_framework/test_framework.py", line 391, in setup_network self.setup_nodes() File "/usr/src/bitcoin/test/functional/test_framework/test_framework.py", line 413, in setup_nodes self.start_nodes() File "/usr/src/bitcoin/test/functional/test_framework/test_framework.py", line 559, in start_nodes self.stop_nodes() File "/usr/src/bitcoin/test/functional/test_framework/test_framework.py", line 574, in stop_nodes node.stop_node(wait=wait, wait_until_stopped=False) File "/usr/src/bitcoin/test/functional/test_framework/test_node.py", line 386, in stop_node self.stop(wait=wait) **File "/usr/src/bitcoin/test/functional/test_framework/test_node.py", line 207, in __getattr__ assert self.rpc_connected and self.rpc is not None, self._node_msg("Error: no RPC 
connection") AssertionError: [node 0] Error: no RPC connection** 2024-09-04T18:07:14.806000Z TestFramework (DEBUG): Closing down network thread 2024-09-04T18:07:14.857000Z TestFramework (INFO): Stopping nodes 2024-09-04T18:07:14.857000Z TestFramework.node0 (DEBUG): Stopping node Traceback (most recent call last): File "/usr/src/bitcoin/test/functional/feature_rbf.py", line 731, in <module> ReplaceByFeeTest().main() File "/usr/src/bitcoin/test/functional/test_framework/test_framework.py", line 154, in main exit_code = self.shutdown() File "/usr/src/bitcoin/test/functional/test_framework/test_framework.py", line 313, in shutdown self.stop_nodes() File "/usr/src/bitcoin/test/functional/test_framework/test_framework.py", line 574, in stop_nodes node.stop_node(wait=wait, wait_until_stopped=False) File "/usr/src/bitcoin/test/functional/test_framework/test_node.py", line 386, in stop_node self.stop(wait=wait) File "/usr/src/bitcoin/test/functional/test_framework/test_node.py", line 207, in __getattr__ assert self.rpc_connected and self.rpc is not None, self._node_msg("Error: no RPC connection") AssertionError: [node 0] Error: no RPC connection [node 1] Cleaning up leftover process [node 0] Cleaning up leftover process ` ### How did you obtain Bitcoin Core Compiled from source ### What version of Bitcoin Core are you using? v27.1.0 ### Operating system and version Ubuntu 22.04.4 LTS ### Machine specifications Windows 11 Home 8 Core i7 9700 32 GB Ram
Windows,Linux/Unix,Tests
low
Critical
2,506,043,228
flutter
ProcessException: Process exited abnormally with exit code -2: Command: /usr/bin/arch -arm64e xcrun xcodebuild -list
As of today (9/4), this crash affected 88 unique clients. command: `flutter build ipa` ``` ProcessException: Process exited abnormally with exit code -2: Command: /usr/bin/arch -arm64e xcrun xcodebuild -list at RunResult.throwException(process.dart:122) at _DefaultProcessUtils.run(process.dart:370) at <asynchronous gap>(async) at XcodeProjectInterpreter.getInfo(xcodeproj.dart:342) at <asynchronous gap>(async) at XcodeBasedProject.projectInfo(xcode_project.dart:155) at <asynchronous gap>(async) at IosProject.buildSettingsForBuildInfo(xcode_project.dart:442) at <asynchronous gap>(async) at IosProject._parseHostAppBundleName(xcode_project.dart:416) at <asynchronous gap>(async) at IosProject.hostAppBundleName(xcode_project.dart:405) at <asynchronous gap>(async) at BuildableIOSApp.fromProject(application_package.dart:116) at <asynchronous gap>(async) at FlutterApplicationPackageFactory.getPackageForPlatform(flutter_application_package.dart:82) at <asynchronous gap>(async) at _BuildIOSSubCommand.buildableIOSApp.<anonymous closure>(build_ios.dart:651) at <asynchronous gap>(async) at _BuildIOSSubCommand.runCommand(build_ios.dart:688) at <asynchronous gap>(async) at BuildIOSArchiveCommand.runCommand(build_ios.dart:427) at <asynchronous gap>(async) at FlutterCommand.run.<anonymous closure>(flutter_command.dart:1408) at <asynchronous gap>(async) at AppContext.run.<anonymous closure>(context.dart:153) at <asynchronous gap>(async) at CommandRunner.runCommand(command_runner.dart:212) at <asynchronous gap>(async) at FlutterCommandRunner.runCommand.<anonymous closure>(flutter_command_runner.dart:420) at <asynchronous gap>(async) at AppContext.run.<anonymous closure>(context.dart:153) at <asynchronous gap>(async) at FlutterCommandRunner.runCommand(flutter_command_runner.dart:364) at <asynchronous gap>(async) at run.<anonymous closure>.<anonymous closure>(runner.dart:130) at <asynchronous gap>(async) at AppContext.run.<anonymous closure>(context.dart:153) at <asynchronous 
gap>(async) at main(executable.dart:93) at <asynchronous gap>(async) ```
c: crash,tool,P2,team-ios,triaged-ios
low
Critical
2,506,057,709
terminal
Disable Mica when window is out of focus
# Description of the new feature/enhancement PR #17858 added MicaAlt support, and it looks so good that Windows Terminal now looks like a more modern and beautiful PowerShell ISE when using the default Windows dark theme. But when Windows Terminal loses focus, it becomes ugly again. Consider adding an option to enable Mica only while Windows Terminal has focus, and otherwise disable Mica to re-allow the acrylic style. # Proposed technical implementation details (optional)
Issue-Feature,Product-Terminal,Area-Theming
low
Critical
2,506,075,366
terminal
Make acrylic style work for “Settings” too
# Description of the new feature/enhancement The Mica style applies to the whole window, so the “Settings” page looks prettier too, but the Acrylic style does not extend there. ![image](https://github.com/user-attachments/assets/c4fc4ad3-6841-4bda-a830-f4c6d3754977) ![image](https://github.com/user-attachments/assets/611b7a11-d77c-4c37-afe9-35355c6d4472) # Proposed technical implementation details (optional)
Product-Terminal,Issue-Task,Area-SettingsUI
low
Critical
2,506,084,029
go
proposal: math: add Mean, Median, Mode, Variance, and StdDev
**Description**: This proposal aims to enhance the Go standard library’s `math` package (`math/stats.go`) by introducing several essential statistical functions. The proposed functions are: - **Mean**: Calculates the average value of a data set. - **Median**: Determines the middle value when the data set is sorted. - **Mode**: Identifies the most frequently occurring value in a data set. - **Variance**: Measures the spread of the data set from the mean. - **StdDev**: Computes the standard deviation, providing a measure of data dispersion. and potentially more. **Motivation**: The inclusion of these statistical functions directly in the `math` package will offer Go developers robust tools for data analysis and statistical computation, enhancing the language's utility in scientific and financial applications. Currently, developers often rely on external libraries for these calculations, which adds dependencies and potential inconsistencies. Integrating these functions into the standard library will: - **Provide Comprehensive Statistical Analysis**: These functions will facilitate fundamental statistical measures, aiding in more thorough data analysis and better understanding of data distributions. - **Ensure Reliable Behavior**: Functions are designed to handle edge cases, such as empty slices, to maintain predictable and accurate results. - **Optimize Performance and Accuracy**: Implemented with efficient algorithms to balance performance with calculation accuracy. - **Increase Utility**: Reduces the need for third-party libraries, making statistical computation more accessible and consistent within the Go ecosystem. **Design**: The functions will be added to the existing `math` package, ensuring they are easy to use and integrate seamlessly with other mathematical operations. Detailed documentation and examples will be provided to illustrate their usage and edge case handling. 
**Examples**: - **Mean**: ```go mean := math.Mean([]float64{1, 2, 3, 4, 5}) ``` - **Median**: ```go median := math.Median([]float64{1, 3, 3, 6, 7, 8, 9}) ``` - **Mode**: ```go mode := math.Mode([]float64{1, 2, 2, 3, 4}) ``` - **Variance**: ```go variance := math.Variance([]float64{1, 2, 3, 4, 5}) ``` - **StdDev**: ```go stddev := math.StdDev([]float64{1, 2, 3, 4, 5}) ``` --- <!-- Generated by Oscar. DO NOT EDIT. {"bot":"gabyhelp","kind":"overview"} --> @gabyhelp's overview of this issue: https://github.com/golang/go/issues/69264#issuecomment-2593973713 <!-- oscar-end -->
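For discussion purposes, here is a minimal sketch of what such functions could look like. This is illustrative only, not the proposed implementation; in particular, the real API would need to settle details such as population vs. sample variance (this sketch uses population variance) and the exact behavior for empty input (this sketch returns NaN):

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// Mean returns the arithmetic mean of xs; NaN for an empty slice.
func Mean(xs []float64) float64 {
	if len(xs) == 0 {
		return math.NaN()
	}
	sum := 0.0
	for _, x := range xs {
		sum += x
	}
	return sum / float64(len(xs))
}

// Median returns the middle value of the sorted data (the average of the
// two middle values for even-length input); NaN for an empty slice.
func Median(xs []float64) float64 {
	if len(xs) == 0 {
		return math.NaN()
	}
	s := append([]float64(nil), xs...) // copy so the caller's slice is untouched
	sort.Float64s(s)
	n := len(s)
	if n%2 == 1 {
		return s[n/2]
	}
	return (s[n/2-1] + s[n/2]) / 2
}

// Variance returns the population variance; NaN for an empty slice.
func Variance(xs []float64) float64 {
	if len(xs) == 0 {
		return math.NaN()
	}
	m := Mean(xs)
	sum := 0.0
	for _, x := range xs {
		d := x - m
		sum += d * d
	}
	return sum / float64(len(xs))
}

// StdDev is the square root of the population variance.
func StdDev(xs []float64) float64 { return math.Sqrt(Variance(xs)) }

func main() {
	fmt.Println(Mean([]float64{1, 2, 3, 4, 5}))         // 3
	fmt.Println(Median([]float64{1, 3, 3, 6, 7, 8, 9})) // 6
	fmt.Println(Variance([]float64{1, 2, 3, 4, 5}))     // 2
	fmt.Println(StdDev([]float64{1, 2, 3, 4, 5}))       // ~1.414
}
```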
Proposal
high
Major
2,506,096,189
go
proposal: x/pkgsite: package tours (additional documentation type for packages)
### Proposal Details ## Background Go packages can have two distinct forms of documentation: 1. Comments anchored to identifiers as described by [_Godoc: documenting Go code_](https://go.dev/blog/godoc) and [_Go Doc Comments_](https://tip.golang.org/doc/comment). 2. Runnable (API) examples as described by [_Testable Examples in Go_](https://go.dev/blog/examples) Both of these are rendered in [godoc](https://pkg.go.dev/golang.org/x/tools/cmd/godoc) and [pkgsite](https://pkg.go.dev/golang.org/x/pkgsite). These two forms of documentation serve related but distinct purposes: * Go documentation comments are mostly in the vein of a reference manual. * Runnable (API) examples range from minimal viable demonstrations of an API to, at their fullest, complete end-to-end examples (how far they go is predicated on how complicated the API is to work with). ## Problem In this [public discussion](https://www.reddit.com/r/golang/comments/1f03h22/my_grievances_with_go_documentation/), it became clear that a type of documentation is missing for users: step-by-step tutorials or material that focuses on an end-to-end **developer journey**. This led to a thought about how we could fix this gap in a nice Go-like way. ## Proposal What if we expanded the documentation servers to enable users to create their own _API tours_ à la [_A Tour of Go_](https://go.dev/tour/welcome/1)? A package could have multiple tours to demonstrate the journeys. 
Imagine a hypothetical Go module at `github.com/matttproud/rot13` that follows a directory structure like this: ``` $ tree rot13 rot13 ├── cmd │   └── rot13 │   ├── rot13.go │   ├── rot13_test.go │   └── rot13_x_test.go ├── endtoend │   ├── endtoend_test.go │   └── testdata │   ├── input.txt │   └── output.txt ├── go.mod ├── rot13.go ├── rot13_test.go └── rot13_x_test.go ``` We could have a directory structure like this: ``` rot13 │   … earlier elements elided … ├── tours │   ├── basic.article │   └── advanced.article │   … later elements elided … ``` Then the respective documentation servers would have some sort of internal support to run tours (e.g., use what [binary tour](https://cs.opensource.google/go/x/website/+/master:tour/) uses). When viewing the respective package in the viewer, the table of contents for the package would also include a heading that lists sub-elements of available tours (tracer shot): <img width="402" alt="Bildschirmfoto 2024-09-04 um 14 12 06" src="https://github.com/user-attachments/assets/9e28b392-12ad-4598-a118-7e8ccb8a4b96"> This has the advantage of keeping the documentation for extended workflows adjacent to the code so that it can be always fresh. Perhaps this could even be extended to support the [present](https://pkg.go.dev/golang.org/x/tools/present) tool, too. ## Rejected Alternatives There is technically a third form available: the blog post (e.g., [_Go Concurrency Patterns: Context_](https://go.dev/blog/context)), but … 1. these blog posts are seldom back-linked to the Go package documentation. 2. findability is not great either with the blog posts (especially older ones). 3. sometimes the blog posts are superseded with new information/methodologies.
Documentation,Proposal
low
Major
2,506,110,979
flutter
[google_maps_flutter_ios] Failing native tests
The native tests in [GoogleMapsUITests.m](https://github.com/flutter/packages/blob/main/packages/google_maps_flutter/google_maps_flutter_ios/example/ios14/ios/RunnerUITests/GoogleMapsUITests.m) are consistently failing locally and on CI for Flutter main branch. I've also noticed this has been failing for over a week. ``` /Volumes/Work/s/w/ir/x/w/packages/packages/google_maps_flutter/google_maps_flutter_ios/example/ios14/ios/RunnerUITests/GoogleMapsUITests.m:53: error: -[GoogleMapsUITests testUserInterface] : failed - Failed due to not able to find User interface t = 63.39s Tear Down Test Case '-[GoogleMapsUITests testUserInterface]' failed (63.614 seconds). Test Suite 'GoogleMapsUITests' failed at 2024-09-03 11:59:47.863. Executed 4 tests, with 4 failures (0 unexpected) in 256.426 (256.427) seconds Test Suite 'RunnerUITests.xctest' failed at 2024-09-03 11:59:47.863. Executed 4 tests, with 4 failures (0 unexpected) in 256.426 (256.428) seconds Test Suite 'All tests' failed at 2024-09-03 11:59:47.864. Executed 4 tests, with 4 failures (0 unexpected) in 256.426 (256.429) seconds 2024-09-03 12:00:13.009 xcodebuild[50120:308772] [MT] IDETestOperationsObserverDebug: 291.207 elapsed -- Testing started completed. 
2024-09-03 12:00:13.009 xcodebuild[50120:308772] [MT] IDETestOperationsObserverDebug: 0.000 sec, +0.000 sec -- start 2024-09-03 12:00:13.009 xcodebuild[50120:308772] [MT] IDETestOperationsObserverDebug: 291.207 sec, +291.207 sec -- end Test session results, code coverage, and logs: /Users/chrome-bot/Library/Developer/Xcode/DerivedData/Runner-bynswbqjzzfjuuheqrjysabjmvyt/Logs/Test/Run-Runner-2024.09.03_11-54-50--0700.xcresult Failing tests: -[GoogleMapsUITests testMapClickPage] -[GoogleMapsUITests testMapCoordinatesPage] -[GoogleMapsUITests testMarkerDraggingCallbacks] -[GoogleMapsUITests testUserInterface] ** TEST FAILED ** Testing started [packages/google_maps_flutter/google_maps_flutter_ios completed in 6m 23s] ``` See https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8737792882371508001/+/u/Run_package_tests/native_test/stdout?format=raw for full logs. The problem seems to be that `XCUIElement` is failing to find the Flutter element in the example app: <img width="948" alt="Screenshot 2024-09-04 at 3 36 08 PM" src="https://github.com/user-attachments/assets/c2031a2c-f6f3-43fc-9448-6e32cd8eb184">
c: regression,team,platform-ios,p: maps,package,P2,c: disabled test,team-ios,triaged-ios
low
Critical
2,506,112,179
pytorch
TORCH_CPP_LOG_LEVEL cannot enable VLOG
### 🐛 Describe the bug When not using GLOG, the `VLOG` macro is implemented as a negative severity: https://github.com/pytorch/pytorch/blob/eb0fd17bc451b604b7d7ed7981c4197666f4dd6e/c10/util/logging_is_not_google_glog.h#L95-L97 The severity is then compared with `FLAGS_caffe2_log_level`: https://github.com/pytorch/pytorch/blob/eb0fd17bc451b604b7d7ed7981c4197666f4dd6e/c10/util/Logging.cpp#L415-L418 However, the flag at runtime comes from the `TORCH_CPP_LOG_LEVEL` envvar, which only supports 4 pre-defined severity levels: https://github.com/pytorch/pytorch/blob/eb0fd17bc451b604b7d7ed7981c4197666f4dd6e/c10/util/Logging.cpp#L506-L539 As a result, when GLOG is not built, there is no way to control whether to print VLOG messages or not. One potential fix could be to allow a negative `TORCH_CPP_LOG_LEVEL`. ### Versions github main
module: logging,triaged
low
Critical
2,506,124,015
vscode
Allow banning certain symbols or sort them based on usage counts
* in vscode, open `src/vs/base/common/strings.ts` * type Disposable and do Ctrl+Space * observe that the first suggestion is the Disposable from `vscode`, but that is not used almost anywhere in our code * it would be helpful to be able to ban it or hide it via some project settings ![Image](https://github.com/user-attachments/assets/6d6bfda1-d52f-4228-ac7e-e22dfdca3a55)
feature-request,typescript
low
Minor
2,506,162,512
vscode
Diff editor stage / revert gutter buttons are not visible when using high contrast themes
Does this issue occur when all extensions are disabled?: Yes - VS Code Version: 1.92.2 - OS Version: MacOS 14.5 Steps to Reproduce: 1. Enable a high contrast theme (tested with default Dark / Light High Contrast) 2. Make a change and enter the diff view 3. Observe that the stage / revert gutter buttons have very little contrast Light: <img width="69" alt="image" src="https://github.com/user-attachments/assets/31deca74-7f04-4bdb-aa75-20b893554986"> Dark: <img width="86" alt="image" src="https://github.com/user-attachments/assets/a77bbed0-e041-4cf7-bc8d-f8748288fac6"> Context: I'm color-blind
bug,diff-editor
low
Critical
2,506,245,091
TypeScript
Add support for NixOS for easily onboarding developers onto the project.
### Acknowledgement - [X] I acknowledge that issues using this template may be closed without further explanation at the maintainer's discretion. ### Comment Add support for NixOS to make onboarding developers onto the project easier.
Suggestion,Awaiting More Feedback
low
Minor
2,506,270,704
TypeScript
Clarify logging for "No Project" when hitting memory limit
### Acknowledgement - [X] I acknowledge that issues using this template may be closed without further explanation at the maintainer's discretion. ### Comment TypeScript fails with "No Project" and no other error in its log when hitting the default memory limit. Improve this by providing better logging output.
Needs Investigation
low
Critical
2,506,304,208
deno
Insecure localStorage access
How is Deno "secure by default" if it allows access to localStorage and, moreover, preserves data across separate `deno run` executions? And I can't seem to find a way to restrict/remove localStorage access; it seems there should be a way to control it. This was very unexpected behavior: without any `--allow-read`, Deno allowed localStorage writes, which is really, technically, a disk file write. Version: Deno 1.46.1
question
medium
Major
2,506,340,631
rust
Tracking issue for RFC 3637: Guard patterns
This is a tracking issue for XXX. The feature gate for the issue is `#![feature(guard_patterns)]`. ### About tracking issues Tracking issues are used to record the overall progress of implementation. They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions. A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature. Instead, open a dedicated issue for the specific matter and add the relevant feature gate label. ### Steps - [x] Accept an RFC. - https://github.com/rust-lang/rfcs/pull/3637 - [ ] Implement in nightly. - [ ] Add documentation to the [dev guide][]. - See the [instructions][doc-guide]. - [ ] Add documentation to the [reference][]. - See the [instructions][reference-instructions]. - [ ] Add formatting for new syntax to the [style guide][]. - See the [nightly style procedure][]. - [ ] Stabilize. - See the [instructions][stabilization-instructions]. [dev guide]: https://github.com/rust-lang/rustc-dev-guide [doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs [edition guide]: https://github.com/rust-lang/edition-guide [nightly style procedure]: https://github.com/rust-lang/style-team/blob/master/nightly-style-procedure.md [reference]: https://github.com/rust-lang/reference [reference-instructions]: https://github.com/rust-lang/reference/blob/master/CONTRIBUTING.md [stabilization-instructions]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr [style guide]: https://github.com/rust-lang/rust/tree/master/src/doc/style-guide ### Unresolved Questions - [ ] Should we allow mismatched bindings when possible in pattern disjunctions? - [ ] How should we refer to "guard patterns"? ```[tasklist] ### Implementation history - [ ] https://github.com/rust-lang/rust/pull/129996 - [ ] https://github.com/rust-lang/rust/pull/133424 - [ ] https://github.com/rust-lang/rust/pull/134989 ``` cc @max-niederman
B-RFC-approved,T-lang,C-tracking-issue,F-guard_patterns
low
Critical
2,506,341,115
godot
2D selecting ignores `z_index`
### Tested versions 4.3 4.2.2 ### System information Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 (NVIDIA; 31.0.15.4633) - Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (8 Threads) ### Issue description If the node that is lower z-index-wise is lower in the scene tree, it will be selected through the node drawn on top of it: https://github.com/user-attachments/assets/8e5ee496-b003-4588-b08f-1547a3bc247d ### Steps to reproduce 1. Add 2 Sprite2Ds 2. Make them overlay 3. Change z-index so that the sprite further in the tree appears below (contrary to the default drawing order) 4. Click the intersection point ### Minimal reproduction project (MRP) N/A
bug,topic:editor,usability,topic:2d
low
Major
2,506,345,520
pytorch
Undocumented fast pass behavior in nn.TransformerEncoderLayer causes failures in test_transformerencoderlayer_cuda_float32 (__main__.TestNNDeviceTypeCUDA)
### 🐛 Describe the bug This issue is related to https://github.com/pytorch/pytorch/issues/134687 Unit test test_transformerencoderlayer_cuda_float32 (in `test_nn.py`) performs the following tests: https://github.com/pytorch/pytorch/blob/a8611da86f42a442c3ab891a038af55440ccd8d0/test/test_nn.py#L12426-L12450 This test expects an all-NaN output tensor if the `TransformerEncoderLayer` is not using the fast path. However, the determining factor for all-NaN output is whether `SDPBackend.FLASH_ATTENTION` or `SDPBackend.EFFICIENT_ATTENTION` is used by the underlying SDPA operator within `TransformerEncoderLayer`. Moreover, the non-fast path of `TransformerEncoderLayer` does not explicitly disable the FA/MEFF attention: https://github.com/pytorch/pytorch/blob/a8611da86f42a442c3ab891a038af55440ccd8d0/torch/nn/modules/transformer.py#L896-L927 Therefore, the actual backend selection logic depends on three parts of the codebase: 1. The fast path/non-fast path logic in `TransformerEncoderLayer` 2. [bool can_use_flash_attention(sdp_params const& params, bool debug)](https://github.com/pytorch/pytorch/blob/a8611da86f42a442c3ab891a038af55440ccd8d0/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp#L583) 3. [bool can_use_mem_efficient_attention(sdp_params const& params, bool debug)](https://github.com/pytorch/pytorch/blob/a8611da86f42a442c3ab891a038af55440ccd8d0/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp#L634) 2 and 3 are undocumented, and FA may be used even when `TransformerEncoderLayer` should use the non-fast path according to its documentation. 
## Suggested solution When calling `TransformerEncoderLayer._sa_block`, explicitly disables FA/MEFF backend by enclosing `x = self.self_attn` with `with sdpa_kernel([SDPBackend.MATH]):` ### Versions ``` PyTorch version: 2.5.0a0+git735162e Is debug build: False CUDA used to build PyTorch: N/A ROCM used to build PyTorch: 6.2.41133-dd7f95766 OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version: Could not collect CMake version: version 3.26.4 Libc version: glibc-2.31 Python version: 3.9.19 (main, May 6 2024, 19:43:03) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: AMD Instinct MI210 (gfx90a:sramecc+:xnack-) Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: 6.2.41133 MIOpen runtime version: 3.2.0 Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 43 bits physical, 48 bits virtual CPU(s): 128 On-line CPU(s) list: 0-127 Thread(s) per core: 2 Core(s) per socket: 32 Socket(s): 2 NUMA node(s): 2 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC 7542 32-Core Processor Stepping: 0 Frequency boost: enabled CPU MHz: 1500.000 CPU max MHz: 2900.0000 CPU min MHz: 1500.0000 BogoMIPS: 5800.24 Virtualization: AMD-V L1d cache: 2 MiB L1i cache: 2 MiB L2 cache: 32 MiB L3 cache: 256 MiB NUMA node0 CPU(s): 0-31,64-95 NUMA node1 CPU(s): 32-63,96-127 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection Vulnerability Spec rstack overflow: Mitigation; Safe RET 
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_ nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_loc al clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev se v_es Versions of relevant libraries: [pip3] flake8==6.1.0 [pip3] flake8-bugbear==23.3.23 [pip3] flake8-comprehensions==3.15.0 [pip3] flake8-executable==2.1.3 [pip3] flake8-logging-format==0.9.0 [pip3] flake8-pyi==23.3.1 [pip3] flake8-simplify==0.19.3 [pip3] mypy==1.10.0 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.0 [pip3] optree==0.12.1 [pip3] pytorch-triton-rocm==3.0.0 [pip3] torch==2.5.0a0+git2aca0ae [pip3] torchaudio==2.4.0+rocm6.1 [pip3] torchvision==0.19.0a0 [pip3] triton==3.0.0 [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-include 2021.4.0 h06a4308_640 [conda] numpy 1.20.3 pypi_0 pypi [conda] optree 0.11.0 pypi_0 pypi [conda] torch 2.4.0a0+git6b37b45 
pypi_0 pypi [conda] torchvision 0.19.0a0+69e03db pypi_0 pypi [conda] triton 3.0.0 pypi_0 pypi ``` cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @bhosmer @cpuhrsch @erichan1 @drisspg
module: nn,triaged
low
Critical
2,506,351,572
go
runtime: consider removing osyield call from lock2
The current `runtime.lock2` implementation does a bit of spinning to try to acquire the `runtime.mutex` before sleeping. If a thread has gone to sleep within `lock2` (via a syscall), it will eventually require another thread (in `unlock2`) to do another syscall to wake it. The bit of spinning allows us to avoid those syscalls in some cases. Slowing down a bit and trying again, at a high level, seems good; maybe the previous holder has exited the critical section. The first phase of spinning involves a `runtime.procyield` call, which asks the processor to pause for a moment (on the scale of tens or hundreds of nanoseconds). There's some uncertainty about what that duration is and what it should be (described in part in #69232) but the idea of using this mechanism to slow down for a bit, again at a high level, seems good. The second phase of spinning involves a `runtime.osyield` call. That's a syscall, implemented on Linux as a call to `sched_yield(2)`. The discussion in [CL 473656](https://go.dev/cl/473656) links to https://www.realworldtech.com/forum/?threadid=189711&curpostid=189752 , which gives a perspective on why that's not a universally good idea. - It's a syscall, so it doesn't help with avoiding syscalls. (Though a single syscall here has a _chance_ of avoiding a _pair_ of syscalls, one to sleep indefinitely and one to wake). - The semantics aren't very well defined, and—very loosely speaking—don't align with our goals. We don't mean for the OS scheduler to drag a thread over from another NUMA node just because we said "we can't run at this instant". Maybe we should delete that part of `lock2`. Or maybe we should replace it with an explicit `nanosleep(2)` call of some tiny time interval. I don't see any urgency here. Mostly I'd like a tracking issue to reference in lock2's comments. CC @golang/runtime
Performance,NeedsInvestigation,compiler/runtime
low
Major
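The spin-then-yield structure described in the lock2 report above can be sketched in user-space Go. This is only an illustrative analogue, not the real `runtime.lock2` (which runs below the scheduler and finally sleeps on a futex/semaphore); here a cheap busy loop stands in for `procyield` and `runtime.Gosched()` stands in for the `osyield` phase the issue considers removing.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
)

// spinLock mimics the shape of lock2's acquisition path: an active-spin
// phase (stand-in for procyield), then a yield phase (stand-in for
// osyield). The real runtime would then sleep via a syscall, which is
// elided here -- we just loop again.
type spinLock struct{ state atomic.Int32 }

func (l *spinLock) Lock() {
	const activeSpin, passiveSpin = 4, 1
	for {
		for i := 0; i < activeSpin; i++ {
			if l.state.CompareAndSwap(0, 1) {
				return
			}
			for p := 0; p < 30; p++ { // cheap pause, standing in for procyield
			}
		}
		for i := 0; i < passiveSpin; i++ {
			if l.state.CompareAndSwap(0, 1) {
				return
			}
			runtime.Gosched() // stand-in for the osyield / sched_yield(2) phase
		}
	}
}

func (l *spinLock) Unlock() { l.state.Store(0) }

func main() {
	var l spinLock
	var wg sync.WaitGroup
	counter := 0
	for g := 0; g < 8; g++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 1000; i++ {
				l.Lock()
				counter++
				l.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println(counter) // 8000
}
```

Deleting the `Gosched` phase here corresponds to the change the issue contemplates: acquisition still succeeds; only the back-off behavior between retries differs.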
2,506,399,820
pytorch
TestSDPAPrivateUse1Only tests cause irreversible changes to PyTorch internal state
### 🐛 Describe the bug Related Issue: https://github.com/pytorch/pytorch/issues/134602 To reproduce: ``` # PYTORCH_TEST_WITH_ROCM=1 is needed on the ROCm platform PYTORCH_TESTING_DEVICE_ONLY_FOR="cuda" python test/test_transformers.py -v --use-pytest ``` The `--use-pytest` option will run `TestSDPAPrivateUse1Only.*` at the beginning of the whole `test/test_transformers.py` test suite, and causes massive failures due to synchronization problems because `TestSDPAPrivateUse1Only` will irreversibly change the internal state of PyTorch by registering the `privateuse1` backend: https://github.com/pytorch/pytorch/blob/a8611da86f42a442c3ab891a038af55440ccd8d0/test/test_transformers.py#L3766-L3780 After registering the privateuse1 backend, `at::getAccelerator` will always return `kPrivateUse1`: https://github.com/pytorch/pytorch/blob/a8611da86f42a442c3ab891a038af55440ccd8d0/aten/src/ATen/DeviceAccelerator.cpp#L17-L22 Normally this is not a problem, since for common applications, `privateuse1` and other devices are mutually exclusive. However, this causes synchronization problems in PyTorch's test suites if `PrivateUse1` is tested before any other cuda/rocm/other accelerator tests. Suppose we are running some **_CUDA backward tests_** after a `privateuse1`-related test. The registered `privateuse1` backend will be the default device during autograd: https://github.com/pytorch/pytorch/blob/fb1c58089290f982be1cd06e95b690308fb8af78/torch/csrc/autograd/function.h#L242-L253 And `opt_device_type` certainly will not match any of the inputs' devices because they are all CUDA. Consequently the function `Node::stream()` will always return `std::nullopt`, disregarding any stream supplied by `input_metadata_`. ## Suggested Solution 1 Add the prefix `z` to privateuse1-related tests to ensure they always run last (even if `--use-pytest` is supplied). ## Suggested Solution 2 Add a context manager to unregister the `privateuse1` backend when the test is complete. 
Note there is no C++ API to unregister the `privateuse1` backend at the moment. ### Versions ``` PyTorch version: 2.5.0a0+git735162e Is debug build: False CUDA used to build PyTorch: N/A ROCM used to build PyTorch: 6.2.41133-dd7f95766 OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version: Could not collect CMake version: version 3.26.4 Libc version: glibc-2.31 Python version: 3.9.19 (main, May 6 2024, 19:43:03) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: AMD Instinct MI210 (gfx90a:sramecc+:xnack-) Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: 6.2.41133 MIOpen runtime version: 3.2.0 Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 43 bits physical, 48 bits virtual CPU(s): 128 On-line CPU(s) list: 0-127 Thread(s) per core: 2 Core(s) per socket: 32 Socket(s): 2 NUMA node(s): 2 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC 7542 32-Core Processor Stepping: 0 Frequency boost: enabled CPU MHz: 1500.000 CPU max MHz: 2900.0000 CPU min MHz: 1500.0000 BogoMIPS: 5800.24 Virtualization: AMD-V L1d cache: 2 MiB L1i cache: 2 MiB L2 cache: 32 MiB L3 cache: 256 MiB NUMA node0 CPU(s): 0-31,64-95 NUMA node1 CPU(s): 32-63,96-127 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection Vulnerability Spec rstack overflow: Mitigation; Safe RET Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: 
Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_ nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_loc al clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev se v_es Versions of relevant libraries: [pip3] flake8==6.1.0 [pip3] flake8-bugbear==23.3.23 [pip3] flake8-comprehensions==3.15.0 [pip3] flake8-executable==2.1.3 [pip3] flake8-logging-format==0.9.0 [pip3] flake8-pyi==23.3.1 [pip3] flake8-simplify==0.19.3 [pip3] mypy==1.10.0 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.0 [pip3] optree==0.12.1 [pip3] pytorch-triton-rocm==3.0.0 [pip3] torch==2.5.0a0+git2aca0ae [pip3] torchaudio==2.4.0+rocm6.1 [pip3] torchvision==0.19.0a0 [pip3] triton==3.0.0 [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-include 2021.4.0 h06a4308_640 [conda] numpy 1.20.3 pypi_0 pypi [conda] optree 0.11.0 pypi_0 pypi [conda] torch 2.4.0a0+git6b37b45 pypi_0 pypi [conda] torchvision 0.19.0a0+69e03db pypi_0 pypi [conda] triton 3.0.0 pypi_0 pypi ``` cc @mruberry 
@ZainRizvi @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens
module: tests,triaged,module: PrivateUse1
low
Critical
2,506,402,453
vscode
Enable inlay hints in chat code blocks
From [microsoft/vscode-dotnettools#1361](https://github.com/microsoft/vscode-dotnettools/issues/1361)
feature-request,panel-chat
low
Minor
2,506,403,302
vscode
Enable code lenses in chat code blocks
From https://github.com/microsoft/vscode-dotnettools/issues/1361
feature-request,panel-chat
low
Minor
2,506,405,927
vscode
Enable folding / folding ranges in chat code blocks
From https://github.com/microsoft/vscode-dotnettools/issues/1361
feature-request,panel-chat
low
Minor
2,506,406,652
vscode
Expose a Way to Determine if a User Changes Their Text Document Language Using the Language Mode Option in the VS Code UI
<!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- Please search existing issues to avoid creating duplicates. --> <!-- Describe the feature you'd like. --> We are currently working on the C++ Extension on adding behavior that will correctly identify the textDocumentLanguage of an extensionless file that is opened using a C++ extension feature. We are running into an issue where when we use the vscode.languages.setTextDocumentLanguage() when we first open a file, we overwrite any language mode that is selected through the UI in the bottom right. ![image](https://github.com/user-attachments/assets/d36a5a97-1e0a-4cbf-bef2-1fa6afcffdc8) We want to be able to detect whether the textDocument for a file was set using this option so we do not overwrite it. Is there an existing way to identify this? If not, would it be possible to add a flag to determine if this is how a textDocument language was set? This is in reference to https://github.com/microsoft/vscode-cpptools/issues/4077
feature-request,languages-guessing
low
Minor
2,506,407,209
vscode
Enable sticky scroll in chat code blocks
From https://github.com/microsoft/vscode-dotnettools/issues/1361
feature-request,editor-sticky-scroll,panel-chat
low
Minor
2,506,411,578
flutter
Flutter iOS app opened in Xcode shows a confusing "Cannot find module 'Flutter'" error until the app is built the first time
Filing this issue after [debugging a broken dev workflow](https://discord.com/channels/608014603317936148/1280999656154992775) with @jonahwilliams and @loic-sharma. Basically, I had done the following: ```sh flutter create . --platforms ios open ios/Runner/Runner.xcworkspace ``` ... but kept getting "Cannot find module 'Flutter'" and "Cannot find 'Flutter/Flutter.h'" type errors. A casual look says this is a somewhat common issue: - [`is:issue "Flutter.h"`](https://github.com/flutter/flutter/issues?q=is%3Aissue+%22Flutter.h%22) The issue is a _full_ build is required (`flutter build ios`), which is not the same as other platforms.
platform-ios,tool,a: first hour,a: annoyance,P3,team-ios,triaged-ios
low
Critical
2,506,416,292
PowerToys
[Workspaces] Taskbar Icon Order Not Preserved
### Microsoft PowerToys version 0.84.0 ### Installation method GitHub ### Running as admin Yes ### Area(s) with issue? Workspaces ### Steps to reproduce Open the apps you want for your workspace so you have those icons on the taskbar. Save your workspace layout. When you load that layout the taskbar icon order is not preserved. ### ✔️ Expected Behavior Taskbar icons should maintain their original position when loading a saved workspace. Taskbar icons should maintain their position when rearranged by user and switching between virtual desktops. ### ❌ Actual Behavior Taskbar icon order is not preserved. Loading a saved workspace results in icons for fastest opening programs appearing first on the taskbar. Taskbar icon order is also not preserved when switching between virtual desktops if those icons were rearranged. ### Other Software _No response_
Issue-Bug,Needs-Triage,Product-Workspaces
low
Minor
2,506,477,685
godot
IBusHangul keyboard unusable
### Tested versions 4.3.1.rc ### System information Ubuntu 22.04.4 LTS 64-bit ### Issue description The IBusHangul keyboard has a lot of weird behavior in Godot when typing in Korean mode. * Trying to type a multiple block word just overwrites a single syllabic block. * Unfinished syllabic blocks can't be deleted sometimes. * (Multiline) Moving to another line via arrow keys while modifying a syllabic block duplicates it. This seems to be graphical only. * (Multiline) Moving to another location via mouse while modifying a syllabic block will block typing in English until more Korean is typed. I haven't been able to find similar behavior in other software. ### Steps to reproduce N/A ### Minimal reproduction project (MRP) N/A
platform:linuxbsd,topic:thirdparty,needs testing
low
Minor
2,506,480,878
go
proposal: encoding/xml: Add EmptyElement token type to support self-closing elements.
### Proposal Details ## Background The current implementation has no concept of self-closing elements, and if an empty value (like an empty string) is provided then a start element followed by an end element is emitted, like `<foo></foo>`. Unfortunately there are XML implementations out there that depend on self-closing elements. These are usually closed source and probably doing some hand parsing of certain elements. These implementations _are broken_ but it is what we have. For my personal issue, Juniper Networks has an XML API to their routers. This is all closed source and they expect certain elements to denote configuration targets such as the running config or the startup config. These probably should not be elements at all but they only accept `<running/>` or `<startup/>` as valid config targets. Sending `<running></running>` results in a parsing error using their API. There may be other reasons to support self-closing elements but they are unknown to me. There have been a number of requests to support self-closing elements in the `encoding/xml` package. * https://github.com/golang/go/issues/6896 * https://github.com/golang/go/issues/59710 * https://github.com/golang/go/issues/21399 And on external repos * https://github.com/google/osv-scanner/issues/1184 * https://github.com/scrapli/scrapligo/issues/73 ## Proposal This expands on some of the discussions in #21399 by @SamWhited, @Merovius and others that there needed to be a way to encode with `Encoder.EncodeToken()` (at the very least). This would add a new token type called `EmptyElement` (not in love with the name and we can bikeshed it if needed) which would be essentially a clone of `StartElement`: ```go // EmptyElement represents a self-closing element (i.e. <my-element/>). This // element type is only used during encoding and is never emitted when decoding. 
type EmptyElement struct { Name Name Attr []Attr } ``` This element would be encoded by `Encoder.EncodeToken()` and would emit a self-closing tag for the given Name and optional attributes. ```go enc := xml.NewEncoder(w) _ = enc.EncodeToken(xml.EmptyElement{Name: xml.Name{Local: "foo"}}) ``` would emit `<foo/>` to the writer. Unlike `StartElement`, no error would be given if an `EndElement` is not found, as it is already self-closing. This new token type would only be used for encoding. Decoding would continue to use the same logic and would only produce `StartElement` and `EndElement` types even if a self-closing tag is found, to be fully backwards compatible. Documentation will state this. Given this new type, custom types using `xml.Marshaler` can be created to emit self-closing elements. In the future a struct tag to make this easier for encoding structs could be investigated, but given the scope of this issue, probably isn't warranted. ## Alternatives ### Automatically convert "empty" elements to self-closing (i.e: `<foo></foo>` -> `<foo/>`) This was my original suggestion back in 2013 in #6896. As pointed out this would be a breaking change for the encoding itself and it is not feasible with the existing package. ### `allowempty` struct tag or similar as proposed in #21399 This was implemented in https://go-review.googlesource.com/c/go/+/59830 and inspired this change. However there is no way to use the lower-level `Encoder.EncodeElement()` or `Encoder.EncodeToken()` methods with it. This proposal would lay the groundwork for basic support and then something like a struct tag could be added/layered on top later if needed. ### Extend `StartElement` struct `StartElement` could be extended to have a `SelfClose bool` field to make it self-closing. i.e: ```go type StartElement struct { Name Name Attr []Attr SelfClose bool // ADDED } ``` This would allow for `encoder.EncodeToken()` to produce the right tag. 
However this may be confusing when used with `Encoder.EncodeElement()`, which specifically requires a `StartElement` token to be passed in along with a value. Today if a `nil` value is passed to `Encoder.EncodeElement()` then no xml element is produced. One could imagine revising this so that if a `nil` value is passed and `SelfClose` is set then you would produce a self-closing tag. You would also want to emit an error if a non-nil (or non-empty?) value is passed and `SelfClose` is set to true. However I believe this makes for a more confusing API given the current `EncodeElement()` function signature as well as the `MarshalXML()` method on the `Marshaler` interface. ### A RawXML token as proposed in #26756 Having a `RawXML` token as proposed in #26756 could allow users to compose and emit their own self-closing tags. This could replace and/or augment this proposal and could be acceptable. ### Wait for a revised `encoding/xml` sweep and possibly an overhaul (`encoding/xml/v2`?) There are a number of other shortcomings in the xml package. It may be worth it to hold off on any changes to the existing API and instead move to a new package. This package could replace the one in the standard library, or perhaps even be created outside of the stdlib with the existing `encoding/xml` being placed on an official freeze similar to packages like `net/smtp`, as the demand for XML isn't as big as `JSON` and other encodings.
Proposal
low
Critical
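For context, the status quo the EmptyElement proposal is working around is easy to reproduce with today's `encoding/xml`: an empty element can only be written as a start/end token pair, never as `<foo/>`. A minimal, runnable demonstration (the proposed `EmptyElement` type does not exist yet, so only the current behavior is shown):

```go
package main

import (
	"bytes"
	"encoding/xml"
	"fmt"
)

// encodeEmpty shows the only option available today: a StartElement
// immediately followed by an EndElement, which serializes as <foo></foo>.
func encodeEmpty() string {
	var buf bytes.Buffer
	enc := xml.NewEncoder(&buf)
	name := xml.Name{Local: "foo"}
	_ = enc.EncodeToken(xml.StartElement{Name: name})
	_ = enc.EncodeToken(xml.EndElement{Name: name})
	_ = enc.Flush() // EncodeToken buffers; Flush pushes bytes to the writer
	return buf.String()
}

func main() {
	fmt.Println(encodeEmpty()) // <foo></foo>
}
```

Servers that insist on `<foo/>` (like the Juniper API described above) reject this output, which is exactly the gap the proposed token type would close.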
2,506,483,322
terminal
UTF-8 decoding problem when a codepoint straddles an i/o boundary
### Windows Terminal version 1.20.11781.0 ### Windows build number 10.0.19045.4780 ### Other Software _No response_ ### Steps to reproduce 1. Create a text file with 180 instances of the Unicode character U+20B0 (German Penny Sign) on one line and save it as UTF-8 (with or without BOM). Call it foo.txt. (I've attached a sample.) 2. Open a Command Prompt profile in Terminal. 3. `chcp 65001` 4. `type foo.txt` Note that, near the end of the output, there are a couple of Unicode replacement characters. What's happening is that `type` sends the text to the terminal in 512-byte blocks. The UTF-8 encoding of U+20B0 takes 3 bytes. Since 512 isn't a multiple of 3, the 171st German Penny Sign is split across the boundary of the first and second write operations issued by `type`. The UTF-8 decoder is resetting its state with each write. But it's not just UTF-8 decoding. If one write ends with a complete character, and the next write begins with a combining character, they either (1) won't be composed or (2) they will be composed but there will be an empty cell immediately after it. These problems occur less frequently with applications that issue larger writes, but they do still happen. They can even happen with applications that normally flush the output on line boundaries if a single line grows so long that an intermediate flush occurs. [foo.txt](https://github.com/user-attachments/files/16882048/foo.txt) ### Expected Behavior I expected UTF-8 decoding and composition of combining characters to resync if a sequence of bytes that represents a single codepoint or grapheme cluster happens to fall on the boundary between two consecutive writes. ### Actual Behavior Note the replacement characters in the output. ![image](https://github.com/user-attachments/assets/9ba7193a-2fda-463a-9850-551480295b44)
Product-Cmd.exe,Issue-Bug,Tracking-External
low
Major
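The resync behavior the terminal report asks for — carrying an incomplete trailing byte sequence over to the next write instead of resetting decoder state — can be sketched with Go's standard `unicode/utf8` package. The chunking below mirrors the `type foo.txt` scenario, just with a smaller write size so one U+20B0 straddles the boundary:

```go
package main

import (
	"fmt"
	"strings"
	"unicode/utf8"
)

// decodeChunks decodes a byte stream delivered in separate writes,
// carrying any incomplete trailing UTF-8 sequence over to the next
// chunk instead of emitting a replacement character for it.
func decodeChunks(chunks [][]byte) string {
	var out strings.Builder
	var carry []byte
	for _, c := range chunks {
		buf := append(carry, c...)
		carry = nil
		for len(buf) > 0 {
			r, size := utf8.DecodeRune(buf)
			if r == utf8.RuneError && !utf8.FullRune(buf) {
				carry = buf // incomplete tail: wait for the next write
				break
			}
			out.WriteRune(r)
			buf = buf[size:]
		}
	}
	return out.String()
}

func main() {
	// U+20B0 encodes as 3 bytes; a 4-byte first write splits the 2nd rune.
	data := []byte(strings.Repeat("\u20b0", 3))
	chunks := [][]byte{data[:4], data[4:]}
	fmt.Println(decodeChunks(chunks) == "\u20b0\u20b0\u20b0") // true
}
```

Note `utf8.FullRune` treats an *invalid* leading byte as a full (width-1 error) rune, so garbage input still makes progress rather than being carried forever — the same distinction a terminal decoder needs between "incomplete so far" and "truly malformed".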
2,506,543,549
neovim
permission denied by AppArmor with `MANPAGER="nvim -" man man` (socketpair EACCES)
### Problem When using neovim as a pager, nvim crashes when viewing man pages from `man`. ```sh $ MANPAGER="nvim --clean --noplugin -" man man E903: Process failed to start: permission denied: "/usr/local/bin/nvim"Failed to start Nvim server! man: command exited with status 1: sed -e '/^[[:space:]]*$/{ N; /^[[:space:]]*\n[[:space:]]*$/D; }' | LESS=-ix8RmPm Manual page man(1) ?ltline %lt?L/%L.:byte %bB?s/%s..?e (END):?pB %pB\%.. (press h for help or q to quit)$PM Manual page man(1) ?ltline %lt?L/%L.:byte %bB?s/%s..?e (END):?pB %pB\%.. (press h for help or q to quit)$ MAN_PN=man(1) nvim --clean --noplugin - ``` However, it works when piping into neovim ```sh # this works perfect! $ MANPAGER=cat man man | nvim --clean --noplugin - ``` ```sh $ man --version man 2.11.2 $ build/bin/nvim -V1 -v NVIM v0.11.0-dev-714+g220b8aa6f Build type: Debug LuaJIT 2.1.1724512491 Compilation: /usr/bin/cc -g -Wall -Wextra -pedantic -Wno-unused-parameter -Wstrict-prototypes -std=gnu99 -Wshadow -Wconversion -Wvla -Wdouble-promotion -Wmissing-noreturn -Wmissing-format-attribute -Wmissing-prototypes -fsigned-char -fstack-protector-strong -Wno-conversion -fno-common -Wimplicit-fallthrough -fdiagnostics-color=always -DNVIM_LOG_DEBUG -DUNIT_TESTING -DHAVE_UNIBILIUM -D_GNU_SOURCE -DINCLUDE_GENERATED_DECLARATIONS -DUTF8PROC_STATIC -I/home/max/workspace/github.com/neovim/neovim.git/.deps/usr/include/luajit-2.1 -I/home/max/workspace/github.com/neovim/neovim.git/.deps/usr/include -I/home/max/workspace/github.com/neovim/neovim.git/build/src/nvim/auto -I/home/max/workspace/github.com/neovim/neovim.git/build/include -I/home/max/workspace/github.com/neovim/neovim.git/build/cmake.config -I/home/max/workspace/github.com/neovim/neovim.git/src -I/usr/include system vimrc file: "$VIM/sysinit.vim" fall-back for $VIM: "/usr/local/share/nvim" Run :checkhealth for more info ``` ### Steps to reproduce On my machine all I need to do is: ```sh $ MANPAGER="nvim --clean --noplugin -" man man ``` and it crashes every 
time ### Expected behavior Neovim should open the man page ### Neovim version (nvim -v) NVIM v0.11.0-dev-714+g220b8aa6f ### Vim (not Nvim) behaves the same? no ### Operating system/version Debian GNU/Linux 12 (bookworm) ### Terminal name/version urxvt,gnome-terminal ### $TERM environment variable xterm-256color ### Installation build from source
documentation,plugin,runtime,system,startup,permissions
medium
Critical
2,506,554,841
pytorch
DISABLED test_cat_slice_cat_cuda_cuda_wrapper (__main__.TestCudaWrapper)
Platforms: linux This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cat_slice_cat_cuda_cuda_wrapper&suite=TestCudaWrapper&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29689879614). Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 10 failures and 8 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_cat_slice_cat_cuda_cuda_wrapper` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. <details><summary>Sample error message</summary> ``` Traceback (most recent call last): File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/__init__.py", line 2194, in __eq__ and self.config == other.config AttributeError: '_TorchCompileInductorWrapper' object has no attribute 'config' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 11381, in new_test return value(self) File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner return func(*args, **kwds) File "/var/lib/jenkins/workspace/test/inductor/test_cuda_cpp_wrapper.py", line 113, in fn _, code = test_torchinductor.run_and_get_cpp_code( File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/utils.py", line 1936, in run_and_get_cpp_code result = fn(*args, **kwargs) File "/var/lib/jenkins/workspace/test/inductor/test_pattern_matcher.py", line 753, in test_cat_slice_cat_cuda actual = 
torch.compile(fn)(*args) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn return fn(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1189, in __call__ is_skipfile = trace_rules.check(frame.f_code) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/trace_rules.py", line 3488, in check return check_verbose(obj, is_inlined_call).skipped File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/trace_rules.py", line 3454, in check_verbose if isinstance( File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/base.py", line 104, in __instancecheck__ if type.__instancecheck__( SystemError: <method '__instancecheck__' of 'type' objects> returned a result with an exception set To execute this test, run the following from the base repo dir: python test/inductor/test_cuda_cpp_wrapper.py TestCudaWrapper.test_cat_slice_cat_cuda_cuda_wrapper This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` </details> Test file path: `inductor/test_cuda_cpp_wrapper.py` cc @clee2000 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire
triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor
low
Critical
2,506,558,164
PowerToys
[New PT] Settings Scheduler to set a timer or set time for changing settings
### Description of the new feature / enhancement A simple utility for scheduling a setting for a set time or duration. The utility would have a few options for different settings then the option to change them based on a timer or based on a specific schedule. https://github.com/user-attachments/assets/ccc800af-1734-4e23-a211-d06887182b8d ![image](https://github.com/user-attachments/assets/106fe709-9517-4a0a-b77e-27bc6d3505d5) ### Scenario when this would be used? - Set the theme to dark mode every day at 7pm and light mode at 7am - Turn off wi-fi for the rest of the day - Set laptop speaker volume to zero every morning ### Supporting information - Similar to Task Scheduler but more friendly - More focused and easy to jump in and out than Power Automate - Could evolve to include more settings - Triggers could make more complex scenarios like when connecting to specific Wi-Fi networks, or locations like have a - "work mode" trigger which sets a few settings when arriving to work, or home - This could be an Awake v2
Idea-Enhancement,Idea-New PowerToy
low
Major
2,506,559,311
electron
[Bug]: WebGPU does not support the `shader-f16` feature
### Preflight Checklist - [X] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project. - [X] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to. - [X] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success. ### Electron Version 32.0.1,30.1.2 ### What operating system(s) are you using? Windows ### Operating System Version windows 10 19045.4717 ### What arch are you using? x64 ### Last Known Working Electron version _No response_ ### Expected Behavior The 'shader-f16' feature of WebGPU has been officially supported since [Chrome 120](https://developer.chrome.com/blog/new-in-webgpu-120). I tested Chrome and Chromium; both support this feature. But when I tested Electron, I found that it doesn't have this feature. Even 32.0.1 doesn't have it. [test link](https://webgpureport.org/) ![image](https://github.com/user-attachments/assets/9f01d6e9-43db-436e-9193-0a91c4834f6a) ### Actual Behavior ![image](https://github.com/user-attachments/assets/4a20d9bb-f8fd-460b-951f-c187164e934a) ### Testcase Gist URL https://gist.github.com/951547aa3c490a91ea30d1f773146fa4 ### Additional Information If this is a bug, can it be fixed in version 30 (Chrome 124)?
platform/windows,bug :beetle:,has-repro-gist
low
Critical
2,506,575,909
ui
[bug]: tailwindcss-animate conflicts with tailwindcss arbitrary values for duration-{value} because the utility collides with the built-in transition duration utility
### Describe the bug After adding tailwindcss-animate, I can't use tailwindcss arbitrary values for duration-{value}, because of tailwindcss-animate's issue: https://github.com/jamiebuilds/tailwindcss-animate/pull/46#issue-1909923734 ### Affected component/components other ### How to reproduce 1. install shadcn-ui following the documentation 2. enable the plugins: [tailwindcssAnimate] 3. tailwindcss arbitrary values for duration-{value} no longer work ### Codesandbox/StackBlitz link _No response_ ### Logs _No response_ ### System Info ```bash macOS 14.6.1 ``` ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues
bug
low
Critical
2,506,578,822
tensorflow
Calibrator segfaults trying to log the "while" operation
### 1. System information - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 22.04 - TensorFlow installation (pip package or built from source): pip package - TensorFlow library (version, if pip package or github SHA, if built from source): 2.17.0 ### 2. Code [reproducer.zip](https://github.com/user-attachments/files/16894947/reproducer.zip) ### 3. Failure after conversion Segmentation fault (signal 11) during conversion ### 5. (optional) Any other info / logs The "while" operation checks whether an output tensor of the body subgraph is the same as the corresponding input tensor. If so, it deallocates its own output tensor. The check is done at the prepare stage, so the affected tensor is already included in the "loggable_outputs" list by the calibrator. Then the calibrator tries to read the data from the deallocated tensor and segfaults. I've debugged it up to https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tools/optimize/calibration/calibrator.cc#L267 and found that `tensor.data.f == nullptr`. The check in question was introduced between 2.13 and 2.14, so it might be considered a regression: https://github.com/tensorflow/tensorflow/commit/7d49fd431ee5cebbb76eda88bc17e48921e10c85
awaiting review,stat:awaiting tensorflower,type:bug,comp:lite,TFLiteConverter,2.17
medium
Critical
2,506,616,394
create-react-app
Create-react-app should create a jsconfig.json file by default
### Is your proposal related to a problem? When creating a JS project, IDEs like VS Code won't provide import suggestions unless the checkJs compiler option is enabled. However, enabling this option causes VS Code to raise a number of issues with JSX in React JS files. Version information: - VS Code Version: 1.92.2 (Universal) - Commit: fee1edb8d6d72a0ddff41e5f71a671c23ed924b9 - Date: 2024-08-14T17:29:30.058Z (3 wks ago) - Electron: 30.1.2 - ElectronBuildId: 9870757 - Chromium: 124.0.6367.243 - Node.js: 20.14.0 - V8: 12.4.254.20-electron.0 - OS: Darwin arm64 23.5.0 - React Scripts: 5.0.1 ### Describe the solution you'd like Creating a jsconfig.json file within the project, with the following content, enables import suggestions in VS Code and avoids VS Code flagging errors due to JSX. ``` { "compilerOptions": { "checkJs": true, "jsx": "react-jsx" } } ``` ### Describe alternatives you've considered VS Code has a _JS/TS › Implicit Project Config: Check JS_ setting, which is equivalent to checkJs, but this does not tell VS Code how to handle JSX in JS files, so a jsconfig.json file is still needed. ### Additional context Import suggestions are an important feature for modern IDEs, and having this work out of the box for a popular IDE like VS Code would improve the React developer experience, especially for new developers. Example Stack Overflow question asking about this: https://stackoverflow.com/questions/77490192/why-am-i-getting-no-import-suggestions-in-my-react-project-in-vs-code/78946476#78946476 My understanding is TypeScript projects do have a tsconfig.json file created. It would be good to have parity for JS.
issue: proposal,needs triage
low
Critical
2,506,687,739
pytorch
[DTensor] loss_parallel not worked with shifted logits and labels
### 🐛 Describe the bug Hi, I just found a bug in `loss_parallel` with shifted `logits`. Basically, I followed the tutorial in the [link](https://pytorch.org/tutorials/intermediate/TP_tutorial.html#apply-loss-parallel). I made changes by slicing `logits` and `labels` so we are predicting the next label. The code is as follows: ``` class ParallelStyleTest(DTensorTestBase): @property def world_size(self) -> int: return 4 @with_comms def test_slice_backward(self): device_mesh = self.build_device_mesh() logits = torch.rand((8, 15), device='cuda').requires_grad_() labels = torch.randint(0, 15, (8,), dtype=torch.long, device='cuda') loss = CrossEntropyLoss()(logits[:-1, :], labels[1:]) print(f'{loss = }', flush=True) loss.backward() d_logits = DTensor.from_local( logits, device_mesh, (Replicate(),) ).redistribute(device_mesh, (Shard(-1),)) d_labels = DTensor.from_local(labels, device_mesh, (Replicate(),)) with loss_parallel(): loss = CrossEntropyLoss()(d_logits[:-1, :], d_labels[1:]) print(f'{loss = }', flush=True) loss.backward() if __name__ == '__main__': run_tests() ``` Then an error is thrown. I found the error happens when `slice_backward` is called. But slice seems to work well without `loss_parallel`. 
``` Traceback (most recent call last): File "/.../torch24/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 657, in run_test getattr(self, test_name)() File "/.../torch24/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 539, in wrapper fn() File "/.../torch24/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2744, in wrapper method(*args, **kwargs) File "/.../torch24/lib/python3.10/site-packages/torch/testing/_internal/distributed/_tensor/common_dtensor.py", line 369, in wrapper func(self, *args, **kwargs) # type: ignore[misc] File "/mnt/workspace/jiqi/Code/mtp_pai/mtp/tests/auto_parallel/test_parallel_style.py", line 42, in test_slice_backward loss.backward() File "/.../torch24/lib/python3.10/site-packages/torch/_tensor.py", line 521, in backward torch.autograd.backward( File "/.../torch24/lib/python3.10/site-packages/torch/autograd/__init__.py", line 289, in backward _engine_run_backward( File "/.../torch24/lib/python3.10/site-packages/torch/autograd/graph.py", line 768, in _engine_run_backward return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass File "/mnt/workspace/jiqi/Code/mtp_pai/mtp/tests/auto_parallel/zoos/utils.py", line 14, in __torch_dispatch__ return DTensor._op_dispatcher.dispatch( File "/.../torch24/lib/python3.10/site-packages/torch/distributed/_tensor/_dispatch.py", line 205, in dispatch local_results = op_call(*local_tensor_args, **op_info.local_kwargs) File "/.../torch24/lib/python3.10/site-packages/torch/_ops.py", line 667, in __call__ return self_._op(*args, **kwargs) RuntimeError: The size of tensor a (15) must match the size of tensor b (3) at non-singleton dimension 1 ``` ### Versions PyTorch 2.4.0 cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
oncall: distributed
low
Critical
2,506,712,288
ollama
Intel GPU - model > 4b nonsense?
### What is the issue? qwen4b works fine, all other models larger than 4b are gibberish ``` time=2024-09-05T11:35:49.569+08:00 level=INFO source=download.go:175 msg="downloading 8eeb52dfb3bb in 16 291 MB part(s)" time=2024-09-05T11:37:19.112+08:00 level=INFO source=download.go:370 msg="8eeb52dfb3bb part 0 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection." time=2024-09-05T11:37:21.112+08:00 level=INFO source=download.go:370 msg="8eeb52dfb3bb part 4 stalled; retrying. If this persists, press ctrl-c to exit, then 'ollama pull' to find a faster connection." [GIN] 2024/09/05 - 11:41:40 | 200 | 5m55s | 10.0.0.18 | POST "/api/pull" [GIN] 2024/09/05 - 11:51:04 | 200 | 1.182ms | 10.0.0.18 | GET "/api/tags" [GIN] 2024/09/05 - 11:51:05 | 200 | 0s | 10.0.0.18 | GET "/api/version" [GIN] 2024/09/05 - 11:51:24 | 200 | 510.7µs | 10.0.0.18 | GET "/api/version" [GIN] 2024/09/05 - 11:51:33 | 200 | 0s | 10.0.0.18 | GET "/api/version" time=2024-09-05T11:51:51.177+08:00 level=INFO source=download.go:175 msg="downloading 8eeb52dfb3bb in 16 291 MB part(s)" time=2024-09-05T11:51:58.238+08:00 level=INFO source=download.go:175 msg="downloading 73b313b5552d in 1 1.4 KB part(s)" time=2024-09-05T11:52:01.269+08:00 level=INFO source=download.go:175 msg="downloading 0ba8f0e314b4 in 1 12 KB part(s)" time=2024-09-05T11:52:04.339+08:00 level=INFO source=download.go:175 msg="downloading 56bb8bd477a5 in 1 96 B part(s)" time=2024-09-05T11:52:07.492+08:00 level=INFO source=download.go:175 msg="downloading 1a4c3c319823 in 1 485 B part(s)" [GIN] 2024/09/05 - 11:52:14 | 200 | 28.5001976s | 10.0.0.18 | POST "/api/pull" [GIN] 2024/09/05 - 11:52:14 | 200 | 1.0817ms | 10.0.0.18 | GET "/api/tags" [GIN] 2024/09/05 - 11:52:18 | 200 | 0s | 10.0.0.18 | GET "/api/version" time=2024-09-05T11:52:23.514+08:00 level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[20.3 GiB]" 
memory.required.full="4.6 GiB" memory.required.partial="0 B" memory.required.kv="256.0 MiB" memory.required.allocations="[4.6 GiB]" memory.weights.total="3.9 GiB" memory.weights.repeating="3.5 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB" time=2024-09-05T11:52:23.520+08:00 level=INFO source=server.go:395 msg="starting llama server" cmd="C:\\Users\\12742\\Desktop\\llama-cpp\\dist\\windows-amd64\\lib\\ollama\\runners\\cpu_avx2\\ollama_llama_server.exe --model C:\\Users\\12742\\.ollama\\models\\blobs\\sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 999 --no-mmap --parallel 1 --port 55176" time=2024-09-05T11:52:23.546+08:00 level=INFO source=sched.go:450 msg="loaded runners" count=1 time=2024-09-05T11:52:23.546+08:00 level=INFO source=server.go:595 msg="waiting for llama runner to start responding" time=2024-09-05T11:52:23.547+08:00 level=INFO source=server.go:629 msg="waiting for server to become available" status="llm server error" INFO [wmain] build info | build=1 commit="c455d1d" tid="6776" timestamp=1725508343 INFO [wmain] system info | n_threads=14 n_threads_batch=-1 system_info="AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="6776" timestamp=1725508343 total_threads=20 INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="19" port="55176" tid="6776" timestamp=1725508343 llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from C:\Users\12742\.ollama\models\blobs\sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. 
Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = Meta Llama 3.1 8B Instruct llama_model_loader: - kv 3: general.finetune str = Instruct llama_model_loader: - kv 4: general.basename str = Meta-Llama-3.1 llama_model_loader: - kv 5: general.size_label str = 8B llama_model_loader: - kv 6: general.license str = llama3.1 llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam... llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ... llama_model_loader: - kv 9: llama.block_count u32 = 32 llama_model_loader: - kv 10: llama.context_length u32 = 131072 llama_model_loader: - kv 11: llama.embedding_length u32 = 4096 llama_model_loader: - kv 12: llama.feed_forward_length u32 = 14336 llama_model_loader: - kv 13: llama.attention.head_count u32 = 32 llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 17: general.file_type u32 = 2 llama_model_loader: - kv 18: llama.vocab_size u32 = 128256 llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 21: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... 
llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 128009 llama_model_loader: - kv 27: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ... llama_model_loader: - kv 28: general.quantization_version u32 = 2 llama_model_loader: - type f32: 66 tensors llama_model_loader: - type q4_0: 225 tensors llama_model_loader: - type q6_K: 1 tensors time=2024-09-05T11:52:23.809+08:00 level=INFO source=server.go:629 msg="waiting for server to become available" status="llm server loading model" llm_load_vocab: special tokens cache size = 256 llm_load_vocab: token to piece cache size = 0.7999 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 128256 llm_load_print_meta: n_merges = 280147 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 131072 llm_load_print_meta: n_embd = 4096 llm_load_print_meta: n_layer = 32 llm_load_print_meta: n_head = 32 llm_load_print_meta: n_head_kv = 8 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 4 llm_load_print_meta: n_embd_k_gqa = 1024 llm_load_print_meta: n_embd_v_gqa = 1024 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 14336 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 500000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_ctx_orig_yarn = 131072 llm_load_print_meta: 
rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: ssm_dt_b_c_rms = 0 llm_load_print_meta: model type = 8B llm_load_print_meta: model ftype = Q4_0 llm_load_print_meta: model params = 8.03 B llm_load_print_meta: model size = 4.33 GiB (4.64 BPW) llm_load_print_meta: general.name = Meta Llama 3.1 8B Instruct llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>' llm_load_print_meta: EOS token = 128009 '<|eot_id|>' llm_load_print_meta: LF token = 128 'Ä' llm_load_print_meta: EOT token = 128009 '<|eot_id|>' llm_load_print_meta: max token length = 256 ggml_sycl_init: GGML_SYCL_FORCE_MMQ: no ggml_sycl_init: SYCL_USE_XMX: yes ggml_sycl_init: found 1 SYCL devices: llm_load_tensors: ggml ctx size = 0.27 MiB llm_load_tensors: offloading 32 repeating layers to GPU llm_load_tensors: offloading non-repeating layers to GPU llm_load_tensors: offloaded 33/33 layers to GPU llm_load_tensors: SYCL0 buffer size = 4156.00 MiB llm_load_tensors: SYCL_Host buffer size = 281.81 MiB llama_new_context_with_model: n_ctx = 2048 llama_new_context_with_model: n_batch = 512 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: flash_attn = 0 llama_new_context_with_model: freq_base = 500000.0 llama_new_context_with_model: freq_scale = 1 [SYCL] call ggml_check_sycl ggml_check_sycl: GGML_SYCL_DEBUG: 0 ggml_check_sycl: GGML_SYCL_F16: no found 1 SYCL devices: | | | | |Max | |Max |Global | | | | | | |compute|Max work|sub |mem | | |ID| Device Type| Name|Version|units |group |group|size | Driver version| |--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------| | 0| [level_zero:gpu:0]| Intel Arc A730M Graphics| 1.5| 384| 1024| 32| 12514M| 1.3.30398| llama_kv_cache_init: SYCL0 KV buffer size = 256.00 MiB llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 
128.00 MiB, V (f16): 128.00 MiB llama_new_context_with_model: SYCL_Host output buffer size = 0.50 MiB llama_new_context_with_model: SYCL0 compute buffer size = 258.50 MiB llama_new_context_with_model: SYCL_Host compute buffer size = 12.01 MiB llama_new_context_with_model: graph nodes = 1062 llama_new_context_with_model: graph splits = 2 INFO [wmain] model loaded | tid="6776" timestamp=1725508352 time=2024-09-05T11:52:32.341+08:00 level=INFO source=server.go:634 msg="llama runner started in 8.80 seconds" ``` ![image](https://github.com/user-attachments/assets/616e39ab-9f78-48f3-8f86-fbc65a7b87d6) ### OS Linux, Windows ### GPU Intel ### CPU Intel ### Ollama version 0.3.6-ipexllm-20240905
bug,intel
low
Critical
2,506,732,163
flutter
[iOS] print function not print the logs to the terminal when run with `flutter run --release` command with physical device
### Steps to reproduce Steps: 1. Create an empty project: `flutter create test_log` 2. Add `print` to `_incrementCounter` ```dart class _MyHomePageState extends State<MyHomePage> { ... void _incrementCounter() { setState(() { // This call to setState tells the Flutter framework that something has // changed in this State, which causes it to rerun the build method below // so that the display can reflect the updated values. If we changed // _counter without calling setState(), then the build method would not be // called again, and so nothing would appear to happen. _counter++; print('test log _counter: $_counter'); }); } ... } ``` 3. Run in terminal: `flutter run --release` 4. Click the "+" button 5. No logs with `test log _counter: xx` printed in the terminal <img width="813" alt="image" src="https://github.com/user-attachments/assets/ffa4c02d-536d-4fd7-ae47-dfd2f1249bf9"> But if I run the APP with the Xcode, change the schema to `Release` <img width="922" alt="image" src="https://github.com/user-attachments/assets/e411f799-7666-4829-a455-20e8b8d3c455"> The logs can be printed in the Xcode console <img width="665" alt="image" src="https://github.com/user-attachments/assets/6a48eecc-988a-4f7f-826b-5f2c0e4fdc73"> And there's no issue on Android. ### Expected results The logs can be printed in the terminal. ### Actual results The logs can not be printed in the terminal. 
### Code sample <details open><summary>Code sample</summary> ```dart [Paste your code here] ``` </details> ### Screenshots or Video <details open> <summary>Screenshots / Video demonstration</summary> [Upload media here] </details> ### Logs <details open><summary>Logs</summary> ```console [Paste your logs here] ``` </details> ### Flutter Doctor output My device: iPhone 8, iOS 14.8.1 <details open><summary>Doctor output</summary> ```console ➜ test_log_print_ios flutter doctor -v [✓] Flutter (Channel stable, 3.22.2, on macOS 14.0 23A344 darwin-arm64, locale zh-Hans-CN) • Flutter version 3.22.2 on channel stable at /xxx/Library/flutter • Upstream repository https://github.com/flutter/flutter.git • Framework revision 761747bfc5 (3 months ago), 2024-06-05 22:15:13 +0200 • Engine revision edd8546116 • Dart version 3.4.3 • DevTools version 2.34.3 [✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0) • Android SDK at /xxx/Library/Android/sdk • Platform android-34, build-tools 34.0.0 • ANDROID_HOME = /xxx/Library/Android/sdk • Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java • Java version OpenJDK Runtime Environment (build 17.0.7+0-17.0.7b1000.6-10550314) • All Android licenses accepted. 
[✓] Xcode - develop for iOS and macOS (Xcode 15.2) • Xcode at /Applications/Xcode.app/Contents/Developer • Build 15C500b • CocoaPods version 1.14.3 [✓] Chrome - develop for the web • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome [✓] Android Studio (version 2023.1) • Android Studio at /Applications/Android Studio.app/Contents • Flutter plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/6351-dart • Java version OpenJDK Runtime Environment (build 17.0.7+0-17.0.7b1000.6-10550314) [✓] VS Code (version 1.85.1) • VS Code at /Applications/Visual Studio Code.app/Contents • Flutter extension version 3.94.0 [✓] Connected device (4 available) • .... [✓] Network resources • All expected network resources are available. ``` </details>
platform-ios,tool,a: release,has reproducible steps,P2,team-tool,triaged-tool,found in release: 3.24,found in release: 3.25
low
Major
2,506,820,488
node
Unexpected behavior in process.env handling null bytes
### Version v22.8.0 ### Platform ```text Linux fw13 6.10.7-200.fc40.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Aug 30 00:08:59 UTC 2024 x86_64 GNU/Linux ``` ### Subsystem Process ### What steps will reproduce the bug? Case 1: Reading environment variables using bracket notation property accessors where the property name contains a null byte. ``` console.log(process.env['SHELL\0TERM']); ``` Case 2: Setting an environment variable where the value of the environment variable contains a null byte. ``` process.env.SHELL = "HELLO\0GOODBYE"; ``` ### How often does it reproduce? Is there a required condition? There are no special required conditions ### What is the expected behavior? Why is that the expected behavior? In case 1, when referencing an environment variable using the property accessor, the expected output should be undefined, as environment variable names and values cannot contain null bytes. In case 2, it's more ambiguous, but based on the behavior of other languages such as Python, throwing an error would be a sensible behavior. It would also make sense to simply escape the null byte. ### What do you see instead? In case 1, the value of the environment variable named SHELL is printed. In case 2, the value of the environment variable SHELL becomes "HELLO" ### Additional information _No response_
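A userland guard can approximate the expected behavior until core is changed. The sketch below wraps a plain object (standing in for `process.env`) in a Proxy that treats null bytes the way the report suggests; the `makeSafeEnv` wrapper and its exact behavior on invalid input are illustrative assumptions, not Node's actual semantics:

```javascript
// Illustrative sketch: a Proxy over an env-like object where reads with a
// '\0' in the name yield undefined (case 1) and writes with a '\0' in the
// name or value throw (case 2). Not Node's real process.env behavior.
function makeSafeEnv(env) {
  return new Proxy(env, {
    get(target, prop) {
      if (typeof prop === 'string' && prop.includes('\0')) return undefined;
      return target[prop];
    },
    set(target, prop, value) {
      if (String(prop).includes('\0') || String(value).includes('\0')) {
        throw new TypeError('env names and values must not contain null bytes');
      }
      target[prop] = String(value);
      return true;
    },
  });
}

const safeEnv = makeSafeEnv({ SHELL: '/bin/bash' });
```

With this wrapper, `safeEnv['SHELL\0TERM']` is `undefined` instead of leaking the value of `SHELL`, and `safeEnv.SHELL = 'HELLO\0GOODBYE'` throws instead of silently truncating.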
process
low
Critical
2,506,867,891
deno
Add support for multiple import maps
Import maps currently have to load before any ES module and there can only be a single import map per document. That makes them fragile and potentially slow to use in real-life scenarios: Any module that loads before them breaks the entire app, and in apps with many modules they become a large blocking resource, as the entire map for all possible modules needs to load first. There's an HTML PR [proposal](https://github.com/whatwg/html/pull/10528) to enable multiple import maps per document, by merging them in a consistent and deterministic way. I'd appreciate y'all's opinions.
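The core of the merge the proposal describes is "later maps win, key by key". A minimal sketch of that per-key merge (a simplification — it ignores the proposal's rule that specifiers already resolved keep their old mapping) might look like:

```javascript
// Simplified sketch: merge several import maps so that later maps override
// earlier ones key-by-key, within "imports" and within each "scopes" entry.
function mergeImportMaps(maps) {
  const merged = { imports: {}, scopes: {} };
  for (const map of maps) {
    Object.assign(merged.imports, map.imports ?? {});
    for (const [scope, entries] of Object.entries(map.scopes ?? {})) {
      merged.scopes[scope] = { ...(merged.scopes[scope] ?? {}), ...entries };
    }
  }
  return merged;
}

const merged = mergeImportMaps([
  { imports: { lodash: '/v1/lodash.js' } },
  { imports: { lodash: '/v2/lodash.js', react: '/react.js' } },
]);
```

Here the second map's `lodash` entry wins, and keys unique to either map survive — deterministic regardless of how the maps are split across the document.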
question
low
Major
2,506,869,135
node
Add support for multiple import maps
### What is the problem this feature will solve? Import maps currently have to load before any ES module and there can only be a single import map per document. That makes them fragile and potentially slow to use in real-life scenarios: Any module that loads before them breaks the entire app, and in apps with many modules they become a large blocking resource, as the entire map for all possible modules needs to load first. ### What is the feature you are proposing to solve the problem? There's an HTML PR [proposal](https://github.com/whatwg/html/pull/10528) to enable multiple import maps per document, by merging them in a consistent and deterministic way. I'd appreciate y'all's opinions. ### What alternatives have you considered? _No response_
feature request,loaders,web-standards
low
Major
2,506,882,883
PowerToys
File Explorer Extension to run all selected files at once
### Description of the new feature / enhancement Like PowerRename or File Locksmith, this would be a File Explorer addon that shows up in the context menu. If you select a bunch of (say) .exe or .png files and then open them, currently only the last selected one gets executed. With this new tool, all of the selected files could instead be run at once, along with an option to do so with administrator privileges. ### Scenario when this would be used? When you need to run 10+ exe files at once, all requiring you to separately approve administrator privileges. ### Supporting information _No response_
Needs-Triage
low
Minor
2,506,894,341
ui
[bug]: CLI doesn't respect tsconfig -> references, following installation docs for vite+react is wrong; npx shadcn@latest add doesn't respect --path
### Describe the bug There are two issues here: 1. Following the [installation instructions for vite](https://ui.shadcn.com/docs/installation/vite) step-by-step, trying to run `npx shadcn@latest init` fails complaining about being unable to find import aliases. The root issue here appears to be that the create vite script (react+typescript template) creates three tsconfig files: ```json // tsconfig.json { "files": [], "references": [ { "path": "./tsconfig.app.json" }, { "path": "./tsconfig.node.json" } ] } ``` ```json // tsconfig.app.json { "compilerOptions": { "target": "ES2020", "useDefineForClassFields": true, "lib": [ "ES2020", "DOM", "DOM.Iterable" ], "module": "ESNext", "skipLibCheck": true, /* Bundler mode */ "moduleResolution": "bundler", "allowImportingTsExtensions": true, "isolatedModules": true, "moduleDetection": "force", "noEmit": true, "jsx": "react-jsx", /* Linting */ "strict": true, "noUnusedLocals": true, "noUnusedParameters": true, "noFallthroughCasesInSwitch": true, "baseUrl": ".", "paths": { "@/*": [ "./src/*" ] } }, "include": [ "src" ] } ``` ```json // tsconfig.node.json { "compilerOptions": { "target": "ES2022", "lib": [ "ES2023" ], "module": "ESNext", "skipLibCheck": true, /* Bundler mode */ "moduleResolution": "bundler", "allowImportingTsExtensions": true, "isolatedModules": true, "moduleDetection": "force", "noEmit": true, /* Linting */ "strict": true, "noUnusedLocals": true, "noUnusedParameters": true, "noFallthroughCasesInSwitch": true, "baseUrl": ".", "paths": { "@/*": [ "./src/*" ] } }, "include": [ "vite.config.ts" ] } ``` and the shadcn init script is not respecting the `references` in the base `tsconfig.json`. You can work around this by adding alias(es) to the base `tsconfig.json`, but this should not be required; not only is it redundant, it breaks LSP/autocomplete in neovim, and vite itself. 2. The `--path` option when trying to add a component (e.g. 
`npx shadcn@latest add button --path src/components/shadcn/ui`) seems to be ignored regardless of the presence of aliases in `tsconfig.json`. I found this issue: https://github.com/shadcn-ui/ui/issues/1221 which describes the same problem, but it was closed due to inactivity. ### Affected component/components CLI, all components ### How to reproduce 1. Follow the [vite install instructions](https://ui.shadcn.com/docs/installation/vite) 2. During creation of the vite project, choose typescript/react 3. Set up tailwindcss 4. Add alias(es) to `tsconfig.app.json` and `tsconfig.node.json` 5. Try to set up shadcn: `npx shadcn@latest init`; error: `No import alias found in your tsconfig.json file.` Workaround: add alias(es) to `tsconfig.json` ### Codesandbox/StackBlitz link I didn't see a way to create a sandbox app with the vite+react+typescript template. ### Logs _No response_ ### System Info ```bash Nothing special. uname -a Linux jezrien 6.10.5-zen1 #1-NixOS ZEN SMP PREEMPT_DYNAMIC Tue Jan 1 00:00:00 UTC 1980 x86_64 GNU/Linux ``` ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues
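The fix for the first problem amounts to following `references` when the root tsconfig carries no `paths` of its own. A sketch of that lookup (not the CLI's actual code — `findPaths` and the in-memory `configs` map are illustrative; a real implementation would read and JSON-parse each referenced file from disk):

```javascript
// Sketch of alias discovery that follows tsconfig "references" when the
// root config defines no "paths". `configs` stands in for files on disk.
function findPaths(configs, name = 'tsconfig.json') {
  const cfg = configs[name];
  if (!cfg) return undefined;
  const paths = cfg.compilerOptions?.paths;
  if (paths) return paths;
  for (const ref of cfg.references ?? []) {
    const found = findPaths(configs, ref.path.replace(/^\.\//, ''));
    if (found) return found;
  }
  return undefined;
}

// The layout create-vite produces: an empty root config referencing two
// project configs that hold the actual "@/*" alias.
const configs = {
  'tsconfig.json': { files: [], references: [{ path: './tsconfig.app.json' }] },
  'tsconfig.app.json': { compilerOptions: { paths: { '@/*': ['./src/*'] } } },
};
const aliases = findPaths(configs);
```

With that traversal, the create-vite layout resolves `@/*` without duplicating aliases into the root `tsconfig.json`.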
bug
low
Critical
2,506,902,716
ui
[bug]: New CLI vite installation instructions lead to No Tailwind error
### Describe the bug Follow exact instructions for CLI vite and receive error. ### Affected component/components CLI ### How to reproduce https://ui.shadcn.com/docs/installation/vite ### Codesandbox/StackBlitz link _No response_ ### Logs _No response_ ### System Info ```bash ✔ Preflight checks. ✔ Verifying framework. Found Vite. ✖ Validating Tailwind CSS. ✔ Validating import alias. No Tailwind CSS configuration found at **C://.../project** It is likely you do not have Tailwind CSS installed or have an invalid configuration. Install Tailwind CSS then try again. Visit https://tailwindcss.com/docs/guides/vite to get started. ``` ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues
bug
low
Critical
2,506,918,863
godot
Display Wireframe doesn't work in Android editor `Mobile` renderer
### Tested versions - Reproducible in 4.2.2.stable, 4.3.stable `Mobile` renderer - Not reproducible in 4.3.stable `compatibility` renderer ### System information Android 11, Redmi 10 2022 ### Issue description If I enable Display Wireframe with the `Mobile` renderer, the mesh is still drawn normally instead of as a wireframe. ![Screenshot_2024-09-05-12-37-16-547_org godotengine editor v4](https://github.com/user-attachments/assets/d314f56d-5fcd-414c-9614-ae0ffd2f2902) ### Steps to reproduce Open a tscn containing a mesh with the `Mobile` renderer, then set the view to Display Wireframe. ### Minimal reproduction project (MRP) [wireframe.zip](https://github.com/user-attachments/files/16885139/wireframe.zip)
bug,platform:android,topic:rendering,topic:thirdparty,topic:3d
low
Minor
2,506,972,368
PowerToys
Chrome profiles not loading correctly
### Microsoft PowerToys version 0.84.0 ### Installation method PowerToys auto-update ### Running as admin No ### Area(s) with issue? Workspaces ### Steps to reproduce Open Google Chrome profile.. like chrome.exe --profile-directory="Profile 3" Open a different Google Chrome profile.. like chrome.exe --profile-directory="Profile 4" Save the Workspace. Edit the Workspace. First Chrome profile has the correct title of the profile. Second Chrome profile has the title of the first profile with "(2)" appended. There's no means to edit the instance names. Launch the workspace. Both profiles open the default Chrome profile selection page. Neither opens the correct profile. Edit the workspace. Change first Chrome instance to add CLI command --profile-directory="Profile 3". Change second Chrome instance to add CLI command --profile-directory="Profile 4". Save and launch the workspace. Both profiles open the default Chrome profile selection page. Neither opens the correct profile. ### ✔️ Expected Behavior I was expecting Chrome profiles to be named correctly in workspace, and instances to correctly load their respective profiles. ### ❌ Actual Behavior Neither Chrome instance had the correct title. Both Chrome instances opened the profile selection page. ### Other Software Chrome 128.0.6613.119, Windows 11 23H2 fully updated.
Issue-Bug,Needs-Triage,Product-Workspaces
low
Minor
2,507,005,210
kubernetes
scheduler-perf: add a test case to confirm QHint's impact on the scheduling throughput
The current test cases of scheduler_perf are basically simple: create Pods from a specific template (i.e., with specific scheduling constraints, etc.) and measure the metrics. But a real cluster often has various unschedulable Pods, each with different unschedulable plugins. By adding such a scenario to scheduler-perf, we can better observe QHint's impact on the scheduling throughput, since QHints accurately select Pods to move to activeQ/backoffQ based on each Pod's unschedulable reason. /sig scheduling /kind feature
sig/scheduling,kind/feature,needs-triage
medium
Major
2,507,032,158
tauri
[feat] Channel send should be fallible if webview is not listening
### Describe the problem The new Channel api is great, but I'd really like `send` to fail with error if the webview is not subscribed to the channel, or perhaps behave like tokio channels where the `send` is awaited. I have a frontend app that might discard a channel at some point, and I'd like it to be built in to the channel implementation to know that there is no longer something on the other end of the call. ### Describe the solution you'd like `Channel<T>::send() -> Result(())` should also include in the errors an error type to indicate the message passed was not handled by the webapp. It currently only sends `tauri::error::Json` for when the payload fails to serialize. ### Alternatives considered Implementing some kind of ack return from the frontend via invoke and wrapping it all up in some custom implementation. It might actually be easier to replace the channel implementation entirely and use some binding to a Webview passed to the comand? ### Additional context _No response_
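The webview half of the "ack via invoke" alternative could be sketched as follows. Everything here is hypothetical — `withAck` and the `sendAck` callback (which would stand in for an `invoke('ack', ...)` call back to the Rust side) are not part of Tauri's API:

```javascript
// Sketch of the frontend side of an acked channel: every delivered message
// is acknowledged through a callback, and messages arriving after the
// channel is discarded are negatively acknowledged so the backend can
// surface a send error.
function withAck(handler, sendAck) {
  let closed = false;
  return {
    onmessage(msg) {
      if (closed) {
        sendAck({ id: msg.id, ok: false }); // nobody is listening anymore
        return;
      }
      handler(msg.data);
      sendAck({ id: msg.id, ok: true });
    },
    close() { closed = true; },
  };
}

const acks = [];
const received = [];
const chan = withAck((d) => received.push(d), (a) => acks.push(a));
chan.onmessage({ id: 1, data: 'hello' }); // delivered, positive ack
chan.close();
chan.onmessage({ id: 2, data: 'late' }); // discarded, negative ack
```

On the Rust side, `send` could then await the matching ack and return an error on a negative or missing one — which is roughly the failure mode this request asks to be built in.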
type: feature request
low
Critical
2,507,048,982
PowerToys
Workspaces: Edge Profiles and Workspaces
### Description of the new feature / enhancement Hi, when using Workspaces with Microsoft Edge, it only detects the root installation of Edge itself. It would be nice if, in a future release, it could also configure/detect Profiles and Workspaces from MS Edge (or Chrome, for that matter). Cheers. ### Scenario when this would be used? Configuring and launching workspaces. ### Supporting information _No response_
Needs-Triage,Product-Workspaces
low
Minor
2,507,064,853
ollama
The speed of using embedded models is much slower compared to xinference
I use the BGE-M3 model and send the same request to both: xinference takes about 10 seconds and ollama about 200 seconds. I'm sure both use the GPU. I found that xinference allocates more video memory, while ollama's video memory usage remains basically unchanged. Perhaps this is the reason for the speed difference?
feature request,performance
low
Major
2,507,070,422
pytorch
ModuleNotFoundError: No module named 'torch.fx.experimental.shape_inference'
### 🐛 Describe the bug Reproducing example: ``` >>> import torch >>> from torch.fx.experimental.shape_inference.infer_shape import infer_shape Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'torch.fx.experimental.shape_inference' ``` ### Versions ``` Versions of relevant libraries: [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.4 [pip3] onnx==1.14.1 [pip3] onnxruntime==1.16.3 [pip3] optree==0.12.1 [pip3] torch==2.5.0.dev20240902+cpu [pip3] torch-tb-profiler==0.4.0 [pip3] torchaudio==2.5.0.dev20240902+cpu [pip3] torchmetrics==1.4.0.post0 [pip3] torchvision==0.20.0.dev20240902+cpu [conda] Could not collect ``` cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
needs reproduction,triaged,module: fx
low
Critical
2,507,076,227
deno
Bug: `exports` is `undefined` in NestJS
Reported on discord: https://discord.com/channels/684898665143206084/1281161740138319872/1281161740138319872 ```sh $ deno run -A main.js error: Uncaught (in promise) ReferenceError: exports is not defined Object.defineProperty(exports, "__esModule", { value: true }); ^ at file:///home/sheik/Documentos/project-name/dist/main.js:2:23 ``` Version: Deno 1.46.3
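The error is the classic symptom of CommonJS output (Nest's `dist/main.js` begins with a statement that assumes a CJS `exports` object in scope) being evaluated as an ES module, where no such binding exists. The mechanism can be reproduced outside Deno/Nest; the helper below is purely illustrative:

```javascript
// The failing line from the compiled Nest output assumes a CommonJS
// `exports` object. Evaluating it with and without that binding shows
// why it throws ReferenceError when run as an ES module.
const src = 'Object.defineProperty(exports, "__esModule", { value: true });';

let asEsm = null;
try {
  // Simulating ESM: `exports` is not bound anywhere, so the lookup throws.
  new Function(src)();
} catch (e) {
  asEsm = e.name; // 'ReferenceError', matching the report
}

// Simulating CJS: the module wrapper supplies an `exports` object.
const cjsExports = {};
new Function('exports', src)(cjsExports);
```

This suggests the fix direction reported users usually need: have the Nest build emit a `.cjs` file or a `package.json` with `"type": "commonjs"` in `dist/`, so the runtime loads it as CommonJS (a general CJS/ESM observation, not a confirmed Deno fix).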
bug
low
Critical
2,507,134,919
pytorch
lstm quantize issues
### 🐛 Describe the bug

```python
import numpy as np
import torch
import os
import torch  # Might not be necessary

os.environ.setdefault("KMP_DUPLICATE_LIB_OK", "True")
#import pytorch_GRU
#import quantGRUcell
device = torch.device("cpu")

class Net(torch.nn.Module):
    def __init__(self, seq_length):
        super(Net, self).__init__()
        self.hidden_size = 18
        self.input_size = 18
        self.seq_length = seq_length
        self.relu1 = torch.nn.Sigmoid()
        # Need to specify input sizes up front
        # batch_first specifies an input shape of (nBatches, nSeq, nFeatures),
        # otherwise this is (nSeq, nBatch, nFeatures)
        self.lstm = torch.nn.LSTM(input_size = self.input_size, hidden_size = self.hidden_size, batch_first = True)
        self.linear1 = torch.nn.Linear(self.hidden_size, self.hidden_size)
        self.linear2 = torch.nn.Linear(self.hidden_size, self.hidden_size)
        self.dropout = torch.nn.Dropout(0.5)
        #self.squeeze = torch.squeeze
        self.linearOut = torch.nn.Linear(self.hidden_size, 20)
        self.sigmoidOut = torch.nn.Sigmoid()
        self.sqeeze1 = torch.Tensor.squeeze
        self.quant = torch.ao.quantization.QuantStub()
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        print(type(x))
        x = self.quant(x)
        x, (h, c) = self.lstm(x)  #, self.h0)
        # Get last output, x[:,l - 1,:], equivalent to (last) hidden state
        # Squeeze to remove length 1 dim
        x = h.reshape(1, h.shape[-1])
        x = self.dropout(x)
        x = self.linear2(x)
        x = self.relu1(x)
        x = self.linearOut(x)
        x = self.dequant(x)
        # Apply sigmoid either in the loss function, or in eval(...)
        return x

    def evaluate(self, x):
        return self.sigmoidOut(self.forward(x))

model_fp32 = Net(10)
model_fp32.eval()
model_fp32.qconfig = torch.ao.quantization.get_default_qat_qconfig('fbgemm')
model_fp32_prepared = torch.ao.quantization.prepare_qat(model_fp32.train(), inplace=True)
model_fp32_prepared.eval()
model_int8 = torch.ao.quantization.convert(model_fp32_prepared, inplace=True)
input_fp32 = torch.rand(1, 10, 18)
res = model_int8(input_fp32)
print(model_int8)
torch.onnx.export(
    model_int8,               # the PyTorch model
    torch.rand(1, 10, 18),    # random dummy input
    "lstm_quant_fx.onnx",     # output ONNX file path
    export_params=True,       # export the trained parameters
    opset_version=17,
    training=torch.onnx.TrainingMode.EVAL,  # export in inference mode, fixing dropout, BatchNorm, etc.
    input_names=["input_0"],       # alias for the input node of the static graph; inputs are bound by this name at ONNX inference time
    output_names=["output_data"],  # alias for the output node
    # Without dynamic_axes, an input exported with shape [4, 3, 224, 224]
    # must also have shape [4, 3, 224, 224] at inference time.
    # The lines below would make dim 0 of the input dynamic, so batch_size can vary at inference:
    # dynamic_axes={
    #     # a dictionary to specify dynamic axes of input/output
    #     # each key must also be provided in input_names or output_names
    #     "input_0": {0: [2,3]},
    #     "output_0": {0: [2,3]}}
)
```

### Versions

Traceback (most recent call last): File "d:/笔记/量化/quant_lstm/torch_qat.py", line 46, in <module> torch.onnx.export( File "C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\onnx\utils.py", line 551, in export _export( File "C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\onnx\utils.py", line 1648, in _export graph, params_dict, torch_out = _model_to_graph( File "C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\onnx\utils.py", line 1174, in _model_to_graph graph = _optimize_graph( File "C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\onnx\utils.py", line 714, in _optimize_graph graph = _C._jit_pass_onnx(graph, operator_export_type) File "C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\onnx\utils.py", line 1997, in _run_symbolic_function return 
symbolic_fn(graph_context, *inputs, **attrs) File "C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\onnx\symbolic_opset10.py", line 892, in quantized_mul x, _, _, _ = symbolic_helper.dequantize_helper(g, x) File "C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\onnx\symbolic_helper.py", line 1609, in dequantize_helper unpacked_qtensors = _unpack_quantized_tensor(qtensor) File "C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\onnx\symbolic_helper.py", line 203, in _unpack_quantized_tensor raise errors.SymbolicValueError( torch.onnx.errors.SymbolicValueError: ONNX symbolic expected the output of `%161 : Tensor = onnx::Sigmoid(%149), scope: lstm_qat_torch.Net::/torch.ao.nn.quantized.modules.rnn.LSTM::lstm/torch.ao.nn.quantizable.modules.rnn._LSTMLayer::layers.0/torch.ao.nn.quantizable.modules.rnn._LSTMSingleLayer::layer_fw/torch.ao.nn.quantizable.modules.rnn.LSTMCell::cell/torch.nn.modules.activation.Sigmoid::forget_gate # C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\activation.py:301:0 ` to be a quantized tensor. Is this likely due to missing support for quantized `onnx::Sigmoid`. Please create an issue on https://github.com/pytorch/pytorch/issues [Caused by the value '161 defined in (%161 : Tensor = onnx::Sigmoid(%149), scope: lstm_qat_torch.Net::/torch.ao.nn.quantized.modules.rnn.LSTM::lstm/torch.ao.nn.quantizable.modules.rnn._LSTMLayer::layers.0/torch.ao.nn.quantizable.modules.rnn._LSTMSingleLayer::layer_fw/torch.ao.nn.quantizable.modules.rnn.LSTMCell::cell/torch.nn.modules.activation.Sigmoid::forget_gate # C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\activation.py:301:0 )' (type 'Tensor') in the TorchScript graph. The containing node has kind 'onnx::Sigmoid'.] 
(node defined in C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\activation.py(301): forward C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\module.py(1543): _slow_forward C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\module.py(1562): _call_impl C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\module.py(1553): _wrapped_call_impl C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\ao\nn\quantizable\modules\rnn.py(78): forward C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\module.py(1543): _slow_forward C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\module.py(1562): _call_impl C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\module.py(1553): _wrapped_call_impl C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\ao\nn\quantizable\modules\rnn.py(153): forward C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\module.py(1543): _slow_forward C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\module.py(1562): _call_impl C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\module.py(1553): _wrapped_call_impl C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\ao\nn\quantizable\modules\rnn.py(204): forward C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\module.py(1543): _slow_forward C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\module.py(1562): _call_impl C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\module.py(1553): _wrapped_call_impl C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\ao\nn\quantizable\modules\rnn.py(366): forward C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\module.py(1543): _slow_forward 
C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\module.py(1562): _call_impl C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\module.py(1553): _wrapped_call_impl d:\笔记\量化\quant_lstm\lstm_qat_torch.py(39): forward C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\module.py(1543): _slow_forward C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\module.py(1562): _call_impl C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\module.py(1553): _wrapped_call_impl C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\jit\_trace.py(1275): trace_module C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\jit\_trace.py(695): _trace_impl C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\jit\_trace.py(1000): trace C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\onnx\utils.py(1124): _pre_trace_quant_model C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\onnx\utils.py(1169): _model_to_graph C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\onnx\utils.py(1648): _export C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\onnx\utils.py(551): export d:/笔记/量化/quant_lstm/torch_qat.py(46): <module> ) Inputs: #0: 149 defined in (%149 : Tensor = onnx::Slice(%gates, %145, %148, %137), scope: lstm_qat_torch.Net::/torch.ao.nn.quantized.modules.rnn.LSTM::lstm/torch.ao.nn.quantizable.modules.rnn._LSTMLayer::layers.0/torch.ao.nn.quantizable.modules.rnn._LSTMSingleLayer::layer_fw/torch.ao.nn.quantizable.modules.rnn.LSTMCell::cell # C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\ao\nn\quantizable\modules\rnn.py:75:0 ) (type 'Tensor') Outputs: #0: 161 defined in (%161 : Tensor = onnx::Sigmoid(%149), scope: 
lstm_qat_torch.Net::/torch.ao.nn.quantized.modules.rnn.LSTM::lstm/torch.ao.nn.quantizable.modules.rnn._LSTMLayer::layers.0/torch.ao.nn.quantizable.modules.rnn._LSTMSingleLayer::layer_fw/torch.ao.nn.quantizable.modules.rnn.LSTMCell::cell/torch.nn.modules.activation.Sigmoid::forget_gate # C:\Users\luohaifeng\Anaconda3\envs\tvm\lib\site-packages\torch\nn\modules\activation.py:301:0 ) (type 'Tensor') cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @mikaylagawarecki
oncall: quantization,module: rnn,low priority,triaged,release notes: nn
low
Critical
2,507,148,406
PowerToys
"Windows Settings" window not snapping to its last FancyZone after Windows startup
### Microsoft PowerToys version 0.84.0 ### Installation method PowerToys auto-update ### Running as admin Yes ### Area(s) with issue? FancyZones ### Steps to reproduce The "Windows Settings" window forgets its last FancyZone after a Windows reboot. After assigning it to a zone again, the window re-opens in the correct FancyZone until the next reboot. ### ✔️ Expected Behavior The "Windows Settings" window opens in its last FancyZone when opened for the first time after a reboot. ### ❌ Actual Behavior The "Windows Settings" window opens as a full-screen window. ### Other Software Windows 11 Home Version 23H2 OS Build 22631.4037
Issue-Bug,Product-FancyZones,Needs-Triage
low
Minor
2,507,168,334
godot
Resource "opened state" in Array doesn't move with the item
### Tested versions 4.3 stable ### System information Godot v4.3.stable - macOS 14.6.1 - Vulkan (Mobile) - integrated Apple M3 Max - Apple M3 Max (14 Threads) ### Issue description When I drag an array item to another position inside the array, the item does move, but whether that item is 'opened' in the inspector doesn't move along with it. If the first object in an array is opened, regardless of whether that object is moved somewhere else or another object moves into that position, the first object will remain opened. This is very confusing. It makes it seem like moving the object wasn't successful. It also makes it hard to keep an overview of the objects. https://github.com/user-attachments/assets/68f2175d-1b62-4d9c-882f-346b72062de4 ### Steps to reproduce Make an array with multiple resources, open one, and then drag the resources around inside the array. The 'opened' objects always remain on the same index. ### Minimal reproduction project (MRP) n/a
bug,topic:editor,usability
low
Minor
2,507,185,233
godot
MSBuild cannot access environment variables (PATH) for NuGet dependencies
### Tested versions - Reproducible in: v4.3.stable.mono.official [77dcf97d8] ### System information Godot v4.3.stable.mono - macOS 14.4.1 - Vulkan (Forward+) - integrated Apple M2 Pro - Apple M2 Pro (12 Threads) ### Issue description My project has a dependency that is installed with NuGet. When I build the Godot project, this NuGet package tries to build an MSBuild target, and fails, because it cannot find `dotnet`, although `dotnet` is installed properly on my system. ### Steps to reproduce I have created an example project with a dependency, added to the `.csproj` file: <PackageReference Include="FlatSharp.Compiler" Version="[7.7.0]" /> When I build the Godot project, this package tries to build an MSBuild target, and fails, because it cannot find `dotnet`. This is the output in Godot's "MSBuild" tab: MSB3073: The command "dotnet --list-sdks" exited with code 127. /Users/theome/.nuget/packages/flatsharp.compiler/7.7.0/build/FlatSharp.Compiler.targets(98,5) This is an excerpt from the `msbuild_log.txt`: Project "nuget-example.csproj" (default targets): Exec: dotnet --list-sdks Exec: /var/folders/w_/mdv4sslx3sg2_xk497c1vz5h0000gn/T/MSBuildTemptheome/tmpb99fda8e72d64fe99d946dee8fd19f35.exec.cmd: line 2: dotnet: command not found /Users/theome/.nuget/packages/flatsharp.compiler/7.7.0/build/FlatSharp.Compiler.targets(98,5): error MSB3073: The command "dotnet --list-sdks" exited with code 127. [/Users/theome/nuget-example/nuget-example.csproj] Done building project "nuget-example.csproj" -- FAILED. 
This is the command in the file `FlatSharp.Compiler.targets` that tries to run `dotnet`: <Exec Command="dotnet --list-sdks" ConsoleToMsBuild="false"> <Output TaskParameter="ConsoleOutput" PropertyName="StdOut" /> </Exec> If I modify the file and add the following command: <Exec Command="echo $PATH" ConsoleToMsBuild="false"> <Output TaskParameter="ConsoleOutput" PropertyName="StdOut" /> </Exec> then I can see in `msbuild_log.txt` that `PATH` doesn't contain the required `dotnet` path: Project "nuget-example.csproj" (default targets): Exec: echo $PATH Exec: /usr/bin:/bin:/usr/sbin:/sbin - `dotnet` is installed correctly on my machine, and I can build and run the Godot project if I don't add the dependency. - If I build and run the project from VS Code, the build process succeeds and the game launches. - If I open Godot from a shell with `open /Applications/Godot_mono.app/` and build and run the project from Godot, it also works. - Setting an explicit `dotnet` path in Editor Settings at `dotnet/editor/custom_exec_path` does not solve the issue. I would expect that the project builds without errors when I launch Godot from the Finder and then use the "Build Project" option in Godot. Would it be possible to add an Editor Setting to explicitly set environment variables for the MSBuild process? ### Minimal reproduction project (MRP) [nuget-example.zip](https://github.com/user-attachments/files/16886607/nuget-example.zip)
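As a possible local stopgap (not from the report, and assuming the targets file in the NuGet package cache can be patched): MSBuild's `Exec` task accepts an `EnvironmentVariables` parameter, so the failing invocation could be given an explicit `PATH`. The `/usr/local/share/dotnet` path below is an assumption about where `dotnet` is installed:

```xml
<!-- Hypothetical patch to FlatSharp.Compiler.targets: give Exec an explicit
     PATH so `dotnet` resolves even when Godot is launched from Finder with
     the minimal GUI environment (/usr/bin:/bin:/usr/sbin:/sbin). -->
<Exec Command="dotnet --list-sdks"
      ConsoleToMsBuild="false"
      EnvironmentVariables="PATH=/usr/local/share/dotnet:/usr/bin:/bin:/usr/sbin:/sbin">
  <Output TaskParameter="ConsoleOutput" PropertyName="StdOut" />
</Exec>
```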
bug,topic:buildsystem,topic:dotnet
low
Critical
2,507,209,973
ui
[feat]: Can automatically display "<Pagination Ellipsis />" in case of multiple "PaginationLink"
### Feature description The pagination component could automatically display `<PaginationEllipsis />` when there are many `PaginationLink` items. For example: ![image](https://github.com/user-attachments/assets/8fb2acf9-2db5-4632-8dc1-fe6a91c77b7d) ### Affected component/components _No response_ ### Additional Context Additional details here... ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues and PRs
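Not part of the request, but a dependency-free sketch of the windowing logic such a feature might use (the helper name `paginationRange` and its signature are hypothetical):

```javascript
// Hypothetical helper: returns the page items to render, inserting "..."
// where pages are elided. `siblings` is how many pages to keep on each
// side of the current page.
function paginationRange(current, total, siblings = 1) {
  // first + last + current + 2*siblings + 2 ellipses
  const windowSize = siblings * 2 + 5;
  if (total <= windowSize) {
    return Array.from({ length: total }, (_, i) => i + 1);
  }
  const left = Math.max(current - siblings, 2);
  const right = Math.min(current + siblings, total - 1);
  const range = [1];
  if (left > 2) range.push("...");
  for (let p = left; p <= right; p++) range.push(p);
  if (right < total - 1) range.push("...");
  range.push(total);
  return range;
}
```

A component could then map over the returned array, rendering `<PaginationEllipsis />` for the `"..."` entries and a `PaginationLink` for each number.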
area: request
low
Minor
2,507,235,658
godot
Godot 4.3 system resource bug
### Tested versions 4.3.stable.steam[77dcf97d8] ### System information Windows 10, intel core i7-12700f, version 4.3.stable.steam[77dcf97d8] ### Issue description The Godot engine uses too much RAM for no apparent reason; even in a blank project it can use up to 8 GB of RAM while sitting idle. ### Steps to reproduce Let the engine run; the RAM usage will slowly increase. ### Minimal reproduction project (MRP) [blank_project.zip](https://github.com/user-attachments/files/16886937/blank_project.zip)
needs testing,performance
low
Critical
2,507,268,018
rust
re-exported macro_export doc(hidden) pub use macro_rules macro not showing up
<!-- Thank you for filing a bug report! 🐛 Please provide a short summary of the bug, along with any information you feel relevant to replicating the bug. -->

Let's say you have two crates:

* Crate A and Crate B
* Crate A imports crate B, and re-exports its modules

In crate B you have something like

```rs
//! crate B : lib.rs
pub mod whatever {
    #[macro_export]
    #[doc(hidden)]
    macro_rules! __foo { ... }

    #[doc(inline)]
    pub use crate::__foo as foo;
}
```

As well as in crate A:

```rs
//! crate A : lib.rs
#[doc(inline)]
pub use ::crate_b::whatever;
```

What I now have: if you open the docs for `crate B`, you can see in module `whatever` that the `foo` macro is there. Nice.

However, if you open the docs of `crate A`, you notice that in `A::whatever` there is no macro `foo` visible at all. You can use it within your code as `A::whatever::foo`, but it doesn't show up in the docs. The editor (rust-analyzer?) even offers it in code hints.

I think this is either a bug, me running into the limits of this hack, or perhaps I'm doing it wrong?

This is a simplified example of what I have in my `rama` crates. For example:

* you can see the `match_service` macro nicely here: https://docs.rs/rama-http/0.2.0-alpha.2/rama_http/service/web/index.html
* however in https://ramaproxy.org/docs/rama/http/service/web/index.html you do not see it :/

And yes, these are two different commits, but trust me, I tried it locally and it's the same. I don't have public edge builds for the individual crates.

repo: https://github.com/plabayo/rama

Anyway, I hope my minimal example above is already enough... ### Meta <!-- If you're using the stable version of the compiler, you should also check if the bug also exists in the beta or nightly versions. 
--> `rustc --version --verbose`: ``` rustc 1.80.1 (3f5fd8dd4 2024-08-06) binary: rustc commit-hash: 3f5fd8dd41153bc5fdca9427e9e05be2c767ba23 commit-date: 2024-08-06 host: aarch64-apple-darwin release: 1.80.1 LLVM version: 18.1.7 ``` As requested I also tested it locally with `rust version 1.83.0-nightly (4ac7bcbaa 2024-09-04)`, but same-same.
T-rustdoc,A-macros,C-bug,A-cross-crate-reexports
low
Critical
2,507,273,209
flutter
When there is a WidgetSpan in a TextSpan, and the RichText contains both Chinese and English, TextAlign.justify cannot take effect
### Steps to reproduce run code sample on the Mac ### Expected results The text should fill the width without leaving any white space ### Actual results ![image](https://github.com/user-attachments/assets/69ca27f0-755f-4077-ab8b-c0aacc381a41) The green row has white space on the right side ### Code sample ```dart import 'package:flutter/material.dart'; void main(List<String> args) { const style = TextStyle( fontFamily: 'PingFang', fontSize: 26, color: Colors.black, ); runApp( MaterialApp( debugShowCheckedModeBanner: false, home: Scaffold( body: Center( child: SizedBox( width: 376, child: Column( mainAxisAlignment: MainAxisAlignment.center, crossAxisAlignment: CrossAxisAlignment.stretch, children: [ Container( color: Colors.red, child: RichText( text: TextSpan( style: style, children: [ TextSpan(text: '为什么'), TextSpan(text: 'flutter text layout is so expensive?'), ], ), textAlign: TextAlign.justify, ), ), Container( color: Colors.green, child: RichText( text: TextSpan( style: style, children: [ TextSpan(text: '为什么'), WidgetSpan( child: Image.network( width: 26, 'https://storage.googleapis.com/cms-storage-bucket/a40ceb6e5d342207de7b.png', ), ), TextSpan(text: 'flutter text layout is so expensive?'), ], ), textAlign: TextAlign.justify, ), ), Container( color: Colors.yellow, child: RichText( text: TextSpan( style: style, children: [ TextSpan(text: 'why '), WidgetSpan( child: Image.network( width: 26, 'https://storage.googleapis.com/cms-storage-bucket/a40ceb6e5d342207de7b.png', ), ), TextSpan(text: 'flutter text layout is so expensive?'), ], ), textAlign: TextAlign.justify, ), ), ], ), ), ), ), ), ); } ``` ### Screenshots or Video _No response_ ### Logs Flutter 3.22.2 • channel stable • https://github.com/flutter/flutter.git Framework • revision 761747bfc5 (3 months ago) • 2024-06-05 22:15:13 +0200 Engine • revision edd8546116 Tools • Dart 3.4.3 • DevTools 2.34.3 ### Flutter Doctor output ``` [✓] Flutter (Channel stable, 3.22.2, on macOS 14.6.1 23G93 darwin-arm64, 
locale zh-Hans-CN) • Flutter version 3.22.2 on channel stable at /Users/zjg/fvm/versions/3.22.2 • Upstream repository https://github.com/flutter/flutter.git • Framework revision 761747bfc5 (3 months ago), 2024-06-05 22:15:13 +0200 • Engine revision edd8546116 • Dart version 3.4.3 • DevTools version 2.34.3 • Pub download mirror https://pub.flutter-io.cn • Flutter download mirror https://storage.flutter-io.cn [✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0) • Android SDK at /Users/zjg/Library/Android/sdk • Platform android-34, build-tools 34.0.0 • ANDROID_SDK_ROOT = /Users/zjg/Library/Android/sdk • Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java • Java version OpenJDK Runtime Environment (build 17.0.7+0-17.0.7b1000.6-10550314) • All Android licenses accepted. [✓] Xcode - develop for iOS and macOS (Xcode 15.3) • Xcode at /Applications/Xcode.app/Contents/Developer • Build 15E204a • CocoaPods version 1.14.2 [✓] Chrome - develop for the web • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome [✓] Android Studio (version 2023.1) • Android Studio at /Applications/Android Studio.app/Contents • Flutter plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/6351-dart • Java version OpenJDK Runtime Environment (build 17.0.7+0-17.0.7b1000.6-10550314) [✓] VS Code (version 1.92.2) • VS Code at /Applications/Visual Studio Code.app/Contents • Flutter extension version 3.94.0 [✓] Connected device (5 available) • LE2110 (mobile) • 192.168.0.104:5555 • android-arm64 • Android 14 (API 34) • iPad Air (5th generation) (mobile) • 9525D3C4-C31B-4DF2-B024-7AC586E6FDA6 • ios • com.apple.CoreSimulator.SimRuntime.iOS-17-4 (simulator) • macOS (desktop) • macos • darwin-arm64 • macOS 14.6.1 23G93 darwin-arm64 • Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.6.1 23G93 
darwin-arm64 • Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.119 [✓] Network resources • All expected network resources are available. • No issues found! ```
framework,a: internationalization,a: typography,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.24,found in release: 3.25
low
Critical
2,507,274,398
ollama
Add Dracarys-Llama-3.1-70B-Instruct support
Hello, thanks for the awesome work on Ollama. It would be nice to add support for the [Dracarys-Llama-3.1-70B-Instruct](https://huggingface.co/abacusai/Dracarys-Llama-3.1-70B-Instruct) model from [abacus.ai](https://abacus.ai/). This is a coding fine-tune of Llama-3.1-70B-Instruct that manages a high score on LiveCodeBench. Thanks in advance.
model request
low
Minor
2,507,275,752
ollama
Loading a smaller context model after a bigger model is loaded
### What is the issue? ## Hardware Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 64 On-line CPU(s) list: 0-63 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz CPU family: 6 Model: 85 Thread(s) per core: 2 Core(s) per socket: 16 Socket(s): 2 3 x Quadro RTX 5000 ![Screenshot from 2024-09-05 14-53-01](https://github.com/user-attachments/assets/160c1e28-ca5e-4499-bb05-481a4528024d) ## Error The error below happens when **llama3.1** is already loaded and I then load **smollm**; the two models have different context lengths. Both models are loaded into the GPU on request, but the API request produces the error below ![Screenshot from 2024-09-05 14-51-52](https://github.com/user-attachments/assets/769c484c-1399-4c65-8e93-4dd331eab024) Both models work fine concurrently using the ollama CLI ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.3.5
bug
low
Critical
2,507,290,388
godot
Layout loading problem.
### Tested versions v4.4.dev1.official [28a72fa43] ### System information w10 64 ### Issue description Loading a layout that was saved with the script editor docked to the main editor does not dock the script editor if it is undocked. ### Steps to reproduce 1 - Disable the "single window" mode. 2 - Create and save a layout where the script editor is attached to the main editor. 3 - Detach the script editor into a separate window. 4 - Now load the layout saved in step 2 where the script editor is attached to the main editor. The layout loading will not attach the script editor that you detached from the main editor in step number 3. ### Minimal reproduction project (MRP) ...
topic:editor,needs testing
low
Minor
2,507,295,641
react
Bug: First render doesn't create DOM nodes before next javascript is executed in script
This may have been discussed elsewhere but I wasn't able to find anything. With the update to using `createRoot` in React 18, the DOM is created asynchronously, which means any code running after `root.render()` cannot depend on the DOM that React is creating. React version: 18.2.0 ## Steps To Reproduce ```js const root = ReactDOM.createRoot(document.getElementById("app")); root.render( React.createElement("div", { id: "reactChild" }, "Rendered By React") ); document.getElementById("reactChild").innerHTML = "Replaced By VanillaJS"; // this errors ``` ## The current behavior JS will error because `document.getElementById("reactChild")` is null and not found ## The expected behavior React will render first and then `document.getElementById("reactChild")` will execute find the node [Link to code example](https://codesandbox.io/p/sandbox/2njxcj?layout=%257B%2522sidebarPanel%2522%253A%2522EXPLORER%2522%252C%2522rootPanelGroup%2522%253A%257B%2522direction%2522%253A%2522horizontal%2522%252C%2522contentType%2522%253A%2522UNKNOWN%2522%252C%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522id%2522%253A%2522ROOT_LAYOUT%2522%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522contentType%2522%253A%2522UNKNOWN%2522%252C%2522direction%2522%253A%2522vertical%2522%252C%2522id%2522%253A%2522cm0p0a5am00063b6igau3mza0%2522%252C%2522sizes%2522%253A%255B100%255D%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522contentType%2522%253A%2522EDITOR%2522%252C%2522direction%2522%253A%2522horizontal%2522%252C%2522id%2522%253A%2522EDITOR%2522%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL%2522%252C%2522contentType%2522%253A%2522EDITOR%2522%252C%2522id%2522%253A%2522cm0p0a5am00023b6i7pymhzso%2522%257D%255D%257D%252C%257B%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522contentType%2522%253A%2522SHELLS%2522%252C%2522direction%2522%253A%2522horizontal%2522%252C%2522id%2522%253A%2522SHELLS%2522%252C%2522panels%2522%
253A%255B%257B%2522type%2522%253A%2522PANEL%2522%252C%2522contentType%2522%253A%2522SHELLS%2522%252C%2522id%2522%253A%2522cm0p0a5am00033b6i48103piz%2522%257D%255D%257D%255D%257D%252C%257B%2522type%2522%253A%2522PANEL_GROUP%2522%252C%2522contentType%2522%253A%2522DEVTOOLS%2522%252C%2522direction%2522%253A%2522vertical%2522%252C%2522id%2522%253A%2522DEVTOOLS%2522%252C%2522panels%2522%253A%255B%257B%2522type%2522%253A%2522PANEL%2522%252C%2522contentType%2522%253A%2522DEVTOOLS%2522%252C%2522id%2522%253A%2522cm0p0a5am00053b6i9ksfo73k%2522%257D%255D%257D%255D%252C%2522sizes%2522%253A%255B50%252C50%255D%257D%252C%2522tabbedPanels%2522%253A%257B%2522cm0p0a5am00023b6i7pymhzso%2522%253A%257B%2522tabs%2522%253A%255B%257B%2522id%2522%253A%2522cm0p0a5am00013b6i0zwjrq7t%2522%252C%2522mode%2522%253A%2522permanent%2522%252C%2522type%2522%253A%2522FILE%2522%252C%2522filepath%2522%253A%2522%252Findex.html%2522%252C%2522state%2522%253A%2522IDLE%2522%252C%2522initialSelections%2522%253A%255B%257B%2522startLineNumber%2522%253A20%252C%2522startColumn%2522%253A1%252C%2522endLineNumber%2522%253A20%252C%2522endColumn%2522%253A1%257D%255D%257D%255D%252C%2522id%2522%253A%2522cm0p0a5am00023b6i7pymhzso%2522%252C%2522activeTabId%2522%253A%2522cm0p0a5am00013b6i0zwjrq7t%2522%257D%252C%2522cm0p0a5am00053b6i9ksfo73k%2522%253A%257B%2522tabs%2522%253A%255B%257B%2522id%2522%253A%2522cm0p0a5am00043b6iord4hwnv%2522%252C%2522mode%2522%253A%2522permanent%2522%252C%2522type%2522%253A%2522UNASSIGNED_PORT%2522%252C%2522port%2522%253A0%257D%255D%252C%2522id%2522%253A%2522cm0p0a5am00053b6i9ksfo73k%2522%252C%2522activeTabId%2522%253A%2522cm0p0a5am00043b6iord4hwnv%2522%257D%252C%2522cm0p0a5am00033b6i48103piz%2522%253A%257B%2522tabs%2522%253A%255B%255D%252C%2522id%2522%253A%2522cm0p0a5am00033b6i48103piz%2522%257D%257D%252C%2522showDevtools%2522%253Atrue%252C%2522showShells%2522%253Afalse%252C%2522showSidebar%2522%253Atrue%252C%2522sidebarPanelSize%2522%253A15%257D) Is this just an expected result with React 18+? 
If you fall back to `ReactDOM.render()` instead, it works as expected.
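Not from the report — a dependency-free model of the timing, where `render` only records work and a later commit applies it (mirroring how `createRoot().render()` defers DOM mutation); the real synchronous escape hatch in React 18 is `flushSync` from `react-dom`:

```javascript
// Toy model: "render" queues work; "commit" applies it later, like React 18's
// concurrent root. All names here are illustrative, not React APIs.
const pending = [];
const dom = new Map();
const render = (id, html) => { pending.push([id, html]); };  // no DOM yet
const commit = () => { for (const [id, html] of pending.splice(0)) dom.set(id, html); };
const flushSyncLike = (fn) => { fn(); commit(); };           // force the commit now

render("reactChild", "Rendered");
const seenImmediately = dom.get("reactChild"); // undefined — the error in this report

flushSyncLike(() => render("other", "Now"));
const seenAfterFlush = dom.get("other");       // committed synchronously
```

With real React this corresponds to `flushSync(() => root.render(...))`, after which `document.getElementById("reactChild")` is non-null — at the cost of losing the batching benefits of the concurrent root.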
Status: Unconfirmed
low
Critical
2,507,326,215
PowerToys
Save settings for use on multiple machines
### Description of the new feature / enhancement A file that can be saved locally or on a network share that stores PowerToys settings (for instance, what shortcuts you have set up). ### Scenario when this would be used? When you use PowerToys on multiple machines and want the same settings on each, or when you reinstall. ### Supporting information _No response_
Needs-Triage,Needs-Team-Response
low
Minor
2,507,406,074
node
Should we be calling `SSL_CTX_add_client_CA()` always when a custom CA is set?
When specifying a `ca` option for TLS's `createSecureContext()`, we call `SSL_CTX_add_client_CA()` since the early days of TLS in Node.js: 2a61e1cd4979fcab4f2bf58a7b21f685c42f641e From the [docs](https://docs.openssl.org/3.3/man3/SSL_CTX_set0_CA_list/#description) for that function: > In most cases it is not necessary to set CA names on the client side. The list of CA names that are acceptable to the client will be sent in plaintext to the server. This has privacy implications and may also have performance implications if the list is large. This optional capability was introduced as part of TLSv1.3 and therefore setting CA names on the client side will have no impact if that protocol version has been disabled. Most servers do not need this and so this should be avoided unless required. @tniessen @nodejs/security-triage
tls
low
Major
2,507,429,264
langchain
ChatGoogleGenerativeAI: **TypeError** when using @tool-decorated methods to perform tool calling with Gemini.
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code ``` from langchain_google_genai import ChatGoogleGenerativeAI model_name = "gemini-1.5-flash-001" #"gemini-1.5-pro-latest" llm = ChatGoogleGenerativeAI(model=model_name, google_api_key=os.getenv('GOOGLE_API_KEY_2'), max_output_tokens = 1024, temperature = 0, verbose=False, ) llm_with_tools = llm.bind(functions = [search_web, split_documents ] ) responce = llm_with_tools.invoke("Search for LangChain on DuckDuckGo") ``` #Tool definitions ``` @tool(parse_docstring=True) def split_documents( chunk_size: int, knowledge_base: Annotated[Union[List[LangchainDocument], LangchainDocument], InjectedToolArg], chunk_overlap: Optional[int] = None, tokenizer_name: Annotated[Optional[str], InjectedToolArg] = config.EMBEDDING_MODEL_NAME ) -> List[LangchainDocument]: """ This method is an implementation of a text splitter that uses a HuggingFace tokenizer to count length. This method is to be called for chunking a Langchain Document/List of Langchain Documents. Returns a list of chunked LangChain Document(s) Args: chunk_size: "Size of Chunks to be created, in number of tokens. Depends on the Context window length of the Embedding model" knowledge_base: "List of Langchain Document(s) to process. 
To be passed at run time chunk_overlap: "Size of overlap between Chunks, in number of tokens" tokenizer_name: "Name of the tokenizer model to be used for tokenization before chunking the Langchain Document(s) in knowledge_base" """ # Tool Code @tool(parse_docstring=True) def search_web( query:str, engine:Optional[str]="Google", num_results:Optional[int]=5, truncate_threshold:Optional[int]=None, ) -> List[LangchainDocument]: """ Performs web search for the passed query using the desired search engine's API. This function will then use web scraping to get page content and return them as a list of LangChain documents. Args: query:"Query to perform Web search for" engine: "The search engine to use for the web search" num_results: "The number of search results to return from web search" truncate_threshold: "Threshold in number of characters to truncate each web pages' content" """ # Tool Code ``` ### Error Message and Stack Trace (if applicable) ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-12-af095bbdae0b>](https://localhost:8080/#) in <cell line: 1>() ----> 1 responce = llm_with_tools.invoke("Search for LangChain on DuckDuckGo") 10 frames [/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs) 5090 **kwargs: Optional[Any], 5091 ) -> Output: -> 5092 return self.bound.invoke( 5093 input, 5094 self._merge_configs(config), [/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py](https://localhost:8080/#) in invoke(self, input, config, stop, **kwargs) 275 return cast( 276 ChatGeneration, --> 277 self.generate_prompt( 278 [self._convert_input(input)], 279 stop=stop, [/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py](https://localhost:8080/#) in generate_prompt(self, prompts, stop, callbacks, **kwargs) 775 ) -> LLMResult: 776 
prompt_messages = [p.to_messages() for p in prompts] --> 777 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) 778 779 async def agenerate_prompt( [/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py](https://localhost:8080/#) in generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs) 632 if run_managers: 633 run_managers[i].on_llm_error(e, response=LLMResult(generations=[])) --> 634 raise e 635 flattened_outputs = [ 636 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item] [/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py](https://localhost:8080/#) in generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs) 622 try: 623 results.append( --> 624 self._generate_with_cache( 625 m, 626 stop=stop, [/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py](https://localhost:8080/#) in _generate_with_cache(self, messages, stop, run_manager, **kwargs) 844 else: 845 if inspect.signature(self._generate).parameters.get("run_manager"): --> 846 result = self._generate( 847 messages, stop=stop, run_manager=run_manager, **kwargs 848 ) [/usr/local/lib/python3.10/dist-packages/langchain_google_vertexai/chat_models.py](https://localhost:8080/#) in _generate(self, messages, stop, run_manager, stream, **kwargs) 1163 if not self._is_gemini_model: 1164 return self._generate_non_gemini(messages, stop=stop, **kwargs) -> 1165 return self._generate_gemini( 1166 messages=messages, 1167 stop=stop, [/usr/local/lib/python3.10/dist-packages/langchain_google_vertexai/chat_models.py](https://localhost:8080/#) in _generate_gemini(self, messages, stop, run_manager, **kwargs) 1319 **kwargs: Any, 1320 ) -> ChatResult: -> 1321 request = self._prepare_request_gemini(messages=messages, stop=stop, **kwargs) 1322 response = _completion_with_retry( 1323 
self.prediction_client.generate_content, [/usr/local/lib/python3.10/dist-packages/langchain_google_vertexai/chat_models.py](https://localhost:8080/#) in _prepare_request_gemini(self, messages, stop, stream, tools, functions, tool_config, safety_settings, cached_content, tool_choice, **kwargs) 1234 ) -> GenerateContentRequest: 1235 system_instruction, contents = _parse_chat_history_gemini(messages) -> 1236 formatted_tools = self._tools_gemini(tools=tools, functions=functions) 1237 if tool_config: 1238 tool_config = self._tool_config_gemini(tool_config=tool_config) [/usr/local/lib/python3.10/dist-packages/langchain_google_vertexai/chat_models.py](https://localhost:8080/#) in _tools_gemini(self, tools, functions) 1375 ) 1376 if tools: -> 1377 return [_format_to_gapic_tool(tools)] 1378 if functions: 1379 return [_format_to_gapic_tool(functions)] [/usr/local/lib/python3.10/dist-packages/langchain_google_vertexai/functions_utils.py](https://localhost:8080/#) in _format_to_gapic_tool(tools) 214 ): 215 fd = _format_to_gapic_function_declaration(tool) --> 216 gapic_tool.function_declarations.append(fd) 217 continue 218 # _ToolDictLike TypeError: Parameter to MergeFrom() must be instance of same class: expected google.cloud.aiplatform.v1beta1.FunctionDeclaration got FunctionDeclaration. ``` ### Description I am trying to use ChatGoogleGenerativeAI for tool calling and Agentic AI implimentation. When exicuting the below code I expect it to call search_web with appropriate arguments. ` responce = llm_with_tools.invoke("Search for LangChain on DuckDuckGo")` But instead it give me the following error. `TypeError: Parameter to MergeFrom() must be instance of same class: expected google.cloud.aiplatform.v1beta1.FunctionDeclaration got FunctionDeclaration.` I am using langchain's @tool decorator to create these tools. 
I assume this should be compatible internally ### System Info System Information ------------------ > OS: Linux > OS Version: #1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024 > Python Version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] Package Information ------------------- > langchain_core: 0.2.38 > langchain: 0.2.15 > langchain_community: 0.2.15 > langsmith: 0.1.114 > langchain_google_genai: 1.0.10 > langchain_google_vertexai: 1.0.10 > langchain_groq: 0.1.9 > langchain_text_splitters: 0.2.4 Optional packages not installed ------------------------------- > langgraph > langserve Other Dependencies ------------------ > aiohttp: 3.10.5 > anthropic[vertexai]: Installed. No version info available. > async-timeout: 4.0.3 > dataclasses-json: 0.6.7 > google-cloud-aiplatform: 1.65.0 > google-cloud-storage: 2.18.2 > google-generativeai: 0.7.2 > groq: 0.11.0 > httpx: 0.27.2 > httpx-sse: 0.4.0 > jsonpatch: 1.33 > langchain-mistralai: Installed. No version info available. > numpy: 1.26.4 > orjson: 3.10.7 > packaging: 24.1 > pillow: 10.4.0 > pydantic: 2.8.2 > PyYAML: 6.0.2 > requests: 2.32.3 > SQLAlchemy: 2.0.34 > tenacity: 8.5.0 > typing-extensions: 4.12.2
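### Additional context

My guess (unverified) is that two separately generated protobuf modules each define a class named `FunctionDeclaration`, and `MergeFrom()` rejects any message whose class identity differs from the expected `google.cloud.aiplatform.v1beta1` one, even though the names match. A minimal stand-alone sketch of that failure mode — the class and function names below are stand-ins I made up, not the real Google proto types:

```python
# Hypothetical stand-ins for two generated proto modules that each define a
# class named "FunctionDeclaration" with a distinct identity.
class VertexFunctionDeclaration:
    """Stand-in for google.cloud.aiplatform.v1beta1.FunctionDeclaration."""


class GenAIFunctionDeclaration:
    """Stand-in for the other integration's FunctionDeclaration class."""


def append_declaration(declarations: list, fd: object) -> None:
    """Mimic the strict same-class check that protobuf's MergeFrom() performs."""
    if type(fd) is not VertexFunctionDeclaration:
        raise TypeError(
            "Parameter to MergeFrom() must be instance of same class: "
            f"expected VertexFunctionDeclaration got {type(fd).__name__}"
        )
    declarations.append(fd)


decls: list = []
append_declaration(decls, VertexFunctionDeclaration())     # accepted
try:
    append_declaration(decls, GenAIFunctionDeclaration())  # same name, wrong class
except TypeError as exc:
    print(exc)
```

If that diagnosis is right, then having the tools formatted by the same integration package that sends the request (e.g. via the model's `bind_tools` method rather than `bind(functions=...)`) might avoid mixing the two proto families — but that is an assumption on my part, not a verified fix.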
🤖:bug,investigate
low
Critical