| id string (4 to 10 chars) | text string (4 to 2.14M chars) | source string (2 classes) | created timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added string (2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata dict |
|---|---|---|---|---|---|
607000025 | Flink SQL gateway seems not to support submitting a job for a stream table
Failed to submit a SQL job for a Kafka stream table; exception: Cannot generate a valid execution plan for the given query. But I tried the same SQL in the Flink client, and it works well.
What exception do you get? And how do you submit the SQL job?
It looks like a jar dependency problem.
I got something like:
com.ververica.flink.table.gateway.utils.SqlExecutionException: Invalid SQL query.
at com.ververica.flink.table.gateway.operation.SelectOperation.executeQueryInternal(SelectOperation.java:223) ~[flink-sql-gateway-0.3-SNAPSHOT.jar:?]
at com.ververica.flink.table.gateway.operation.SelectOperation.execute(SelectOperation.java:87) ~[flink-sql-gateway-0.3-SNAPSHOT.jar:?]
at com.ververica.flink.table.gateway.rest.session.Session.runStatement(Session.java:106) ~[flink-sql-gateway-0.3-SNAPSHOT.jar:?]
at com.ververica.flink.table.gateway.rest.handler.StatementExecuteHandler.handleRequest(StatementExecuteHandler.java:81) ~[flink-sql-gateway-0.3-SNAPSHOT.jar:?]
at com.ververica.flink.table.gateway.rest.handler.AbstractRestHandler.respondToRequest(AbstractRestHandler.java:77) ~[flink-sql-gateway-0.3-SNAPSHOT.jar:?]
at com.ververica.flink.table.gateway.rest.handler.AbstractHandler.channelRead0(AbstractHandler.java:178) [flink-sql-gateway-0.3-SNAPSHOT.jar:?]
at com.ververica.flink.table.gateway.rest.handler.AbstractHandler.channelRead0(AbstractHandler.java:75) [flink-sql-gateway-0.3-SNAPSHOT.jar:?]
at org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.runtime.rest.handler.router.RouterHandler.routed(RouterHandler.java:115) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:94) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:55) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.runtime.rest.FileUploadHandler.channelRead0(FileUploadHandler.java:208) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.runtime.rest.FileUploadHandler.channelRead0(FileUploadHandler.java:69) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [flink-dist_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [flink-dist_2.12-1.13.0.jar:1.13.0]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.lang.IllegalArgumentException: Mismatch between configured runtime mode and actual runtime mode. Currently, the 'execution.runtime-mode' can only be set when instantiating the table environment. Subsequent changes are not supported. Please instantiate a new TableEnvironment if necessary.
at org.apache.flink.table.planner.delegation.BatchPlanner.validateAndOverrideConfiguration(BatchPlanner.scala:173) ~[flink-table-blink_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:162) ~[flink-table-blink_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1516) ~[flink-table_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1508) ~[flink-table_2.12-1.13.0.jar:1.13.0]
at org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:547) ~[flink-table_2.12-1.13.0.jar:1.13.0]
at com.ververica.flink.table.gateway.context.ExecutionContext.lambda$createPipeline$1(ExecutionContext.java:235) ~[flink-sql-gateway-0.3-SNAPSHOT.jar:?]
at com.ververica.flink.table.gateway.context.ExecutionContext.wrapClassLoader(ExecutionContext.java:202) ~[flink-sql-gateway-0.3-SNAPSHOT.jar:?]
at com.ververica.flink.table.gateway.context.ExecutionContext.createPipeline(ExecutionContext.java:232) ~[flink-sql-gateway-0.3-SNAPSHOT.jar:?]
at com.ververica.flink.table.gateway.operation.SelectOperation.executeQueryInternal(SelectOperation.java:216) ~[flink-sql-gateway-0.3-SNAPSHOT.jar:?]
| gharchive/issue | 2020-04-26T12:24:57 | 2025-04-01T06:46:10.110656 | {
"authors": [
"GANJE",
"felixzh2020",
"godfreyhe",
"weilanying"
],
"repo": "ververica/flink-sql-gateway",
"url": "https://github.com/ververica/flink-sql-gateway/issues/41",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1351222537 | iv key fixed
Darwin/gtar: sshtars/ssh.tar: Cannot open: No such file or directory
Darwin/gtar: Error is not recoverable: exiting now
failed
Make sure you clone recursively (e.g. git clone --recursive)
also run hdiutil detach /tmp/SSHRD
| gharchive/issue | 2022-08-25T17:23:11 | 2025-04-01T06:46:10.114820 | {
"authors": [
"Brayan-Villa",
"verygenericname"
],
"repo": "verygenericname/SSHRD_Script",
"url": "https://github.com/verygenericname/SSHRD_Script/issues/21",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
614700879 | document processing: double invocation - update documentation
Using https://github.com/vespa-engine/sample-apps/tree/master/vespa-cloud/album-recommendation-docproc
1588936248.407807 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/700 container Container.ai.vespa.example.album.LyricsDocumentProcessor info In process
1588936248.422296 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/700 container Container.ai.vespa.example.album.LyricsDocumentProcessor info Added to requests pending: 1
1588936248.422867 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/700 container Container.ai.vespa.example.album.LyricsDocumentProcessor info Request pending ID: 1, Progress.LATER
1588936248.453697 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/703 container Container.ai.vespa.example.album.LyricsDocumentProcessor info In process
1588936248.466975 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/703 container Container.ai.vespa.example.album.LyricsDocumentProcessor info Request pending ID: 1, Progress.LATER
1588936248.476168 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/704 container Container.ai.vespa.example.album.LyricsDocumentProcessor info In handleResponse
1588936248.477377 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/704 container Container.ai.vespa.example.album.LyricsDocumentProcessor info Async response to put or get, requestID: 1
1588936248.477641 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/704 container Container.ai.vespa.example.album.LyricsDocumentProcessor info Found lyrics for : document 'id:mynamespace:lyrics::1' of type 'lyrics'
1588936248.487219 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/700 container Container.ai.vespa.example.album.LyricsDocumentProcessor info In process
1588936248.487786 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/700 container Container.ai.vespa.example.album.LyricsDocumentProcessor info Set lyrics, Progress.DONE
1588936248.500432 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/700 container Container.com.yahoo.documentapi.messagebus.protocol.DocumentRouteSelectorPolicy config Selector for route 'musiccluster' is '(music) or (lyrics)'.
then adding to services.xml:
<document-processing cluster="default" />
<document-processing cluster="default" chain="default" />
log from next run:
1588936391.386205 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/723 container Container.ai.vespa.example.album.LyricsDocumentProcessor info In process
1588936391.401065 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/723 container Container.ai.vespa.example.album.LyricsDocumentProcessor info Added to requests pending: 1
1588936391.401291 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/723 container Container.ai.vespa.example.album.LyricsDocumentProcessor info Request pending ID: 1, Progress.LATER
1588936391.415627 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/726 container Container.ai.vespa.example.album.LyricsDocumentProcessor info In handleResponse
1588936391.416342 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/726 container Container.ai.vespa.example.album.LyricsDocumentProcessor info Async response to put or get, requestID: 1
1588936391.416487 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/726 container Container.ai.vespa.example.album.LyricsDocumentProcessor info Found lyrics for : document 'id:mynamespace:lyrics::1' of type 'lyrics'
1588936391.421766 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/727 container Container.ai.vespa.example.album.LyricsDocumentProcessor info In process
1588936391.422327 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/727 container Container.ai.vespa.example.album.LyricsDocumentProcessor info Set lyrics, Progress.DONE
1588936391.435244 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/727 container Container.com.yahoo.documentapi.messagebus.protocol.DocumentRouteSelectorPolicy config Selector for route 'musiccluster' is '(music) or (lyrics)'.
1588936391.451431 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/723 container Container.ai.vespa.example.album.LyricsDocumentProcessor info In process
1588936391.452487 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/723 container Container.ai.vespa.example.album.LyricsDocumentProcessor info Added to requests pending: 2
1588936391.452548 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/723 container Container.ai.vespa.example.album.LyricsDocumentProcessor info Request pending ID: 2, Progress.LATER
1588936391.453924 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/731 container Container.ai.vespa.example.album.LyricsDocumentProcessor info In handleResponse
1588936391.455006 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/731 container Container.ai.vespa.example.album.LyricsDocumentProcessor info Async response to put or get, requestID: 2
1588936391.455052 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/731 container Container.ai.vespa.example.album.LyricsDocumentProcessor info Found lyrics for : document 'id:mynamespace:lyrics::1' of type 'lyrics'
1588936391.472769 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/727 container Container.ai.vespa.example.album.LyricsDocumentProcessor info In process
1588936391.473339 h624a.dev.aws-us-east-1c.vespa-external.aws.oath.cloud 883/727 container Container.ai.vespa.example.album.LyricsDocumentProcessor info Set lyrics, Progress.DONE
I don't get why the processing is run twice
In the first case, it uses chain.indexing, in the latter chain.default
[1588694607.592] Recognized 'default' as route 'default/chain.default indexing'.
[1588694607.596] Recognized 'default/chain.default' as HopBlueprint(selector = { '[LoadBalancer:cluster=default;session=chain.default]' }, recipients = { }, ignoreResult = false).
[1588694607.603] Running routing policy 'LoadBalancer'.
[1588694607.612] Component 'tcp/vespa-container:19101/chain.default' selected by policy 'LoadBalancer'.
[1588694607.613] Resolving 'tcp/vespa-container:19101/chain.default indexing'.
[1588694607.661] Sending message (version 7.204.28) from client to 'tcp/vespa-container:19101/chain.default' with 179.941 seconds timeout.
[1588694607.662] Message (type 100004) received at 'default/container.0' for session 'chain.default'.
[1588694607.688] Recognized 'indexing' as HopBlueprint(selector = { '[DocumentRouteSelector]' }, recipients = { 'musiccluster' }, ignoreResult = false).
[1588694607.688] Running routing policy 'DocumentRouteSelector'.
[1588694607.688] Component '[MessageType:musiccluster]' selected by policy 'DocumentRouteSelector'.
[1588694607.688] Resolving '[MessageType:musiccluster]'.
[1588694607.688] Running routing policy 'MessageType'.
[1588694607.688] Component 'musiccluster-index' selected by policy 'MessageType'.
[1588694607.688] Resolving 'musiccluster-index'.
[1588694607.689] Recognized 'musiccluster-index' as route 'default/chain.indexing [Content:cluster=musiccluster]'.
[1588694607.689] Recognized 'default/chain.indexing' as HopBlueprint(selector = { '[LoadBalancer:cluster=default;session=chain.indexing]' }, recipients = { }, ignoreResult = false).
[1588694607.689] Running routing policy 'LoadBalancer'.
[1588694607.689] Component 'tcp/vespa-container:19101/chain.indexing' selected by policy 'LoadBalancer'.
[1588694607.689] Resolving 'tcp/vespa-container:19101/chain.indexing [Content:cluster=musiccluster]'.
[1588694607.693] Sending message (version 7.204.28) from 'default/container.0' to 'tcp/vespa-container:19101/chain.indexing' with 179.97 seconds timeout.
[1588694607.694] Message (type 100004) received at 'default/container.0' for session 'chain.indexing'.
[1588694607.696] Running routing policy 'Content'.
[1588694607.696] Selecting route
[1588694607.696] No cluster state cached. Sending to random distributor.
vs
[1588694738.472] Recognized 'default' as route 'default/chain.default indexing'.
[1588694738.477] Recognized 'default/chain.default' as HopBlueprint(selector = { '[LoadBalancer:cluster=default;session=chain.default]' }, recipients = { }, ignoreResult = false).
[1588694738.482] Running routing policy 'LoadBalancer'.
[1588694738.492] Component 'tcp/vespa-container:19101/chain.default' selected by policy 'LoadBalancer'.
[1588694738.493] Resolving 'tcp/vespa-container:19101/chain.default indexing'.
[1588694738.542] Sending message (version 7.204.28) from client to 'tcp/vespa-container:19101/chain.default' with 179.947 seconds timeout.
[1588694738.543] Message (type 100004) received at 'default/container.0' for session 'chain.default'.
[1588694738.583] Recognized 'indexing' as HopBlueprint(selector = { '[DocumentRouteSelector]' }, recipients = { 'musiccluster' }, ignoreResult = false).
[1588694738.597] Running routing policy 'DocumentRouteSelector'.
[1588694738.597] Component '[MessageType:musiccluster]' selected by policy 'DocumentRouteSelector'.
[1588694738.597] Resolving '[MessageType:musiccluster]'.
[1588694738.608] Running routing policy 'MessageType'.
[1588694738.608] Component 'musiccluster-index' selected by policy 'MessageType'.
[1588694738.608] Resolving 'musiccluster-index'.
[1588694738.608] Recognized 'musiccluster-index' as route 'default/chain.default [Content:cluster=musiccluster]'.
[1588694738.608] Recognized 'default/chain.default' as HopBlueprint(selector = { '[LoadBalancer:cluster=default;session=chain.default]' }, recipients = { }, ignoreResult = false).
[1588694738.608] Running routing policy 'LoadBalancer'.
[1588694738.608] Component 'tcp/vespa-container:19101/chain.default' selected by policy 'LoadBalancer'.
[1588694738.608] Resolving 'tcp/vespa-container:19101/chain.default [Content:cluster=musiccluster]'.
[1588694738.613] Sending message (version 7.204.28) from 'default/container.0' to 'tcp/vespa-container:19101/chain.default' with 179.931 seconds timeout.
[1588694738.613] Message (type 100004) received at 'default/container.0' for session 'chain.default'.
[1588694738.653] Running routing policy 'Content'.
[1588694738.653] Selecting route
[1588694738.653] No cluster state cached. Sending to random distributor.
ref http://localhost:4000/documentation/reference/services-content.html#document-processing :
document-processing is not needed to insert a docproc into the indexing chain; I guess this is for the complex case of multiple clusters.
First step is to look at what config is generated.
Also look at what happens if the cluster is given another name than "default".
The docs seem to suggest that you should do either:
{noformat}
and no document-processing inside
{noformat}
in this case you get an extra "chain.default" that is run before "chain.indexing"
or, for better control, do this:
{noformat}
<content id="musiccluster" version="1.0">
<redundancy>1</redundancy>
<documents>
<document type="music" mode="index" />
<document type="lyrics" mode="index" />
<document-processing cluster="whatever" chain="mine" />
</documents>
<nodes> <node distribution-key='1' hostalias='node1'/> </nodes>
</content>
{noformat}
to get a single chain that runs both your custom processor and the built-in indexing.
if you do this:
<content id="musiccluster" version="1.0">
<redundancy>1</redundancy>
<documents>
<document type="music" mode="index" />
<document type="lyrics" mode="index" />
<document-processing cluster="default" chain="default" />
</documents>
you will run the chain twice.
First because it was named "default" and therefore implicitly added before indexing (the doc says: "Note that the document processing chain must be called default to automatically be included in the default route.")
Then the second time because you asked vespa to use that chain as its indexing chain.
Does this make things clear?
Thanks @arnej27959!
I am taking the ticket to update the documentation
sample app and doc updated https://github.com/vespa-engine/documentation/pull/1057
I ran into the same problem again, in a different incarnation - my notes here for later addition to FAQ:
Added a synthetic field gram_title:
schema doc {
    field gram_title type string {
        indexing: input title | index | summary
        match {
            gram
            gram-size: 3
        }
    }
    document doc {
Document processing config:
<document-processing>
<chain id="default" inherits="indexing">
<documentprocessor id="ai.vespa.cloud.docsearch.OutLinksDocumentProcessor" bundle="vespacloud-docsearch"/>
</chain>
</document-processing>
When feeding, got:
"[UNKNOWN(252001) @ tcp/vespa-example:19101/chain.indexing]: Processing failed. Error message: java.lang.IllegalArgumentException: Field 'gram_content' is not part of the declared document type 'doc'. -- See Vespa log for details. "
This highlighted the problem in services.xml: a double run through the chain. The second time through, the synthetic field had already been added, causing the error.
In short, using <chain id="default" inherits="indexing"> is wrong: naming the chain "default" already puts it before the default indexing chain, and the inherits attribute adds the indexing chain steps to it as well.
To fix, either name the chain differently (and keep the inherit), or remove the inherit and keep the name "default". The latter does not change the feeding endpoint.
Another tip is tracelevel:
curl -H Content-Type:application/json --data-binary @docs.json $ENDPOINT/document/v1/mynamespace/doc/docid/1?tracelevel=4 | jq .
Another variant of this issue: synthetic fields inserted by the IndexingProcessor are complained about if one ends up with two IndexingProcessors.
<container id="default" version="1.0">
<document-processing>
<chain id="default" inherits="indexing">
<documentprocessor id="processor.OfferDocumentProcessor">
</documentprocessor>
</chain>
</document-processing>
</container>
Using a schema with a position type field called coords:
WARNING : container Container.com.yahoo.docproc.jdisc.DocumentProcessingTask Processing of put of document id:offer:offer::. Last call: call to class com.yahoo.docprocs.indexing.IndexingProcessor (id: com.yahoo.docprocs.indexing.IndexingProcessor in indexing) failed at call to class com.yahoo.docprocs.indexing.IndexingProcessor (id: com.yahoo.docprocs.indexing.IndexingProcessor in indexing)
exception=
java.lang.IllegalArgumentException: Field 'coords_zcurve' is not part of the declared document type 'offer'.
    at com.yahoo.docprocs.indexing.DocumentScript.requireThatFieldIsDeclaredInDocument(DocumentScript.java:71)
    at com.yahoo.docprocs.indexing.DocumentScript.execute(DocumentScript.java:47)
    at com.yahoo.docprocs.indexing.IndexingProcessor.processDocument(IndexingProcessor.java:103)
    at com.yahoo.docprocs.indexing.IndexingProcessor.process(IndexingProcessor.java:75)
    at com.yahoo.docproc.Call.call(Call.java:154)
    at com.yahoo.docproc.impl.DocprocExecutor.process(DocprocExecutor.java:117)
    at com.yahoo.docproc.jdisc.DocumentProcessingTask.process(DocumentProcessingTask.java:121)
    at com.yahoo.docproc.jdisc.DocumentProcessingTask.run(DocumentProcessingTask.java:68)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:833)
Adding this to the content cluster
<documents>
<document type="offer" mode="index"/>
<document-processing cluster="default" chain="default"/>
</documents>
Then triggers the validation override mentioned above:
Indexing cluster 'XX' specifies the chain 'default' as indexing chain. As the 'default' chain is run by default, using it as the indexing chain will run it twice. Use a different name for the indexing chain.
The below resolved the problem.
<document-processing>
<chain id="offer-processing" inherits="indexing">
<documentprocessor id="processor.OfferDocumentProcessor">
</documentprocessor>
</chain>
</document-processing>
<documents>
<document type="offer" mode="index"/>
<document-processing cluster="default" chain="offer-processing"/>
</documents>
| gharchive/issue | 2020-05-08T12:04:21 | 2025-04-01T06:46:10.222704 | {
"authors": [
"arnej27959",
"geirst",
"jobergum",
"kkraune"
],
"repo": "vespa-engine/vespa",
"url": "https://github.com/vespa-engine/vespa/issues/13193",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
318056233 | Document ranking.matchPhase, ranking.softtimeout and ranking.diversity in the search API reference
These three are missing from http://docs.vespa.ai/documentation/reference/search-api-reference.html
in this document I also find some diversity - is this what you mean by ranking.diversity, @bratseth ?
Those attributes are to ensure diversity in the context of using matchPhase only: matchPhase is a feature which throws away hits. The matchPhase.diversity feature tells Vespa to prefer keeping hits which increases diversity.
Done
| gharchive/issue | 2018-04-26T14:31:08 | 2025-04-01T06:46:10.225827 | {
"authors": [
"bratseth"
],
"repo": "vespa-engine/vespa",
"url": "https://github.com/vespa-engine/vespa/issues/5728",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
672572509 | Avoid doing a full get for metadata only get
@vekterli or @geirst or @toregge or @arnej27959 or @havardpe PR
@hakonhall PR Another small one, but not without semantics.
| gharchive/pull-request | 2020-08-04T07:29:17 | 2025-04-01T06:46:10.227054 | {
"authors": [
"baldersheim"
],
"repo": "vespa-engine/vespa",
"url": "https://github.com/vespa-engine/vespa/pull/13978",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
841048206 | Sample transient disk usage for disk index fusion
@toregge please review
@baldersheim FYI
@toregge PTAL
| gharchive/pull-request | 2021-03-25T15:30:42 | 2025-04-01T06:46:10.227967 | {
"authors": [
"geirst"
],
"repo": "vespa-engine/vespa",
"url": "https://github.com/vespa-engine/vespa/pull/17182",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
983820451 | Using resource value that is compared to the resource limit
I confirm that this contribution is made under the terms of the license found in the root directory of this repository's source tree and that I have the authority necessary to make this contribution on behalf of its copyright owner.
@bratseth, please disregard if this is not in line with the assumptions in the AutoScaler implementation
Yes, it's not. Autoscaling has these hardcoded targets:
static final double idealMemoryLoad = 0.65;
static final double idealDiskLoad = 0.6;
| gharchive/pull-request | 2021-08-31T12:45:23 | 2025-04-01T06:46:10.229447 | {
"authors": [
"bjormel",
"bratseth"
],
"repo": "vespa-engine/vespa",
"url": "https://github.com/vespa-engine/vespa/pull/18919",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2430628748 | 🛑 Site da Razao is down
In 068193c, Site da Razao (https://www.razaoinfo.com.br) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Site da Razao is back up in 291a04a after 5 minutes.
| gharchive/issue | 2024-07-25T17:33:03 | 2025-04-01T06:46:10.231937 | {
"authors": [
"thiagodamas"
],
"repo": "vetorial-labs/upptime",
"url": "https://github.com/vetorial-labs/upptime/issues/98",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
120993942 | ReflectiveComponentSerializer is easy to break through infinite recursion
I'm finding that serializing any object that has a field of its own type causes ReflectiveComponentSerializer to recurse infinitely. This is using Vexe.FastSave.Save.ObjectToMemory to save some objects.
For example, System.TimeSpan, which has the field public static readonly TimeSpan Zero, and System.Guid, which has the field public static readonly Guid Empty, both trigger this behaviour.
I'd expect this behaviour if there genuinely were circular references (though even then it would be nice to detect!), but for these static readonly members it seems maybe easy to avoid.
Can you point me in the right direction to figure out how to fix this?
Are you using the standard mode for BinaryX20? It should handle cycles/serializing by reference. (See comments on SetMode function in BinaryX20.cs).
But you mentioned 'Guid', which is a struct; it's been a while, so maybe I'm not handling cycles for structs... Anyway, the way to handle that is to check for cycles in the serializer code. A much simpler way is to write a custom serializer to write/read Guid objects (e.g. a StrongSerializer), and then just make sure to add your serializer to the list of serializers when constructing BinaryX20.
Let me know how that goes.
Thanks for getting back to me!
Yes, this is using BinaryX20 in standard mode. And you're right, Guid and TimeSpan are both structs - so I imagine there's no cycle checking code for structs.
I don't think I'm confident enough with the internals of VFW to implement that yet so I had a go at adding custom serializers for both, and that seems to have done the trick - thanks!
I'll leave it up to you to decide whether to leave this issue open to track struct cycle checking, or just close it :)
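For reference, the reference-tracking approach discussed in this thread can be sketched in plain Python. The names below are illustrative, not VFW's actual API; in VFW itself a complete fix would also skip static fields during reflection, since TimeSpan.Zero and Guid.Empty are static members and need not be serialized at all.

```python
def serialize(obj, seen=None):
    """Serialize obj to a nested structure, replacing cycles with back-references."""
    if seen is None:
        seen = {}  # maps id(obj) -> reference index assigned on first visit
    if isinstance(obj, (int, float, str, bool, type(None))):
        return obj  # primitives cannot participate in cycles
    if id(obj) in seen:
        # Already visited: emit a back-reference instead of recursing forever.
        return {"$ref": seen[id(obj)]}
    seen[id(obj)] = len(seen)
    if isinstance(obj, dict):
        return {"$id": seen[id(obj)],
                "items": {k: serialize(v, seen) for k, v in obj.items()}}
    if isinstance(obj, (list, tuple)):
        return {"$id": seen[id(obj)],
                "items": [serialize(v, seen) for v in obj]}
    # Fall back to the object's attribute dict (reflective serialization).
    return {"$id": seen[id(obj)],
            "items": {k: serialize(v, seen) for k, v in vars(obj).items()}}
```

An object whose field points back at itself then serializes to a `$ref` marker rather than recursing; a deserializer would use the `$id` table to restore the shared reference.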
| gharchive/issue | 2015-12-08T12:09:26 | 2025-04-01T06:46:10.237421 | {
"authors": [
"hymerman",
"vexe"
],
"repo": "vexe/VFW",
"url": "https://github.com/vexe/VFW/issues/69",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
304653989 | Kafka deals data piling up
When placing orders at 3000 per second, the deals data piles up; marketprice starts consuming at a normal speed but then becomes very slow, and the Kafka server prints no error logs.
Code written in Python that consumes the deals data has no such problem.
I don't know how to track down and solve this. Thanks a lot!
I'll find some time to reproduce it.
@yuliyuanchn How is Kafka configured?
Kafka uses the default configuration. I tested it with the benchmarking tool Kafka provides, and the throughput was above 100,000 per second (I forget the exact number).
I also tested with a consumer program written in Python.
@myghk
The problem is that marketprice is not well optimized; its slow consumption speed causes this.
Yeah. I tried batch consumption and asynchronous consumption, but neither worked well. I guess it is related to the consumer's parameter configuration; I haven't found a solution.
For now I have rewritten this part in Python, and I also split the K-line (candlestick) generation, querying, and subscription apart: generated K-lines are pushed back to Kafka, queries are served from Redis, and the subscription service reads from Kafka and distributes. Generation runs as a single process; the other parts can run multiple instances. Once it tests stable I may publish it too.
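A rough sketch of the batching idea described above (illustrative Python only, not the actual rewrite; the uppercase step merely stands in for the expensive K-line work):

```python
import queue
import threading

def run_pipeline(messages, batch_size=100):
    """Drain `messages` in batches into a queue; a worker does the slow part.

    This stands in for: a cheap Kafka consume loop on one side, and the
    expensive K-line generation running separately on the other.
    """
    batches = queue.Queue()
    processed = []

    def worker():
        while True:
            batch = batches.get()
            if batch is None:                    # shutdown sentinel
                break
            processed.extend(m.upper() for m in batch)  # "expensive" step

    t = threading.Thread(target=worker)
    t.start()

    batch = []
    for msg in messages:                         # the consume loop stays cheap
        batch.append(msg)
        if len(batch) >= batch_size:
            batches.put(batch)
            batch = []
    if batch:
        batches.put(batch)
    batches.put(None)
    t.join()
    return processed
```

Decoupling the drain from the processing is what keeps the consumer from falling behind the producer.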
@yuliyuanchn Could you add my QQ, 1434351139, to discuss this data consumption?
| gharchive/issue | 2018-03-13T07:27:13 | 2025-04-01T06:46:10.302014 | {
"authors": [
"haipome",
"myghk",
"pilipala1029",
"yuliyuanchn"
],
"repo": "viabtc/viabtc_exchange_server",
"url": "https://github.com/viabtc/viabtc_exchange_server/issues/71",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1535514965 | Get rid of GitServiceProvider.GitHub for WaylandCore & WaylandProtocols
They're irrelevant: the upstream Wayland project has moved to the Freedesktop GitLab instance, and the GitHub repos are no longer kept up to date (mirrored) nor archived.
See https://github.com/vially/wayland-explorer/pull/23#issuecomment-1384116644 and the PR in general for more details.
Fixed in d1c24e6ef7cf35f7d23de443e3a9276ef5cccce7.
| gharchive/issue | 2023-01-16T21:39:32 | 2025-04-01T06:46:10.307198 | {
"authors": [
"JamiKettunen",
"vially"
],
"repo": "vially/wayland-explorer",
"url": "https://github.com/vially/wayland-explorer/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1554469962 | Create automation for z/VM LDAP setup
Even though it only happens with a new z/VM, the process to initialise z/VM LDAP should be captured and automated.
The majority of schema files needed exist on LXWORK01 (in /opt/resources) but I will need to run them a few times to get the order right I think. The biggest challenge with these is that the two IBM-supplied schemas required to seed the LDBM are shipped in a format that is not compatible with ldapmodify in OpenLDAP so they have to be loaded from TCPMAINT using LDAPMDFY. In anticipation of creating some interesting scripting I have installed expect and x3270-text on LXWORK01. :)
Plan is to add a setup-zvm-ldap role that would
create the configuration files (DS CONF, DS ENV, try to do something with DTCPARMS if appropriate)
add the PKCS#12 database for TLS (if the copy-pkcs12-to-zvm role, or the new one for fresh certs on z/VM, doesn't do it already)
populate the schema, starting with IBMSCHEM and USRSCHEM
create the RFC2307bis directory (users, groups, anything else?)
I have created an expect script that will drive c3270 to do the basic USRSCHEM and IBMSCHEM work. I'm just not sure I want to use that inside Ansible. It's idempotent (insofar as z/VM LDAP doesn't seem to care if it's run multiple times) but the c3270 output is probably going to be a mess.
Also, it looks like Bruce manages the config files as part of the z/VM build, so yay (except I don't see the PKCS#12 for LDAP).
Commit 378c12a includes the role setup-zvm-ldap which applies the required schema and replaces the LDAP tree.
I tried iterations of using the Ansible LDAP support, but it is very limited and didn't fit what I wanted. After a few hours of wasted coding effort I've gone back to using ldapmodify/ldapadd.
| gharchive/issue | 2023-01-24T07:38:25 | 2025-04-01T06:46:10.314652 | {
"authors": [
"viccross"
],
"repo": "viccross/ansible-playbooks",
"url": "https://github.com/viccross/ansible-playbooks/issues/130",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
114929167 | Raise QueryStringInterface::MixedArgumentError to avoid implicit conversion error when merging values of different types
To avoid this kind of error:
INFO -- : Parameters: {"program_id.in"=>"5|6", "program_id"=>"7"}
ERROR -- : Params: {"program_id.in"=>"5|6", "program_id"=>"7", "controller"=>"videos", "action"=>"index"}
ERROR -- : no implicit conversion of Fixnum into Hash
And show a better message:
INFO -- : Parameters: {"program_id.in"=>"5|6", "program_id"=>"7"}
ERROR -- : Params: {"program_id.in"=>"5|6", "program_id"=>"7", "controller"=>"videos", "action"=>"index"}
ERROR -- : arguments `program_id.in` and `program_id` could not be mixed
👍🏻
| gharchive/pull-request | 2015-11-03T22:53:52 | 2025-04-01T06:46:10.316699 | {
"authors": [
"dx7"
],
"repo": "vicentemundim/query_string_interface",
"url": "https://github.com/vicentemundim/query_string_interface/pull/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
951687224 | added resources for HTML and CSS
Added some of the most important resources for HTML and CSS
Thank you @Janvi-Thakkar 👍🏼 🥇
Hello Vikesh Tiwari,
Thank you for merging my pull request. This one will be a lifelong memory for me, as this was my first open source contribution. Thanks a lot!
Thanks & Regards,
Janvi Thakkar
| gharchive/pull-request | 2021-07-23T15:38:25 | 2025-04-01T06:46:10.331920 | {
"authors": [
"Janvi-Thakkar",
"vicky002"
],
"repo": "vicky002/AlgoWiki",
"url": "https://github.com/vicky002/AlgoWiki/pull/358",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
263840078 | The Page didnt load Google Maps correctly
Sorry! Something went wrong. This page didn't load Google Maps correctly. See the Javascript console for technical details.
It seems to load briefly, then I get this error. Any ideas?
not sure but will take a look
Hi. Same problem here.
Hey there -- I am having the same issue, but I double-checked my Google Maps API key, I even created a new one. Could you expand on how you "messed up" the key?
I'm with Swizz... I see the map come up on restart for about 5 seconds followed by "Sorry! Something went wrong" Looks great for those 5 sec though - looking forward to getting it running.
I'm having the same issue, the map comes up but goes to JavaScript error within a second. npm start dev has a line: "You have included the Google Maps JavaScript API multiple times on this page. This may cause unexpected errors." It then shows the Key. I don't know code enough to deal with this. Any help would be appreciated.
Same problem here. It worked, however, when I first started it. After restarting MM this error will show up.
@chsamuelHU Same thing happened to me. I started using this module back in February and it worked fine, then I tried using it again a few days ago and I got the watermark as well. It looks like Google changed the way they hand out API keys, which you can see here. I'm assuming this is causing the errors, but I haven't done any testing to verify. I think you need to set up a billing account with them now.
I had the same problem. To fix it I went into my API console and enabled billing and then enabled Maps JavaScript API.
@PhenomAnimal Yea. I have done testing and I can confirm that google limits API usage to one request a day if billing is not enabled. If you open up MM in a browser and check the JS console you'll see something like: "You have exceeded your request quota for this API."
To everyone else having the same issue: Google doesn't allow more than 1 API request / key a day when using a "free" key. In order to increase this limit, one has to enable billing as described here: link.
Looks like it is not going to be very free ... (unless you restart your mirror only max. once a day)
Question: does the module refresh the traffic data every x minutes? If so, I wonder whether this also counts as an API request.
P.S. If you click "OK" on that small dialog that pops up, remember that it will pop up again in x seconds.
That was my issue all along. I linked my payment to this account and got a free 365-day trial with $300 of credit. Thanks for this thread. I was beating my head against the wall.
| gharchive/issue | 2017-10-09T09:57:29 | 2025-04-01T06:46:10.339384 | {
"authors": [
"Brando27615",
"Keniki5",
"MarcelVuijst",
"PhenomAnimal",
"chsamuelHU",
"dc331903",
"mathiaswscott",
"screechyboy79",
"swizzlest",
"vicmora"
],
"repo": "vicmora/MMM-GoogleMapsTraffic",
"url": "https://github.com/vicmora/MMM-GoogleMapsTraffic/issues/5",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2405156387 | Debounce hangs with libuv integration
Hi @victimsnino,
Hopefully I can bother you with another request: I am having a problem with the debounce operator and the libuv/uvw event loop integration. I also do not see an issue on my side, so maybe you can help (or point me somewhere else, since it might be a bug in libuv or in my integration code).
I wrote a unit test to reproduce this behaviour:
https://github.com/mincequi/uvw_iot/blob/main/tests/RppUvwTest.cpp
And this is my uvw integration for rpp:
https://github.com/mincequi/uvw_iot/blob/main/uvw_iot/RppUvw.h
Especially on low-powered machines, the debounce operator does not emit anymore (but rx stream does not complete).
Do you have any idea?
Thanks a lot!
Hi @mincequi !
Yeah, you are welcome =)
This code looks fine for me. It even works fine for me...
~/uvw_iot/b/tests main ❯ /home/victimsnino/uvw_iot/build/tests/tests
Randomness seeded to: 1379457251
debounceCount: 1000
debounceCount: 2000
debounceCount: 3000
debounceCount: 4000
debounceCount: 5000
So, it looks like the classical answer: "it works on my machine" :-)
My first thought: I would suggest logging at this place to be sure your callback was called at all (for example, does uvw guarantee strict evaluation order of timers?).
UPD:
Reproduced it by adding some "useful workload" at the beginning of the test:
for (size_t i =0; i < 100; ++i) {
rpp::source::just(1)
| rpp::ops::repeat()
| rpp::ops::subscribe_on(rpp::schedulers::new_thread{})
| rpp::ops::subscribe();
}
So, yes, in this case the timer from main_thread_scheduler is never firing for some reason ¯\_(ツ)_/¯
Spotted it!
https://github.com/mincequi/uvw_iot/blob/main/uvw_iot/RppUvw.h#L47 <- there, new_tp->value can be less than rpp::schedulers::clock_type::now(). In this case you are going to schedule it "in the past". My schedulers work fine with that (due to absolute comparison of timestamps, just taking the minimal one). But uvw handles it this way:
clamped_timeout = handle->loop->time + timeout;
if (clamped_timeout < timeout)
clamped_timeout = (uint64_t) -1;
so a negative duration goes to the end of the queue
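In other words, the integration code can guard against this by clamping the computed delay at zero before starting the uv timer, so a deadline already in the past fires immediately. A minimal sketch of the guard (in Python for brevity; the real fix would live in RppUvw.h, and the names here are illustrative):

```python
def relative_delay_ms(deadline_ms, now_ms):
    """Translate an absolute deadline into a delay safe for a uv timer.

    uv timers take an unsigned duration, so a negative delay would be
    interpreted as a huge timeout; a deadline in the past must clamp to 0
    ("fire on the next loop iteration") instead.
    """
    return max(0, deadline_ms - now_ms)
```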
Awesome, thanks a lot... I should have known that... 😬
| gharchive/issue | 2024-07-12T09:33:45 | 2025-04-01T06:46:10.394217 | {
"authors": [
"mincequi",
"victimsnino"
],
"repo": "victimsnino/ReactivePlusPlus",
"url": "https://github.com/victimsnino/ReactivePlusPlus/issues/609",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
490697744 | upsert: invalid column filtering formula
When I put down key = "Name" as the search key for airtable.upsert, I got this error:
{
"errorType": "object",
"errorMessage": "The formula for filtering records is invalid: Invalid formula. Please check your formula text.(INVALID_FILTER_BY_FORMULA)[Http code 422]",
"trace": []
}
In my Airtable column, I have a list of names: if I find a matching name, I want to update the record; if not, I want to create a new one. However, the upsert method offered by the library doesn't seem to be working correctly. I give it the name of the column to compare on, and it errors out every time (although it creates records flawlessly).
@czhao028 Can you provide a full example? This package doesn't manipulate the filterByFormula when sending to the Airtable SDK so it would most likely be that your formula is incorrect or there is a deeper underlying issue in the SDK.
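For context, Airtable's filterByFormula compares a field in curly braces against a quoted value, e.g. {Name} = 'Alice'; a malformed or unescaped value is a common cause of the INVALID_FILTER_BY_FORMULA error. A hedged Python sketch of building such a formula string (the escaping rule shown is an illustration of the idea, not verified against Airtable's formula grammar):

```python
def name_filter(value):
    """Build an Airtable filterByFormula string matching the Name column."""
    # Escape embedded single quotes so the formula stays well-formed.
    escaped = str(value).replace("'", "\\'")
    return "{Name} = '%s'" % escaped
```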
| gharchive/issue | 2019-09-08T01:23:27 | 2025-04-01T06:46:10.396865 | {
"authors": [
"KyleRoss",
"czhao028"
],
"repo": "victorhahn/airtable-plus",
"url": "https://github.com/victorhahn/airtable-plus/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1301617553 | Update sbt to 1.7.1
Updates org.scala-sbt:sbt from 1.6.2 to 1.7.1.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scala-sbt", artifactId = "sbt" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequests = { frequency = "@monthly" },
dependency = { groupId = "org.scala-sbt", artifactId = "sbt" }
}]
labels: library-update, early-semver-minor, semver-spec-minor, commit-count:1
Superseded by #418.
| gharchive/pull-request | 2022-07-12T06:49:29 | 2025-04-01T06:46:10.455084 | {
"authors": [
"scala-steward"
],
"repo": "vigoo/prox",
"url": "https://github.com/vigoo/prox/pull/396",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
626272382 | Update eff to 5.9.0
Updates org.atnos:eff from 5.8.0 to 5.9.0.
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.atnos", artifactId = "eff" } ]
labels: library-update, semver-minor
Codecov Report
Merging #53 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #53 +/- ##
=========================================
Coverage 100.00% 100.00%
=========================================
Files 1 1
Lines 57 57
Branches 3 2 -1
=========================================
Hits 57 57
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 9f4627c...f4d69bf. Read the comment docs.
| gharchive/pull-request | 2020-05-28T07:10:25 | 2025-04-01T06:46:10.462260 | {
"authors": [
"codecov-commenter",
"scala-steward"
],
"repo": "vigoo/simpp",
"url": "https://github.com/vigoo/simpp/pull/53",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
359126126 | Browserlist does not seem to be working to target browsers
This line in the VueDS FAQ seems to indicate that creating different targets should work.
If we want to edit the browsers supported in production we can edit the browsers list in package.json. To see what browsers are selected by the browser list, run npx browserslist --config="package.json" in the root directory of this project.
I have not changed the the browserlist config in VueDS. However, I get errors on IE11 such as "multiple properties not allowed in strict mode" and missing colons.
I have added several of my own components, and have not yet tested with a clean version of VueDS. However, my expectation was that babel would be able to target the browsers in the browserlist config and add any necessary polyfills, but perhaps it needs polyfills to be added manually? If so, I'm not even sure how to determine what to add.
@sdellis hmm, yes, things should be working on IE11 by default. Where are you getting this error? On the docs?
@viljamis yes, but also on the system build, which I think is the culprit since the docs are using it too.
I don't have easy access to IE11 (downloading a Windows VM now to test) but here's our docs site in case you have IE11 and want to see for yourself. I was using a friend's Windows machine yesterday and it seems that the VueDS example docs is working in IE11, so perhaps there's something strange with one of our components?
Since the error refers to "strict mode" I think it must have something to do with some transpiled ES6 code that uses "generators". I was not able to import it into a VueDS component with the necessary polyfills (they have to load first and I couldn't get it to work in a bundle), so I used Babel to transpile the original source into a utility module that I could import (I don't remember specifying "strict mode").
One final point is that if this hunch is correct, it's perhaps another reason to implement a way to easily codesplit or treeshake VueDS that I mentioned in #111 since the component that uses this utility module is a "mini-app" that is not public facing and does not have to be IE11 compatible.
@viljamis , so my issue had nothing to do with the transpiled ES6 code. I got the system build working in IE11. It was two things: 1) I accidentally had @input and v-on:input handlers defined on one of my components (explains the multiple properties not allowed error) and 2) I needed to add a polyfill for vuex (import "es6-promise/auto") at the top of my system.js file.
I still haven't gotten the docs to work, but that's for internal use where we can control the browsers. Having the system build working for our end users was the main concern. I'm getting this same error, so perhaps it's related to object shorthand notation (although I added the ecma: 5 option to the UglifyJS plugin and it did not help).
I also tested a freshly cloned copy of the VueDS docs in IE11 and it works, but the components (elements, patterns, templates) do not show up (can share a screen shot) and I get the following error in the console: SCRIPT438: Object doesn't support property or method 'forEach'. I will close this issue, and leave it up to you if you want to create a new issue for that.
@sdellis I think I know where that forEach issue is coming from and am able to fix it in the next release.
All IE11 issues have been fixed in 3.1.0 :)
| gharchive/issue | 2018-09-11T16:32:10 | 2025-04-01T06:46:10.483399 | {
"authors": [
"sdellis",
"viljamis"
],
"repo": "viljamis/vue-design-system",
"url": "https://github.com/viljamis/vue-design-system/issues/110",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2580574388 | Strava integration
[x] Build initial UI
[x] Build API
[x] UI on hover + functions
[x] Fix avg speed calculation
[x] Refactor & extract components
[x] Fix page UI
[x] Create loading state (spinners or skeleton etc)
[x] Responsive/mobile version
[x] Localization
[x] Make grid card
[ ] Optimize API
[ ] Feature - change week/month/year -> click forward/back when possible and go back to current -button
import { useEffect, useState } from "react";
// Hooks must run inside a component, so the scratch code is wrapped here.
function StravaActivities() {
  const [activities, setActivities] = useState([]);
  useEffect(() => {
    const fetchActivities = async () => {
      const response = await fetch("/api/strava/activities");
      const data = await response.json();
      setActivities(data);
    };
    fetchActivities();
  }, []);
  console.log(activities);
  return null;
}
https://magecdn.com/tools/svg-loaders
Backdrop filter
https://perfectionist.dev/
https://million.dev/
| gharchive/issue | 2024-10-11T06:45:40 | 2025-04-01T06:46:10.489619 | {
"authors": [
"villivald"
],
"repo": "villivald/villivald.com",
"url": "https://github.com/villivald/villivald.com/issues/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
260860999 | There are some mistakes in showing line number and trailing number
environment
vim: vim8
vim-airline: lastest
OS: win8
Have you reproduced with a minimal vimrc: yes
What is your airline configuration:
Plug 'vim-airline/vim-airline'
actual behavior
In this picture, the actual line number is 1285. But it shows 11285.
In this picture, it shows double '/' and double ']'.
duplicate of #1422
Thank you.
| gharchive/issue | 2017-09-27T06:30:42 | 2025-04-01T06:46:10.492902 | {
"authors": [
"chrisbra",
"yehuohan"
],
"repo": "vim-airline/vim-airline",
"url": "https://github.com/vim-airline/vim-airline/issues/1568",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1260979402 | :sparkles: Support a Shared server to avoid the bottleneck of process launches
Usage
Start the global denops server as follows:
deno run -A --no-check ./denops/@denops-private/cli.ts
Then specify the address to g:denops_server_addr prior to Vim startup like
let g:denops_server_addr = '127.0.0.1:32123'
Difference
Thanks to @ippachi at vim-jp slack.
Before
https://user-images.githubusercontent.com/546312/172044580-7e2419e7-48f5-419b-b098-35e22bf36208.mov
After
https://user-images.githubusercontent.com/546312/172044587-cdb47636-beec-41cf-a46f-4c6de81cca40.mov
It seems Denops itself had some resource leak, and this global denops emphasizes that issue.
Probably I'll merge it after 2022/06/10. Let me know if there are any issues with this PR.
What about using std tee instead of the custom one? https://deno.land/std@0.133.0/async/tee.ts#L78
Hi, the bin/denops works.
#!/usr/bin/bash
cd "$(dirname "$(readlink -f "$0")")"
deno run -A --no-check ../denops/@denops-private/cli.ts
Can you include it?
What about using std tee instead of the custom one? https://deno.land/std@0.133.0/async/tee.ts#L78
Unfortunately, that's for AsyncIterator but what we need is tee for Deno.Reader. We can convert Deno.Reader to AsyncIterator but it would add some extra cost.
Can you include it?
Unfortunately no.
It does not support Windows
readlink is not common enough (some Unix systems do not provide it by default)
We don't want to maintain multiple scripts just for that
| gharchive/pull-request | 2022-06-05T07:08:17 | 2025-04-01T06:46:10.499216 | {
"authors": [
"Shougo",
"lambdalisue",
"sigmaSd"
],
"repo": "vim-denops/denops.vim",
"url": "https://github.com/vim-denops/denops.vim/pull/193",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
299456378 | Checker Do Actual Computation (lang: SML) (bug?)
Hi,
I'm using Syntastic for Standard ML. The syntax checker (automatically upon opening the source file) seems to be evaluating my program.
I realized this because I have an expression evaluating the Ackermann function on (4, 1). Opening the file takes a really long time. When I disable Syntastic, opening the file becomes instantly fast.
Is there a way to get around this? Or is this something to do with the checker itself?
Thank you!
In order to see what syntastic does you can enable debugging (cf. :h syntastic-debug).
In order to turn off automatic checking of your files you can set sml as a passive filetype (cf. :h 'syntastic_mode_map').
In order to check files in background you can use an asynchronous checker instead of syntastic, such as ale.
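As an illustration of the second suggestion, a vimrc snippet along these lines makes sml passive (following the documented syntastic_mode_map shape; adjust to taste):

```vim
" Check all filetypes automatically except sml, which is then checked
" only when :SyntasticCheck is run explicitly.
let g:syntastic_mode_map = {
    \ 'mode': 'active',
    \ 'passive_filetypes': ['sml'] }
```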
| gharchive/issue | 2018-02-22T18:17:39 | 2025-04-01T06:46:10.505379 | {
"authors": [
"SAMFYB",
"lcd047"
],
"repo": "vim-syntastic/syntastic",
"url": "https://github.com/vim-syntastic/syntastic/issues/2152",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
} |
2220083096 | help needed to execute python file
Please pardon me for my maybe trivial query. I am trying to learn Python. I am migrating from VS Code, where I could just click a button to run the Python file I was editing or learning from, with the output shown in a separate pane below or in a terminal for an interactive session.
While migrating to Neovim I was searching for similar functionality when I came across this plugin. I installed it and tried to run an xxx.py file, but it says the test file was not found. I'm not yet advanced enough to know about unit tests and test files; I just want to run basic Python files, like with `:!python3 %`.
Would it be possible to do so with this plugin, using the strategies for directing the output to another tmux pane, iTerm pane, WezTerm pane, etc., or do I need to search for other alternatives?
Hello and thank you for the question, so you are looking for a way to execute a Python file, not necessarily a test file? If so, what you are looking for is a REPL, for which there are several plug-ins:
iron.nvim
vim-repl
| gharchive/issue | 2024-04-02T10:01:57 | 2025-04-01T06:46:10.508763 | {
"authors": [
"Trid-collab",
"codeinabox"
],
"repo": "vim-test/vim-test",
"url": "https://github.com/vim-test/vim-test/issues/792",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
558925017 | Python3 support problem
I still can't make this work. On Debian, 10 I have:
vim --version | grep python
+comments +libcall -python +visualextra
+conceal +linebreak -python3 +viminfo
and
python -V
Python 3.7.3
but Vim complains:
:python3 is not available, vdebug will not be loaded.
Any idea?
Which package of vim do you have installed? I just installed vim-nox and that one has +python3.
As far as I remember, it is the default Vim installed with Debian 10:
VIM - Vi IMproved 8.1 (2018 May 18, compiled Jun 15 2019 16:41:15)
Included patches: 1-875, 878, 884, 948, 1046, 1365-1368, 1382, 1401
Modified by team+vim@tracker.debian.org
Compiled by team+vim@tracker.debian.org
Huge version without GUI. Features included (+) or not (-):
+acl +extra_search +mouse_netterm +tag_old_static
+arabic +farsi +mouse_sgr -tag_any_white
+autocmd +file_in_path -mouse_sysmouse -tcl
+autochdir +find_in_path +mouse_urxvt +termguicolors
-autoservername +float +mouse_xterm +terminal
-balloon_eval +folding +multi_byte +terminfo
+balloon_eval_term -footer +multi_lang +termresponse
-browse +fork() -mzscheme +textobjects
++builtin_terms +gettext +netbeans_intg +textprop
+byte_offset -hangul_input +num64 +timers
+channel +iconv +packages +title
+cindent +insert_expand +path_extra -toolbar
-clientserver +job -perl +user_commands
-clipboard +jumplist +persistent_undo +vartabs
+cmdline_compl +keymap +postscript +vertsplit
+cmdline_hist +lambda +printer +virtualedit
+cmdline_info +langmap +profile +visual
+comments +libcall -python +visualextra
+conceal +linebreak -python3 +viminfo
+cryptv +lispindent +quickfix +vreplace
+cscope +listcmds +reltime +wildignore
+cursorbind +localmap +rightleft +wildmenu
+cursorshape -lua -ruby +windows
+dialog_con +menu +scrollbind +writebackup
+diff +mksession +signs -X11
+digraphs +modify_fname +smartindent -xfontset
-dnd +mouse +startuptime -xim
-ebcdic -mouseshape +statusline -xpm
+emacs_tags +mouse_dec -sun_workshop -xsmp
+eval +mouse_gpm +syntax -xterm_clipboard
+ex_extra -mouse_jsbterm +tag_binary -xterm_save
system vimrc file: "$VIM/vimrc"
user vimrc file: "$HOME/.vimrc"
2nd user vimrc file: "~/.vim/vimrc"
user exrc file: "$HOME/.exrc"
defaults file: "$VIMRUNTIME/defaults.vim"
fall-back for $VIM: "/usr/share/vim"
Compilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H -Wdate-time -g -O2 -fdebug-prefix-map=/build/vim-4Pursk/vim-8.1.0875=. -fstack-protector-strong -Wformat -Werror=format-security -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1
Linking: gcc -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -o vim -lm -ltinfo -lnsl -lselinux -lacl -lattr -lgpm -ldl
could you give me the output of dpkg -l | grep vim ?
Sure:
ii vim 2:8.1.0875-5 amd64 Vi IMproved - enhanced vi editor
ii vim-addon-manager 0.5.10 all manager of addons for the Vim editor
ii vim-common 2:8.1.0875-5 all Vi IMproved - Common files
ii vim-runtime 2:8.1.0875-5 all Vi IMproved - Runtime files
ii vim-tiny 2:8.1.0875-5 amd64 Vi IMproved - enhanced vi editor - compact version
Should I install vim-nox explicitly?
vim-tiny is definitely not the version you want to have. I think vim is a metapackage in Debian? So I would go for vim-nox, or vim-gtk3 if you want a GUI too; those should have python3 enabled.
Thanks, with vim-nox it definitely works!
| gharchive/issue | 2020-02-03T08:34:30 | 2025-04-01T06:46:10.514581 | {
"authors": [
"BlackIkeEagle",
"welblaud"
],
"repo": "vim-vdebug/vdebug",
"url": "https://github.com/vim-vdebug/vdebug/issues/437",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
523099905 | gM works differently for 'wrap' and 'nowrap' if 'showbreak' is set
Run gvim.exe --clean
Put this text into the buffer with the "*P command:
0123456789 A1 0123456789 A2 0123456789 A3 0123456789 A4 0123456789 A5 0123456789 A6 0123456789 A7 0123456789 A8 0123456789 A9 0123456789 B0 0123456789 B1 0123456789 B2 0123456789 B3 0123456789 B4 0123456789 B5 0123456789 B6 0123456789 B7 0123456789 B8 0123456789 B9
Execute :set showbreak=>
Hit 90gM. The cursor gets positioned at the "7" in "...89 B7..."
Execute :set nowrap
Hit 90gM again. The cursor gets positioned at the "0" in "...B7 01...".
Gvim: https://github.com/vim/vim-win32-installer/releases/download/v8.1.2300/gvim_8.1.2300_x86.zip,
Windows 10 Home 1803.
I don't think this is an error and I believe this is mostly because the virtcol() changes. See how the ruler changes when going from the last character in the first screen line (0g$) moving over to the first column in the second screen line (l). This is even more apparent when using showbreak with more characters and/or breakindent is set.
I'll leave this open and let Bram decide whether this is something to fix or whether it can be closed as working correctly.
Well, if Christian thinks it's OK it probably is. gM wasn't meant for very exact positioning anyway, more to quickly go to around where you want to go.
| gharchive/issue | 2019-11-14T20:56:11 | 2025-04-01T06:46:10.520197 | {
"authors": [
"brammool",
"chrisbra",
"lkintact"
],
"repo": "vim/vim",
"url": "https://github.com/vim/vim/issues/5219",
"license": "Vim",
"license_type": "permissive",
"license_source": "github-api"
} |
690828023 | b2d0e51366dea6843f991f31a457f5456d162678 fails with linkage error on HP-UX
Describe the bug
b2d0e51366dea6843f991f31a457f5456d162678 introduced the usage of dirfd() which is unfortunately not available on HP-UX. Linking fails with unsatisfied symbol. The test in configure.ac compiles only, but does not link:
configure:13760: checking for dirfd
configure:13774: /opt/aCC/bin/aCC -Ae -c -g -I/opt/ports/include conftest.c >&5
"conftest.c", line 149: warning #2223-D: function "dirfd" declared implicitly
DIR * dir=opendir("dirname"); dirfd(dir);
^
configure:13774: $? = 0
configure:13775: result: yes
but it should be rather:
bash-5.0$ cc -o out test.c
"test.c", line 5: warning #2223-D: function "dirfd" declared implicitly
DIR * dir=opendir("dirname"); dirfd(dir);
^
ld: Unsatisfied symbol "dirfd" in file test.o
1 error.
bash-5.0$ echo $?
1
Solution
Either the configure test needs to actually try linking, or one can apply the naive patch I have done:
# if defined(UNIX) && defined(HAVE_FLOCK) && defined(HAVE_DIRFD)
# ifdef __hpux
#define dirfd(x) ((x)->__dd_fd)
#endif
...
Note: flock() is present on HP-UX.
After the patch:
bash-5.0$ ldd src/vim
src/vim:
libm.so.1 => /usr/lib/hpux32/libm.so.1
libxcurses.so.1 => /usr/lib/hpux32/libxcurses.so.1
libintl.so.10 => /opt/ports/lib/hpux32/libintl.so.10
libc.so.1 => /usr/lib/hpux32/libc.so.1
libiconv.so.8 => /opt/ports/lib/hpux32/libiconv.so.8
libc.so.1 => /usr/lib/hpux32/libc.so.1
libdl.so.1 => /usr/lib/hpux32/libdl.so.1
libc.so.1 => /usr/lib/hpux32/libc.so.1
bash-5.0$ file src/vim
src/vim: ELF-32 executable object file - IA64
Environment (please complete the following information):
Vim version 8.2.1551
OS: HP-UX deblndw0 B.11.31 U ia64 HP-U
Terminal: PuTTY and dtterm
References
https://www.mail-archive.com/search?l=help-cfengine@cfengine.org&q=subject:"Cannot+compile+cfengine\-3.2.0.tar.gz+on+Solaris+10"&o=newest&f=1
https://openqnx.com/phpbbforum/viewtopic.php?f=20&t=4272#p16959
https://womble.decadent.org.uk/readdir_r-advisory.html
Hi @michael-o ,
do you want the feature which the mentioned pull request brought to be available on HP-UX?
Either way, I propose to use AC_TRY_LINK instead of AC_TRY_COMPILE. It will catch the error and the feature will not be used.
I won't insist, but I would like to have consistency across platforms. Since dirfd() is likely to be a macro on other platforms too, it is fairly easy to add this for HP-UX. I checked the FreeBSD libc code and it is just a macro with a wrapper call. I see no reason not to support this on HP-UX.
Ok, would you mind trying this patch to see if it works on HP-UX?
@zdohnal Going through...
The check now properly fails:
configure:13760: checking for dirfd
configure:13774: /opt/aCC/bin/aCC -Ae -o conftest -g -I/opt/ports/include -L/opt/ports/lib/hpux32 conftest.c -lm -lelf -lcurses -liconv >&5
"conftest.c", line 149: warning #2223-D: function "dirfd" declared implicitly
DIR * dir=opendir("dirname"); dirfd(dir);
^
ld: Unsatisfied symbol "dirfd" in file conftest.o
1 error.
...
configure:13779: result: not usable
and vim links correctly. Though I don't understand the hunk in fileio.c. Although you define this macro, it is never used because the function vim_opentempdir() is guarded by defined(HAVE_DIRFD) which is not set now in config.h.
(facepalm) You're right, sorry - another iteration...
Here we go
vim-hpux.patch.txt
This one is incomplete, it ignores:
#ifdef TEMPDIRNAMES
# if defined(UNIX) && defined(HAVE_FLOCK) && defined(HAVE_DIRFD)
EXTERN DIR *vim_tempdir_dp INIT(= NULL); // File descriptor of temp dir
# endif
from globals.h
Please check now
vim-hpux.patch.txt
@zdohnal Patch works for me now.
@zdohnal @brammool Thank you for the quick turnaround!
@michael-o I'm sorry for the inconvenience which my previous patch brought and thank you @brammool for pushing the changes!
| gharchive/issue | 2020-09-02T08:31:15 | 2025-04-01T06:46:10.532519 | {
"authors": [
"michael-o",
"zdohnal"
],
"repo": "vim/vim",
"url": "https://github.com/vim/vim/issues/6838",
"license": "Vim",
"license_type": "permissive",
"license_source": "github-api"
} |
1057754809 | Termdebug problem with fast actions in vim (but not in gdb)
Steps to reproduce
have a bigger program with some frames
start it via Termdebug
change to GDB window
enter "step" as command and execute with Enter
keep Enter pressed for multiple seconds, so the command is re-executed
vim (source) window is updated, everything works smooth
restart the debug session
use "S" - which is mapped to the Step() function which does a call to s:SendCommand('-exec-step')
keep "S" pressed; very quickly one sees the error message
Cannot execute this command while the selected thread is running.
keeping it pressed brings up nearly a full screen of those messages; if nomodifiable is active one (alternatively) gets
E21: Cannot make changes, 'modifiable' is off.
Expected behaviour
no mapped keys, in this case "S", should "slip through" to the editor
executing the step command (same happens for next) via Termdebug should ideally be as fast as on the gdb side; in any case there should be no message about "Cannot execute the command" (if in doubt, "eat" the command)
Operating system
CentOS 8
Version of Vim
8.2.2760
Logs and stack traces
No response
I guess that when you press enter, the next character is only read at the prompt, thus after the previous step finished.
When using ":Step" it doesn't wait for the prompt. So it sends -exec-step while the program is running.
Should ":Step" wait for the prompt?
Should ":Step" wait for the prompt? I think "yes", same for -exec-next.
Just rechecking, as I saw the issue again yesterday when testing the startup and decoding changes. I think it would be especially useful for :Step - and when this gets in I also have something to monkey-see-monkey-do test for :Next later.
In any case I think it would be reasonable to have Step and Next in a function so adjustments are easier.
Are you using the version I sent out a couple of days ago?
It should have the s:SendCommandIfStopped() function, which drops a
command if the debugger is not stopped.
--
Did Adam and Eve have navels?
/// Bram Moolenaar -- @.*** -- http://www.Moolenaar.net \
/// \
\\ sponsor Vim, vote for features -- http://www.Vim.org/sponsor/ ///
\\ help me help AIDS victims -- http://ICCF-Holland.org ///
Cool, haven't switched to that so far and now see that it is in and used for step and next. Will recheck the result next week, likely tomorrow.
Related: Would you mind a PR that moves Step, Next, Finish to a separate function, even if it is just the current one-liner or would you consider that "pollution"? [I'm using some "stuff" not commonly useful (yet) and between other check the filetype in those functions]
It depends. If the user command has a simple argument it works fine
without a function. If it is more, or shares code with other commands,
it should be a function. A function is more lines in a script, and
needs a name, thus only use it when useful.
--
A year spent in artificial intelligence is enough to make one
believe in God.
A function is more lines in a script, and needs a name, thus only use it when useful.
It would so far only be useful for my personal scenario ("hooking" into those functions), so I'll keep that part out of pull requests.
It should have the s:SendCommandIfStopped() function, which drops a command if the debugger is not stopped.
Tested that version; in the worst case (just keeping S pressed with a mapping to Step for 3 seconds) I got 3 drops in the debug log and >30 errors that the inferior is running - the issue here is that the command is sent, and the "running" message doesn't get in fast enough.
I've worked around that with an assumed "step/next/finish will run the inferior" option as seen in the PR (now I get many drops and no error and "keep S pressed in Vim" is now half as fast as "keep pressed in gdb" (it waits longer because of my change, but I consider the general speed fast enough, because in Vim you can still follow the code in the source window [if you know it well], in GDB you can't).
While this issue is solved now, the only culprit of SendCommandIfStopped is: if the user asks for "step" while the inferior is "just running in a sleep" (like when it waits for user input in the terminal), the user previously got the error message that this command can't be executed; now that never occurs. Trying another approach using a new SendCommandWithWait function that sets a wait state and then resets it on the "stopped" event did not work out, because sometimes the stopped event seems to not get through.
All in all I consider the new one better than the last one, so I'd go with #9239.
I'll include your suggested fix, with some more changes.
I'm not sure that telling the user that a command was dropped would be useful. Let's first try without it.
Perhaps if the stopped state can't be detected reliably we need to do something.
Perhaps if the stopped state can't be detected reliably we need to do something.
It can't with the current design of "everything asynchronous and anonymous".
The only way that may work is:
removing message anonymity by adding tokens to the messages sent (which is in general the suggested and commonly documented way), at least optional [I have a 1/3 finished solution for that locally, could start a draft PR if wanted]
send out an instruction that will tell us if the inferior is running or not
internally block all further messages that potentially tinker with the run state (easiest: all messages, possibly also block interaction with the GDB buffer)
that way wait for the answer to the message we sent out (has its token on return so can be distinguished)
handle the answer - we now know the current state
unblock everything
do something with the information "stopped yes/no"
If everything is left out except:
token addition
sending out a message that will tell us running yes/no (not sure which one this would be)
checking the token to get the answer to the specific check we did
Then the check would be at least "more reliable", but not by much, as we could have received an interrupted/running message from a command sent out in the meantime.
| gharchive/issue | 2021-11-18T20:09:51 | 2025-04-01T06:46:10.555235 | {
"authors": [
"GitMensch",
"brammool"
],
"repo": "vim/vim",
"url": "https://github.com/vim/vim/issues/9158",
"license": "Vim",
"license_type": "permissive",
"license_source": "github-api"
} |
2432911820 | MinGW: Fix regressions in v9.1.0621 and v9.1.0622
Fix build error when COVERAGE=yes.
Fix if_lua with USE_GC_SECTIONS=yes.
Additionally, remove coverage files on "make clean".
thanks!
| gharchive/pull-request | 2024-07-26T20:11:18 | 2025-04-01T06:46:10.557355 | {
"authors": [
"chrisbra",
"k-takata"
],
"repo": "vim/vim",
"url": "https://github.com/vim/vim/pull/15361",
"license": "Vim",
"license_type": "permissive",
"license_source": "github-api"
} |
2449338664 | Feature request: Query macro
I would like to make the database interaction a bit more ergonomic and am working on completing a declarative macro. It looks like other people are also making wrappers around the API at the moment, so I was hoping this could be included in the repo here. I can also publish it in a different crate (/repo) if you'd like to keep this a bit more centered.
What I have currently looks something like this:
macro_rules! use_db {
( // Get an item from the database
$db:ident ==> <$itemType:ty>::($keyValue:expr $(=> $secondaryFieldKey:expr)?)
// $(.$linkItem:ident $(-> <$linkItemTypeTo:ty>::[_ $(= $secondaryLinkKeyTo:expr)?])?
// $(<- <$linkItemTypeFrom:ty>::[$secondaryLinkKeyFrom:expr])?)*
$(::{ $($returnItem:ident),+})?
) => {..};
( // get items from the database
$db:ident ==> <$itemType:ty>::[$($keyValue:expr $(=> $secondaryFieldKey:expr)?)?]
// $(.$linkItem:ident $(-> <$linkItemTypeTo:ty>::[_ $(= $secondaryLinkKeyTo:expr)?])?
// $(<- <$linkItemTypeFrom:ty>::[$secondaryLinkKeyFrom:expr])?)*
$(::{ $($returnItem:ident),+})?
) => {..};
( // watch an item in the database
$db:ident ==? <$itemType:ty>($keyValue:expr $(=> $secondaryFieldKey:expr)?)
$(::{ $($returnItem:ident),+})?
) => {..};
( // watch items in the database
$db:ident ==? <$itemType:ty>[$keyValue:expr $(=> $secondaryFieldKey:expr)?]
$(::{ $($returnItem:ident),+})?
) => {..};
// Create a new item in the database
( $db:ident <== $($itemToAdd:expr);+ ) => {..};
// Delete an item from the database
( $db:ident <!= $($itemToDelete:ident),+ ) => {..};
// Update an item in the database
( $db:ident =<= $($itemUpdated:expr => $itemOutdated:expr),+ ) => {..};
}
(thanks to https://lukaslueg.github.io/macro_railroad_wasm_demo/)
this can be used like so:
// Create
use_db![db <== repo.clone(); doc.clone(); doc1.clone(); doc2.clone(); doc3.clone()]?;
// Read, (type annotations are optional and serve as documentation)
let read_repo: Repository = use_db![db ==> <Repository>::(1)]?;
let read_doc: Document = use_db![db ==> <Document>::("some name" => DocumentKey::name)]?;
let (id, repo_name): (Uuid, String) = use_db![db ==> <Repository>::(&repo.id)::{id, name} ]?;
let read_docs_ids: Vec<Document> = use_db![db ==> <Document>::[]]?;
let read_docs_ids: Vec<Uuid> = use_db![db ==> <Document>::[]::{id}]?;
let read_docs_ids: Vec<Uuid> = use_db![db ==> <Document>::[0..10]::{id}]?;
let read_docs_ids: Vec<String> = use_db![db ==> <Document>::["read"..="read9" => DocumentKey::name]::{name}]?;
// Update
use_db![db =<= doc_updated => doc.clone()]?;
// Delete
let removed: (Document, Repository) = use_db![db <!= doc, repo]?;
I was a bit constrained in which characters I was allowed to use, but I like the end result. If you'd like to include this I'll make a pull request with what I have so far implementation-wise. Also let me know your opinion about the syntax. Thanks for your consideration. Love the project.
@C4Phoenix Thank you for your message/work and support for the project! The macro syntax you propose reduces the code. Are you already using this approach in your projects?
Regarding the integration into the native_db repository, for now, I will keep the crate more focused on a single query syntax. If it becomes relevant, why not fully adopt your approach? But for now, you are welcome to publish a crate that allows using native_db with a macro. You can include a link to it in the README.md of native_db.
@vincent-herlemont, I'm currently using it for one project but not yet extensively, because adding the unit tests revealed the bug reported earlier. I really wanted to add a pattern to 'link' two keys together like in a graph database such as surrealdb, but I could not implement that properly (as I wanted to assume that an index was non-unique).
The current implementation also collects iterators often, which might hide some RAM costs in certain queries (when only selecting a value from a struct, for instance).
All that is to say it's still a bit of a work in progress, but good enough for me. Once #208 is resolved and I've used/tested it more, I'll look into publishing a crate. Thanks for the awesome work.
| gharchive/issue | 2024-08-05T19:40:42 | 2025-04-01T06:46:10.600274 | {
"authors": [
"C4Phoenix",
"vincent-herlemont"
],
"repo": "vincent-herlemont/native_db",
"url": "https://github.com/vincent-herlemont/native_db/issues/210",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
178638528 | Get closest iframe src using tracker
Hi, I am using this in Google Tag Manager, and it is triggering; however, I do not have iframes with ids, so if a user clicks on an iframe I want to get the src of the iframe.
In GTM my code looks like the following. How can I access the iframe src in the callback function?
function(){
$('iframe').iframeTracker({
blurCallback: function(){
//how can I get the iframe src in here?
}
});
}
Anybody any solution?
I think this will work
function(){
    $('iframe').iframeTracker({
        iframeSrc: null,
        overCallback: function(element) {
            this.iframeSrc = $(element).attr('src');
        },
        outCallback: function(element) {
            this.iframeSrc = null;
        },
        blurCallback: function(){
            // how can I get the iframe src in here?
            console.log(this.iframeSrc) // this should print out the iframe src
        }
    });
}
| gharchive/issue | 2016-09-22T15:15:26 | 2025-04-01T06:46:10.609455 | {
"authors": [
"Pau1fitz",
"ericbae",
"jirkace"
],
"repo": "vincepare/iframeTracker-jquery",
"url": "https://github.com/vincepare/iframeTracker-jquery/issues/17",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
243147722 | beast::detail::is_invocable: Move Only Parameters
It was previously reported that function objects taking move-only types by value were not invocable.
This will go into version 82, thanks!
| gharchive/pull-request | 2017-07-15T02:33:08 | 2025-04-01T06:46:10.629154 | {
"authors": [
"RobertLeahy",
"vinniefalco"
],
"repo": "vinniefalco/Beast",
"url": "https://github.com/vinniefalco/Beast/pull/652",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1896343247 | Looking for new owner
My priorities have changed and I can no longer keep this repo up to date.
@vinszent, would you like to take over?
Sure I can take over and maintain for as long as I use it 🙂 I think that now that we don't need to build a custom VTK, there is a chance to upstream this to nixpkgs. I will investigate the possibility to do this.
The repo has now been transferred to me and I will take over maintainership 🙂. Thank you to @marcus7070 for creating and maintaining this Flake up until now!
I have pushed my changes that were in PR #46, so it should build again. Over the coming weeks I'll do a bit of house cleaning and see if I can get GitHub CI with Cachix working again.
| gharchive/issue | 2023-09-14T11:11:34 | 2025-04-01T06:46:10.631143 | {
"authors": [
"marcus7070",
"vinszent"
],
"repo": "vinszent/cq-flake",
"url": "https://github.com/vinszent/cq-flake/issues/47",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
98786701 | Footer is too big on certain resolutions
Once the width of the page goes below 1035px, the footer's float elements get pushed to the next line, making the footer larger than the design anticipates.
Footer was redesigned and should not have this problem anymore
| gharchive/issue | 2015-08-03T16:34:32 | 2025-04-01T06:46:10.653785 | {
"authors": [
"viridis"
],
"repo": "viridis/BouncyCastle",
"url": "https://github.com/viridis/BouncyCastle/issues/14",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
270116362 | Converting FBX file to VRX format, detailed instructions
Hello, I've looked at the page http://docs.viromedia.com/docs/3d-objects but am still not clear on the steps to convert fbx to vrx using the ViroFBX script. Do you have more detailed instruction on how to use the script to convert fbx files? Thanks.
Here are some more detailed instructions which hopefully help.
Run the script from the bin directory -> i.e. [Project Name]/bin.
In the documentation we provide the following example script
./ViroFBX js/res/model.fbx js/res/model.vrx
Example script broken down
./ViroFBX runs the script
js/res/model.fbx is the location of the fbx file you want to convert
js/res/model.vrx is the name and location of the vrx file that will be export
Where can I download the converter?
Here are some more detailed instructions which hopefully help.
Run the script from the bin directory -> i.e. [Project Name]/bin.
In the documentation we provide the following example script
./ViroFBX js/res/model.fbx js/res/model.vrx
Example script broken down
./ViroFBX runs the script
js/res/model.fbx is the location of the fbx file you want to convert
js/res/model.vrx is the name and location of the vrx file that will be export
thank you man !
There is no bin directory in my project where I can run the following command
./ViroFBX path/to/model.fbx path/to/model.vrx
When I run the command it's not creating the file; instead it's opening the ViroFBX file
| gharchive/issue | 2017-10-31T20:57:19 | 2025-04-01T06:46:10.665623 | {
"authors": [
"Ayzekberk",
"aca-hakan-pinar",
"dam00n",
"vinaycoder404",
"waqaskhanroghani"
],
"repo": "viromedia/viro",
"url": "https://github.com/viromedia/viro/issues/76",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
479303495 | refactor: simplify owner initialization
Get rid of the pendingOwnerNetIds hash table. Lunacy from Unity
nice find
:tada: This PR is included in version 3.15.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2019-08-10T20:26:51 | 2025-04-01T06:46:10.677603 | {
"authors": [
"paulpach",
"vis2k"
],
"repo": "vis2k/Mirror",
"url": "https://github.com/vis2k/Mirror/pull/1018",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
597586115 | fix: adding NonSerialized attribute to networkBehavioursCache
Fixes https://github.com/vis2k/Mirror/issues/1675
looks like an easy fix if it works.
let's wait my comment in #1675 to get answered. the bug might have been caused by the user disabling domain reload.
gonna label this as needs-testing.
we need to know exactly if this fixes the issue.
and we need to know exactly if the issue was caused by domain reload in the first place.
and if it was, we need to know exactly why nonserialized fixes it.
closing this because nonserialized is too much magic.
if anything we should use ondestroy to clear the array.
but the original issue was about disabling domain reload, which we don't support anyway
| gharchive/pull-request | 2020-04-09T22:18:50 | 2025-04-01T06:46:10.679914 | {
"authors": [
"James-Frowen",
"vis2k"
],
"repo": "vis2k/Mirror",
"url": "https://github.com/vis2k/Mirror/pull/1680",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
398301174 | Remove ES6 - continued from #673
Continued discussion from #673.
I just wanted to clarify this bit:
Or they use Babel.
As a library author, how would I use Babel to fix one of the dependecies? Sure I can produce a browser bundle with my code + dependencies, but what if my library has a common js interface? debug will be pulled from npm when people install my module. There is nothing I can do to stop them from hitting the issue.
By the way, this is a bit of an awkward conversation where each statement/reply pair is in their own ticket. I wish @Qix- wasn't locking the discussion all the time.
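For reference, the "use Babel" route for a single ES6 dependency usually means telling the bundler not to exclude that package from transpilation. A minimal webpack-style sketch (the regex, loader, and preset here are common conventions for illustration, not something taken from this thread):

```javascript
// webpack.config.js (sketch) — transpile the `debug` package to ES5 along
// with your own sources. The negative lookahead excludes everything under
// node_modules EXCEPT `debug`, so babel-loader still processes it.
const config = {
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules\/(?!debug\/)/,
        use: {
          loader: 'babel-loader',
          options: { presets: ['@babel/preset-env'] },
        },
      },
    ],
  },
};

// In a real project this would be `module.exports = config;`
const rule = config.module.rules[0];
console.log(rule.exclude.test('node_modules/lodash/index.js'));    // other deps stay excluded
console.log(rule.exclude.test('node_modules/debug/src/index.js')); // debug is picked up by Babel
```

Note this only fixes your own browser bundles; as pointed out above, it does not help consumers who pull your library (and its `debug` dependency) straight from npm.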
There's a simple workaround that does not involve wasting hours and hours battling babel/webpack/eslint configs to transpile debug to ES5 since there is already an ES5 version of debug in the dist folder. Importing it will define the debug object globally, so for browser usage you can do something like this:
import 'debug/dist/debug';
const myDebug = window.debug('myNamespace');
Please don't use the dist folder. Please use a transpiler or update your tools.
@bowsersenior you can use es5 compatible fork https://www.npmjs.com/package/debug-es5
Please don't advertise your packages here. I can't guarantee the security of your fork because you are not a trusted maintainer of debug.
Locking.
| gharchive/issue | 2019-01-11T14:03:06 | 2025-04-01T06:46:10.696831 | {
"authors": [
"Qix-",
"artemave",
"bowsersenior"
],
"repo": "visionmedia/debug",
"url": "https://github.com/visionmedia/debug/issues/674",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
199913031 | Warn when using unsupported mix of send and field
Warn about gotcha in #1152
I have no real opinion on this but would console.warn be slightly more semantically correct if we consider these "warnings"? I think ultimately this is going to cause misbehavior so they usually do represent actual errors in the application needing a code change to fix. (In which case console.error seems like the best choice)
I have no strong opinion either way. I went withconsole.error, because it may become an error.
| gharchive/pull-request | 2017-01-10T19:34:17 | 2025-04-01T06:46:10.698502 | {
"authors": [
"focusaurus",
"pornel"
],
"repo": "visionmedia/superagent",
"url": "https://github.com/visionmedia/superagent/pull/1153",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1986676830 | Fix Nondeterministic
An easy workaround to fix #168
Hi,
thanks for debugging this issue and proposing a fix! I have one concern though: even if the execution order in the graph is not ambiguous (due to only sequential nodes), if existing code did not pass the right order of nodes to the GraphINN, then existing checkpoints would break. Would you agree that using a defaultset(list) in https://github.com/vislearn/FrEIA/blob/6912465ea3412d18e2a4f3c5f5c00e0495bad74a/FrEIA/framework/graph_inn/graph_inn.py#L258
would also yield a deterministic order?
Best! Felix
Hi!
This is by no means supposed to be an ideal solution, just one that fixed my particular issue. I don't really have the patience or time to dig into potential corner cases, so please feel free to redo this as you please!
| gharchive/pull-request | 2023-11-10T01:26:03 | 2025-04-01T06:46:10.701042 | {
"authors": [
"fdraxler",
"ju-w"
],
"repo": "vislearn/FrEIA",
"url": "https://github.com/vislearn/FrEIA/pull/169",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1082739083 | 🛑 Saleminnma is down
In 69e093c, Saleminnma (http://saleminnma.com/) was down:
HTTP code: 500
Response time: 13845 ms
Resolved: Saleminnma is back up in c98ca57.
| gharchive/issue | 2021-12-16T23:50:00 | 2025-04-01T06:46:10.707065 | {
"authors": [
"vistaardigital"
],
"repo": "vistaardigital/uptime",
"url": "https://github.com/vistaardigital/uptime/issues/213",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1127162229 | Weird initial building quirk
So from a clean repo clone, running pnpm install is fine up until it does npm i --prefix docs, at which point, it errors as follows
When I go into package.json and change the postinstall script from npm i to pnpm install, it gets stuck in a recursive post-install. When I then revert the package.json postinstall script, it successfully runs
so assuming JQ is installed, the bash replication would be as follows
git clone https://github.com/vitebook/vitebook.git ~/projects/vitebook_unique_new_folder
cd ~/projects/vitebook_unique_new_folder
pnpm install # fail
mv package.json package.json.old
jq '.scripts.postinstall="pnpm install --prefix docs"' package.json.old > package.json
pnpm install # CTRL+C after a couple of recursions
rm package.json && mv package.json.old package.json
pnpm install # success
With the postinstall set to pnpm --prefix ./docs install ./docs, could someone with a Linux environment try to replicate and see if this new postinstall works on Linux?
Definitely strange, not sure if it's an environment issue but I just tested a fresh clone/install on Mac and it worked.
What strikes me as odd is the fact that the postinstall script would ever work. Since pnpm keeps track of state separately from npm, then running npm i, regardless of if --prefix is used, surely should run install on the top-level package json anyway before doing the prefix?
What's the intended behaviour for running npm i --prefix docs? Is it just a case of executing npm i from within the docs folder? Or is it meant to link the docs workspace to the vitebook workspace? Because it could be worth reimplementing it to avoid any unnecessary nested npm calls. Here's a stack overflow answer that mentions the node lifecycles
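A common guard against this kind of recursive postinstall (a sketch — the flag name and script layout are assumptions for illustration, not what the repo actually adopted) is to set an environment flag before the nested install and bail out when it is already present:

```javascript
// postinstall.js (sketch) — recursion guard via an environment flag.
// The flag name DOCS_POSTINSTALL_RAN is hypothetical.
const FLAG = 'DOCS_POSTINSTALL_RAN';

// Pure decision function, so the guard itself is easy to test.
function shouldInstallDocs(env) {
  return !env[FLAG];
}

if (shouldInstallDocs(process.env)) {
  // In the real script you would spawn the nested install with the flag set, e.g.:
  //   require('child_process').execSync('pnpm install --prefix docs',
  //     { stdio: 'inherit', env: { ...process.env, [FLAG]: '1' } });
  console.log('flag not set - would run nested docs install');
} else {
  console.log('flag already set - skipping nested install to avoid recursion');
}
```

With the flag propagated into the nested process's environment, any postinstall triggered by that nested install exits immediately instead of recursing.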
| gharchive/issue | 2022-02-08T11:59:53 | 2025-04-01T06:46:10.737719 | {
"authors": [
"JamesYeoman",
"mihar-22"
],
"repo": "vitebook/vitebook",
"url": "https://github.com/vitebook/vitebook/issues/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
635835735 | reason-react template doesn't work - 404 (Not Found)
Hi,
I just trying to create new app using vite with the following commands:
npm init vite-app reason-app-vite --template reason-react
$ cd reason-app-vite
$ npm install
$ npm run dev
The project was successfully instantiated, but when I run it localy npm run dev I see a blank screen and the error in the browser:
Failed to load resource: the server responded with a status of 404 (Not Found) Index.bs.js:1
I have the following file structure after I ran all the commands:
My node version is: v14.4.0
NPM: 6.14.5
Chrome: Version 83.0.4103.9
It turns out that I had to run separately:
npm run re:build
After that, all .bs.js files were generated and the webpage showed up as expected.
Why can't this be integrated in dev build process, so everything can be run with one command npm run dev?
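One way to chain the two steps (a sketch that simply composes the existing scripts; it rebuilds only once at startup, so it is not a full watch-mode integration):

```json
{
  "scripts": {
    "dev": "npm run re:build && vite"
  }
}
```

For rebuild-on-save you would additionally need to run the BuckleScript/ReScript watcher alongside the vite dev server.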
| gharchive/issue | 2020-06-10T00:13:42 | 2025-04-01T06:46:10.741633 | {
"authors": [
"markvital"
],
"repo": "vitejs/create-vite-app",
"url": "https://github.com/vitejs/create-vite-app/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1574507690 | Build failed: Unterminated string literal
Describe the bug
When I include a dynamic entry point into my app and start the dev server, I get an error that the build failed.
// vite.config.js
import { defineConfig } from 'vite';
import { createHtmlPlugin } from 'vite-plugin-html';
export default defineConfig(() => {
return {
plugins: [
// @see https://github.com/vbenjs/vite-plugin-html#usage
createHtmlPlugin({
inject: {
data: {
ENTRY_POINT: '/main.js',
},
},
}),
],
};
});
In the index.html file, I dynamically set the entry point using the format...
<script type="module" src="<%- ENTRY_POINT %>"></script>
When I start the vite dev server, I get...
$ vite
08:44:49
VITE v4.1.1 ready in 1658 ms
➜ Local: http://localhost:5173/ 08:44:49
➜ Network: use --host to expose 08:44:49
➜ press h to show help 08:44:49
✘ [ERROR] Unterminated string literal
script:/home/projects/vitejs-vite-bdubvy/index.html?id=0:1:2:
1 │ ">
╵ ^
ERROR Build failed with 1 error: 08:44:49
script:/home/projects/vitejs-vite-bdubvy/index.html?id=0:1:2: ERROR: Unterminated string literal
Even though this error occurs, the app starts just fine.
Reproduction
https://stackblitz.com/edit/vitejs-vite-bdubvy?file=vite.config.js
Steps to reproduce
Run npm install followed by npm run dev
System Info
🍕 npx envinfo --system --npmPackages '{vite,@vitejs/*}' --binaries --browsers
System:
OS: macOS 13.1
CPU: (8) arm64 Apple M1
Memory: 87.66 MB / 16.00 GB
Shell: 5.1.16 - /opt/homebrew/bin/bash
Binaries:
Node: 16.19.0 - ~/.nvm/versions/node/v16.19.0/bin/node
Yarn: 1.22.19 - /opt/homebrew/bin/yarn
npm: 8.5.5 - /opt/homebrew/bin/npm
Watchman: 2022.10.24.00 - /opt/homebrew/bin/watchman
Browsers:
Chrome: 109.0.5414.119
Firefox: 109.0.1
Safari: 16.2
npmPackages:
@vitejs/plugin-basic-ssl: ^1.0.1 => 1.0.1
@vitejs/plugin-react: ^3.1.0 => 3.1.0
vite: ^4.1.1 => 4.1.1
Used Package Manager
yarn
Logs
No response
Validations
[X] Follow our Code of Conduct
[X] Read the Contributing Guidelines.
[X] Read the docs.
[X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
[X] Make sure this is a Vite issue and not a framework-specific issue. For example, if it's a Vue SFC related bug, it should likely be reported to vuejs/core instead.
[X] Check that this is a concrete bug. For Q&A open a GitHub Discussion or join our Discord Chat Server.
[X] The provided reproduction is a minimal reproducible example of the bug.
Duplicate of https://github.com/vbenjs/vite-plugin-html/issues/9
Maybe the fix here is to run transformIndexHtml when scanning. That way we can detect injected scripts and crawl them too.
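For context, the kind of transformIndexHtml plugin involved looks roughly like this (the plugin name and replacement logic are illustrative, not part of Vite); during dependency scanning this hook is currently skipped, which is why the scanner sees the raw template syntax:

```javascript
// Illustrative plugin sketch: replace the `<%- ENTRY_POINT %>` placeholder
// with a concrete module path so esbuild never parses the template syntax.
function entryPointPlugin(entry) {
  return {
    name: 'replace-entry-point', // hypothetical plugin name
    transformIndexHtml(html) {
      return html.replace(/<%-\s*ENTRY_POINT\s*%>/g, entry);
    },
  };
}

const plugin = entryPointPlugin('/src/main.jsx');
const html = plugin.transformIndexHtml(
  '<script type="module" src="<%- ENTRY_POINT %>"></script>'
);
```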
In retrospect, I think it's unlikely that we'll run transformIndexHtml when scanning as it'll slow down the startup. And we haven't got similar reports of this being an issue. I'll go ahead and close this as a wontfix for now.
| gharchive/issue | 2023-02-07T15:09:44 | 2025-04-01T06:46:10.747354 | {
"authors": [
"bluwy",
"fi3ework",
"thril"
],
"repo": "vitejs/vite",
"url": "https://github.com/vitejs/vite/issues/11966",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2114519967 | bindCLIShortcuts need a new option to disable default shortcuts
Description
I'm writing a vite plugin and the plugin needs to implement new custom shortcuts.
The plugin has a vite plugin hook like this:
configureServer(server) {
devServer = server
devServer.bindCLIShortcuts({
print: true,
customShortcuts: [{
key: 'p',
description: 'xxxx',
action(server) {
console.log('test')
}
}]
})
}
When I start a dev server in a project with the command vite, the CLI shortcuts bind twice.
for example
press h + enter will print helper text twice
press o + enter will open browser twice
What is the correct way to implement my plugin?
Suggested solution
Add a new option in bindCLIShortcuts function to disable default shortcuts.
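A rough sketch of the behavior such an option could enable — the option name and merge helper below are hypothetical, not Vitest's actual API:

```javascript
// Hypothetical merge logic: with `disableDefaultShortcuts` set, only custom
// shortcuts are bound; otherwise custom entries override defaults that share
// the same key, so `h`/`o` would no longer be bound twice.
function resolveShortcuts(defaults, custom, { disableDefaultShortcuts = false } = {}) {
  if (disableDefaultShortcuts) return [...custom];
  const byKey = new Map(defaults.map((s) => [s.key, s]));
  for (const s of custom) byKey.set(s.key, s); // custom wins on collisions
  return [...byKey.values()];
}

const defaults = [
  { key: 'h', description: 'show help' },
  { key: 'o', description: 'open in browser' },
];
const custom = [{ key: 'p', description: 'custom action' }];
const merged = resolveShortcuts(defaults, custom);
const customOnly = resolveShortcuts(defaults, custom, { disableDefaultShortcuts: true });
```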
Alternative
No response
Additional context
No response
Validations
[X] Follow our Code of Conduct
[X] Read the Contributing Guidelines.
[X] Read the docs.
[X] Check that there isn't already an issue that request the same feature to avoid creating a duplicate.
Hi @patak-dev
Is there a plan to fix it?
| gharchive/issue | 2024-02-02T09:47:27 | 2025-04-01T06:46:10.753277 | {
"authors": [
"XioDone",
"yangxin9003"
],
"repo": "vitejs/vite",
"url": "https://github.com/vitejs/vite/issues/15781",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1074487857 | Bad handle of (dynamic) optional dependencies during dependency pre-bundling
Describe the bug
Hello,
First of all, thank you for developping this awesome tool. I use it on many projects now as the replacement of my old but stable toolchain.
Let's get into this issue: I'm maintainer of the library svelte-query which uses an optional dependency (broadcast-channel) to provide an opt-in feature.
When developers install the library on a vite project, vite is throwing an error: Failed to resolve import "broadcast-channel" (SvelteStack/svelte-query#63).
However, this dependency is imported dynamically only when the developer actually uses the associated feature.
I think, there are two options to resolve this issue:
Delay the pre-bundling of dynamic imports when they are actually imported (best IMHO)
If solution 1 is not acceptable, maybe we could only delay pre-bundling of dynamic imports that are referenced in optionalDependencies in the package.json.
The only workaround I found is to install the optional dependency even if it's not used. I tried to include @sveltestack/svelte-query, broadcast-channel or both in optimizeDeps.exclude but it's not enough to work around this issue.
I'm not familiar with vite code but I'm happy to help if needed.
Reproduction
1/ Start by creating a fresh project using create-vite:
$ npm init vite
✔ Project name: … vite-project
✔ Select a framework: › svelte
✔ Select a variant: › svelte-ts
$ cd vite-project
$ npm install
2/ Install svelte-query dependency:
$ npm install @sveltestack/svelte-query
3/ Use it in your code (eg: App.svelte):
<script lang="ts">
// append this at the end of imports
import { QueryClientProvider } from "@sveltestack/svelte-query";
</script>
<QueryClientProvider>
<!-- The content of the page -->
</QueryClientProvider>
4/ Run the project:
$ npm run dev
System Info
System:
OS: Linux 5.10 Ubuntu 20.04.3 LTS (Focal Fossa)
CPU: (8) x64 Intel(R) Core(TM) i7-1065G7 CPU @ 1.30GHz
Memory: 1.63 GB / 7.57 GB
Container: Yes
Shell: 5.0.17 - /bin/bash
Binaries:
Node: 16.13.0 - ~/.nvm/versions/node/v16.13.0/bin/node
Yarn: 1.22.5 - /usr/bin/yarn
npm: 8.1.0 - ~/.nvm/versions/node/v16.13.0/bin/npm
npmPackages:
vite: ^2.7.0 => 2.7.1
Used Package Manager
npm
Logs
[vite] Internal server error: Failed to resolve import "broadcast-channel" from "node_modules/@sveltestack/svelte-query/svelte/queryCore/broadcastQueryClient-experimental/index.js". Does the file exist?
Plugin: vite:import-analysis
File: /projects/vite-project/node_modules/@sveltestack/svelte-query/svelte/queryCore/broadcastQueryClient-experimental/index.js
1 | import '../core';
2 | export async function broadcastQueryClient({ queryClient, broadcastChannel = 'svelte-query', }) {
3 | const { BroadcastChannel } = await import('broadcast-channel');
| ^
4 | let transaction = false;
5 | const tx = (cb) => {
at formatError (/projects/vite-project/node_modules/vite/dist/node/chunks/dep-3daf770c.js:42587:46)
at TransformContext.error (/projects/vite-project/node_modules/vite/dist/node/chunks/dep-3daf770c.js:42583:19)
at normalizeUrl (/projects/vite-project/node_modules/vite/dist/node/chunks/dep-3daf770c.js:80909:26)
at async TransformContext.transform (/projects/vite-project/node_modules/vite/dist/node/chunks/dep-3daf770c.js:81049:57)
at async Object.transform (/projects/vite-project/node_modules/vite/dist/node/chunks/dep-3daf770c.js:42803:30)
at async doTransform (/projects/vite-project/node_modules/vite/dist/node/chunks/dep-3daf770c.js:57478:29)
Validations
[X] Follow our Code of Conduct
[X] Read the Contributing Guidelines.
[X] Read the docs.
[X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
[X] Make sure this is a Vite issue and not a framework-specific issue. For example, if it's a Vue SFC related bug, it should likely be reported to https://github.com/vuejs/vue-next instead.
[X] Check that this is a concrete bug. For Q&A open a GitHub Discussion or join our Discord Chat Server.
[X] The provided reproduction is a minimal reproducible example of the bug.
I'm having this exact same issue in a SvelteKit project!
Even in a try catch, we get the same error about vite:import-analysis
let dependency;
try {
dependency = (await import('@org/dependency')).default;
} catch {
//
}
We recently migrated from rollup to Vite and now our old code doesn't work which is super annoying.
I'm seeing this error when conditionally importing a CSS file that doesn't exist - i.e. the error shows up and prevents compilation even when the condition is falsy:
if (import.meta.env.VITE_SOME_FALSY_VAR) {
import('./some-css-file-that-doesnt-exist-in-this-environment.css');
}
Seems like dynamic imports are analyzed statically just like static imports.
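One escape hatch worth noting: Vite skips import-analysis for dynamic imports annotated with a @vite-ignore comment (verify the exact behavior in your setup; the package name below is illustrative):

```javascript
// The @vite-ignore comment tells Vite's import-analysis to leave this dynamic
// import alone, so a missing optional module fails at runtime (where the
// try/catch can handle it) instead of at transform time.
async function loadOptional(name) {
  try {
    return (await import(/* @vite-ignore */ name)).default;
  } catch {
    return undefined; // degrade gracefully when the package is absent
  }
}
```

Note that skipping analysis also means the module is not pre-bundled or rewritten, so this fits plain ESM dependencies best.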
@SomaticIT Hi, just tried to use your repo with SvelteKit and running into this issue. Did you ever find a resolution?
@ZetiMente: Unfortunately, I think there is no simple solution. The only way is to install optional dependencies to allow vite to analyze it. (It will not be bundled during the build)
I think vite should only analyze and bundle dynamic dependencies when they are actually used in the code. However, it means modifying the dependency analyzer, which is a sensitive part of the process.
I would be happy to take some time to create a PR (when possible) but I need some kind of help from a contributor to avoid mistakes.
Hmmmm, I also need to find a way to support an optional dependency. Storybook is working on react 18 support, but in a way that also continues to support older versions. We need to dynamically import react-dom/client if using react 18, but vite throws an error since it can't resolve it in older react versions. I guess we would need to conditionally add react-dom to optimizeDeps.exclude, perhaps?
I am running into this issue as well. I am trying to add an optional dependency to tradingstrategy.ai/frontend application. We would like to include an optional third-party library that's proprietary. If someone clones / forks our app who doesn't have access to the proprietary module (in a private repo within our org), we would like them to still be able to run the dev server, build, etc. We thought we'd be able to dynamically import the module inside a try / catch and gracefully degrade the UI based on whether it loaded.
The "positive" scenario works (dynamically loading when module available). The "negative" scenario results in the same vite:import-analysis error. Adding the module to optimizeDeps.exclude does not seem to help.
Let me know if there's a solution or work-around for this. Thanks!
@kenkunz this is the workaround I arrived at, in addition to optimizeDeps.exclude: https://github.com/storybookjs/builder-vite/pull/320/files#diff-53261c79d121de43b9341f7d87ba28d33c2cdac416c10b0ccf5022f767937e91R68-R75
Thanks for sharing this, @IanVS … I will try out this approach tomorrow!
@IanVS thanks again for your suggestion. I wound up using a different work-around – inspired by another issue you opened: #5728 [Feature request] Allow import.meta.glob from node_modules (addressed by PR #6056).
I am now loading the optional dep using:
const modules = import.meta.glob('/node_modules/chartiq/{js,css}/*.{js,css}');
…and then checking Object.keys(modules).length / importing the modules I need if available. Works great / no vite:import-analysis errors.
My only slight hesitancy with this approach is that it's a Vite-specific feature. But given that it's a work-around to a Vite-specific issue … and this is in a SvelteKit app (no real likelihood of moving away from Vite) … I'm fine with that.
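In sketch form, with the glob result stubbed by a plain object so the availability check itself is runnable (real code would use import.meta.glob as shown above):

```javascript
// import.meta.glob returns an object mapping matched file paths to lazy
// loaders; an empty object means the optional dependency is not installed.
function optionalDepAvailable(modules) {
  return Object.keys(modules).length > 0;
}

// Stand-ins for the glob result with the package installed vs. missing.
const installed = {
  '/node_modules/chartiq/js/chart.js': () => Promise.resolve({ default: {} }),
};
const missing = {};
```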
@IanVS would you be able to provide a minimal setup for this so we can reproduce it? Are you using enforce: 'pre' in your plugin to get before the internal resolve plugin? For reference plugin react is now using a proxy when dealing with the jsx runtime, so something, like you were doing, should continue working now too:
https://github.com/vitejs/vite/blob/5151e7466bdb86066991b77df8db6ada066bb71f/packages/plugin-react/src/index.ts#L363
Yes, I'm using enforce: 'pre'.
I'm sorry, I don't quite see the connection to the jsx runtime.
I'll work on a minimal reproduction.
Reporting back, setting an alias to point to my file seems like a usable workaround. I wasn't able to use a virtual file using the approach from 'react/jsx-runtime'. One small downside is that rollup warns about generating an empty chunk, but I can live with that.
I tried to create a minimal reproduction, but wasn't able to reproduce the issue when using my previous workaround without an alias on vite 3. It failed in storybook, but not when I tried to reproduce it. I'm happy to use the alias unless there's a downside to it I don't know of.
I think an alias is a better solution here, but it would be good to avoid that warning :)
I can probably add a rollupOptions.onWarn to filter it out.
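Put together, the two pieces look roughly like this (the stub path and alias key are illustrative; note Rollup spells the option onwarn):

```javascript
// Sketch of a vite config combining the alias workaround with a warning
// filter: the optional package resolves to a local empty stub, and Rollup's
// "Generated an empty chunk" (EMPTY_BUNDLE) warning is suppressed.
const config = {
  resolve: {
    alias: { 'react-dom/client': '/shims/empty-module.js' }, // illustrative stub
  },
  build: {
    rollupOptions: {
      onwarn(warning, defaultHandler) {
        if (warning.code === 'EMPTY_BUNDLE') return; // drop the empty-chunk warning
        defaultHandler(warning);
      },
    },
  },
};
```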
| gharchive/issue | 2021-12-08T14:44:54 | 2025-04-01T06:46:10.774728 | {
"authors": [
"IanVS",
"JackPriceBurns",
"SomaticIT",
"ZetiMente",
"axelboc",
"kenkunz",
"patak-dev"
],
"repo": "vitejs/vite",
"url": "https://github.com/vitejs/vite/issues/6007",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1595461944 | docs: define no longer follows the esbuild rules
Description
In Vite 2.x, config.define is passed to esbuild to perform the replacement.
https://github.com/vitejs/vite/blob/d4886ea9567106be947538003757ba817976e080/packages/vite/src/node/optimizer/index.ts#L279-L299
For now, config.define is no longer passed to esbuild when performing optimization. The line this PR deleted is introduced in https://github.com/vitejs/vite/issues/5570. The reproduction that time works on Vite4 now.
Made a new demo to confirm https://stackblitz.com/edit/vite-nitjua?file=package-lock.json,vite.config.js&file=main.js
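For readers unfamiliar with define: it performs compile-time constant replacement, roughly like the toy substitution below (a sketch, not Vite's or esbuild's actual implementation, which replaces identifiers at the AST level):

```javascript
// Toy model of `define`: each key is replaced by its (already serialized)
// value in the source text. Real implementations match whole identifiers
// rather than raw substrings.
function applyDefine(code, define) {
  let out = code;
  for (const [key, value] of Object.entries(define)) {
    out = out.split(key).join(value);
  }
  return out;
}

const define = { __APP_VERSION__: JSON.stringify('1.2.3') };
const out = applyDefine('console.log(__APP_VERSION__)', define);
```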
Additional context
We should bring this back after https://github.com/vitejs/vite/pull/11151 landed.
What is the purpose of this pull request?
[ ] Bug fix
[ ] New Feature
[x] Documentation update
[ ] Other
Before submitting the PR, please make sure you do the following
[x] Read the Contributing Guidelines.
[x] Read the Pull Request Guidelines and follow the PR Title Convention.
[x] Check that there isn't already a PR that solves the problem the same way to avoid creating a duplicate.
[x] Provide a description in this PR that addresses what the PR is solving, or reference the issue that it solves (e.g. fixes #123).
[ ] Ideally, include relevant tests that fail without this PR but pass with it.
Looks like the limitation was indirectly removed in https://github.com/vitejs/vite/pull/8606. But I think if we're planning to go with https://github.com/vitejs/vite/pull/11151 one day (which I could revisit again), maybe it's good to pre-emptively mention this limitation so one that PR lands, it doesn't cause too much breaking changes to users.
Looks like the limitation was indirectly removed in https://github.com/vitejs/vite/pull/8606. But I think if we're planning to go with https://github.com/vitejs/vite/pull/11151 one day (which I could revisit again), maybe it's good to pre-emptively mention this limitation so one that PR lands, it doesn't cause too much breaking changes to users.
Makes sense. 😜
| gharchive/pull-request | 2023-02-22T16:56:23 | 2025-04-01T06:46:10.784061 | {
"authors": [
"bluwy",
"fi3ework"
],
"repo": "vitejs/vite",
"url": "https://github.com/vitejs/vite/pull/12156",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
941836579 | fix(create-vite): distinguish pnpm pkgManager
Description
Same as #4193, with the conflict resolved.
Additional context
What is the purpose of this pull request?
[x] Bug fix
[ ] New Feature
[ ] Documentation update
[ ] Other
Before submitting the PR, please make sure you do the following
[x] Read the Contributing Guidelines.
[x] Read the Pull Request Guidelines and follow the Commit Convention.
[x] Check that there isn't already a PR that solves the problem the same way to avoid creating a duplicate.
[x] Provide a description in this PR that addresses what the PR is solving, or reference the issue that it solves (e.g. fixes #123).
[x] Ideally, include relevant tests that fail without this PR but pass with it.
@antfu @Shinigami92 review
@patak-js Maybe you forgot it?
Tests are blocked, I don't know why. Would you try to merge main?
Tests are blocked, I don't know why. Would you try to merge main?
This may be because I sent a review request to you, and the UI mentioned that 'patak-js requested changes'. I have marked that conversation as resolved. Sorry, I don't have much experience in this field.
Tests are blocked, I don't know why. Would you try to merge main?
This may be because I sent a review request to you, and the UI mentioned that 'patak-js requested changes'. I have marked that conversation as resolved
I don't think that is the case, it looks like a GitHub bug to me
Sorry, I don't have much experience in this field.
Thanks for this PR, this is a great addition and I hope we'll see you around with other improvements later
| gharchive/pull-request | 2021-07-12T09:05:55 | 2025-04-01T06:46:10.791215 | {
"authors": [
"cabbage9",
"patak-js",
"ryanmoyo"
],
"repo": "vitejs/vite",
"url": "https://github.com/vitejs/vite/pull/4220",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1065392712 | chore: use cjs extension with scripts
Description
Use .cjs extension with Node.js scripts
Additional context
I was looking at what would be required to set "type": "module". This is a change that can be broken off easily and done ahead of time in a way that works with or without setting "type": "module"
Longer-term we'll probably want to convert these to ESM, but that'd make the migration more of a big bang and it'd probably be easier to do it incrementally
What is the purpose of this pull request?
[ ] Bug fix
[ ] New Feature
[ ] Documentation update
[X] Other
Before submitting the PR, please make sure you do the following
[X] Read the Contributing Guidelines.
[X] Read the Pull Request Guidelines and follow the Commit Convention.
[X] Check that there isn't already a PR that solves the problem the same way to avoid creating a duplicate.
[X] Provide a description in this PR that addresses what the PR is solving, or reference the issue that it solves (e.g. fixes #123).
[ ] Ideally, include relevant tests that fail without this PR but pass with it.
I pushed an update to rename all files in the scripts directories. I hadn't done this earlier because the tests were failing locally, but I've discovered that happens for me even on main and is unrelated to this change
| gharchive/pull-request | 2021-11-28T17:43:38 | 2025-04-01T06:46:10.796889 | {
"authors": [
"benmccann"
],
"repo": "vitejs/vite",
"url": "https://github.com/vitejs/vite/pull/5877",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1822020492 | ReferenceError Occurring After Updating Vitest: Is 'glob' or 'path' Affected?
Describe the bug
I have the following configuration in the setupFiles of vitest:
import { globSync } from 'glob';
import { resolve } from 'path';
const appDir = resolve(__dirname, '../src/app');
// **/api.ts
globSync(`${appDir}/**/api.ts`).forEach((path) => {
vi.mock(path, () => ({ default: {} }));
});
// **/*.graphql
globSync(`${appDir}/**/*.graphql`).forEach((path) => {
vi.mock(path, () => ({ default: {} }));
});
After updating vitest from 0.28.5 to 0.33.0, I received the following error:
ReferenceError: path is not defined
What could be the cause of this?
Does updating vitest affect 'glob' or 'path'?
Reproduction
npm install vitest@0.33.0
Replace the setupFiles configuration in the Vitest config with the following code snippet...
Run the test suite using the command...
See error "ReferenceError: path is not defined"
System Info
- OS: macOS Big Sur 11.2.3
- Vitest version: 0.33.0
- Node.js version: 14.15.1
- NPM/Yarn version: 6.14.8/1.22.10
Used Package Manager
npm
Validations
[X] Follow our Code of Conduct
[X] Read the Contributing Guidelines.
[X] Read the docs.
[X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
[X] Check that this is a concrete bug. For Q&A open a GitHub Discussion or join our Discord Chat Server.
[X] The provided reproduction is a minimal reproducible example of the bug.
vi.mock is hoisted, you cannot call it like this.
If you define dynamic mocks, use vi.doMock
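The difference in sketch form — vi is replaced by a tiny recorder here so the snippet is self-contained; in a real test it comes from vitest:

```javascript
// vi.mock factories are hoisted above all other statements at transform time,
// so loop variables like `path` do not exist yet when they run (hence the
// ReferenceError). vi.doMock is evaluated in place and can use runtime values.
const registered = [];
const vi = {
  doMock: (path, factory) => registered.push({ path, mod: factory() }), // stand-in for vitest's vi
};

const apiFiles = ['/src/app/a/api.ts', '/src/app/b/api.ts']; // stand-in for globSync()
for (const path of apiFiles) {
  vi.doMock(path, () => ({ default: {} })); // runs here; `path` is in scope
}
```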
| gharchive/issue | 2023-07-26T09:53:41 | 2025-04-01T06:46:10.816443 | {
"authors": [
"naoki-kubota110",
"sheremet-va"
],
"repo": "vitest-dev/vitest",
"url": "https://github.com/vitest-dev/vitest/issues/3815",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2007355210 | [v1.0.0-beta5] Using Yarn PnP results in "MISSING DEP Can not find dependency 'vite'" error.
Describe the bug
Using Yarn 4.0.2 with PnP mode causes Vitest to error with MISSING DEP Can not find dependency 'vite'
Vitest v0.34.6 works in the same environment with PnP mode.
Reproduction
https://github.com/Unnoen/Vitest-1.0-Yarn-PnP-Reproduction
System Info
System:
OS: Windows 11 10.0.22631
CPU: (32) x64 13th Gen Intel(R) Core(TM) i9-13900K
Memory: 22.18 GB / 63.78 GB
Binaries:
Node: 20.8.1 - C:\Program Files\nodejs\node.EXE
Yarn: 4.0.2 - C:\Program Files\nodejs\yarn.CMD
npm: 10.1.0 - C:\Program Files\nodejs\npm.CMD
Browsers:
Edge: Chromium (119.0.2151.72)
Internet Explorer: 11.0.22621.1
Used Package Manager
yarn
Validations
[X] Follow our Code of Conduct
[X] Read the Contributing Guidelines.
[X] Read the docs.
[X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
[X] Check that this is a concrete bug. For Q&A open a GitHub Discussion or join our Discord Chat Server.
[X] The provided reproduction is a minimal reproducible example of the bug.
I am facing this issue too, on vitest@1.0.1.
It seems to be related to #4413, #899, as it's ultimately local-pkg that produces the incorrect results. This has been reported in https://github.com/antfu/local-pkg/issues/2 but it's been stuck for 1.5 years now 🤷‍♂️
| gharchive/issue | 2023-11-23T01:46:18 | 2025-04-01T06:46:10.822911 | {
"authors": [
"Unnoen",
"wojtekmaj"
],
"repo": "vitest-dev/vitest",
"url": "https://github.com/vitest-dev/vitest/issues/4575",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2084626083 | Method getUnhandledErrors called on RPC, but does not exist and throws error
Describe the bug
Running my tests in vitest browser mode with Playwright throws an error inside the vitest JS-bundle.
https://github.com/vitest-dev/vitest/blob/39814357024e96b81546ec22d6aaf777b8db6cef/packages/ui/client/composables/client/index.ts#L79
Reproduction
git clone https://github.com/enjikaka/rutabaga-failing-vitest
run npm test
System Info
System:
OS: macOS 12.7.3
CPU: (8) x64 Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz
Memory: 2.76 GB / 16.00 GB
Shell: 5.8.1 - /bin/zsh
Binaries:
Node: 18.18.2 - ~/.nvm/versions/node/v18.18.2/bin/node
npm: 9.8.1 - ~/.nvm/versions/node/v18.18.2/bin/npm
bun: 1.0.22 - ~/.bun/bin/bun
Browsers:
Brave Browser: 120.1.61.109
Chrome Canary: 122.0.6249.0
Safari: 17.1.2
npmPackages:
@vitest/browser: ^1.2.0 => 1.2.0
vite: ^5.0.11 => 5.0.11
vitest: ^1.1.3 => 1.1.3
Used Package Manager
npm
Validations
[X] Follow our Code of Conduct
[X] Read the Contributing Guidelines.
[X] Read the docs.
[X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
[X] Check that this is a concrete bug. For Q&A open a GitHub Discussion or join our Discord Chat Server.
[X] The provided reproduction is a minimal reproducible example of the bug.
You have a version mismatch in your dependencies. Vitest and vitest/browser should be the same version - maybe that's the problem?
You have a version mismatch in your dependencies. Vitest and vitest/browser should be the same version - maybe that's the problem?
Syncing the versions helped! Thanks.
I suppose the peerDependencies of @vitest/browser should be updated to 1.2.0 and not allow lower version then?
Yes, it should be!
Made a PR!
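The change amounts to tightening the peer range in @vitest/browser's package.json (illustrative fragment):

```json
{
  "peerDependencies": {
    "vitest": "1.2.0"
  }
}
```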
I think the correct way to solve this would be to conditionally use getUnhandledErrors and not crash when an older version of vitest is used with the latest @vitest/browser.
I think the correct way to solve this would be to conditionally use getUnhandledErrors and not crash when an older version of vitest is used with the latest @vitest/browser.
Can you explain why you think this would be better?
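For reference, the conditional use being discussed would look roughly like this (the rpc shape here is illustrative, and the real call is asynchronous):

```javascript
// Feature-detect the RPC method so a newer @vitest/browser client does not
// crash against an older vitest server that lacks getUnhandledErrors.
function fetchUnhandledErrors(rpc) {
  return typeof rpc.getUnhandledErrors === 'function'
    ? rpc.getUnhandledErrors()
    : []; // older servers: report no unhandled errors instead of throwing
}

// Simulated servers: a new one exposing the method and an old one without it.
const newServer = { getUnhandledErrors: () => [{ message: 'boom' }] };
const oldServer = {};
```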
| gharchive/issue | 2024-01-16T19:03:17 | 2025-04-01T06:46:10.832020 | {
"authors": [
"AriPerkkio",
"enjikaka",
"sheremet-va"
],
"repo": "vitest-dev/vitest",
"url": "https://github.com/vitest-dev/vitest/issues/4983",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2261877463 | Stack overflow caused by large output
Describe the bug
This appears to be a regression of #3060. When an error occurs with a very large output (in my case from @testing-library/react), picocolors causes a stack overflow.
⎯⎯⎯⎯⎯⎯ Unhandled Error ⎯⎯⎯⎯⎯⎯⎯
RangeError: Maximum call stack size exceeded
❯ replaceClose node_modules/picocolors/picocolors.js:22:21
❯ replaceClose node_modules/picocolors/picocolors.js:25:30
❯ replaceClose node_modules/picocolors/picocolors.js:25:30
❯ replaceClose node_modules/picocolors/picocolors.js:25:30
❯ replaceClose node_modules/picocolors/picocolors.js:25:30
❯ replaceClose node_modules/picocolors/picocolors.js:25:30
❯ replaceClose node_modules/picocolors/picocolors.js:25:30
❯ replaceClose node_modules/picocolors/picocolors.js:25:30
❯ replaceClose node_modules/picocolors/picocolors.js:25:30
Reproduction
Unfortunately I don't have a reproduction handy. I suspect this would reproduce with RTL, a large dom and DEBUG_PRINT_LIMIT=100000 vitest.
System Info
System:
OS: macOS 14.4.1
CPU: (10) arm64 Apple M1 Max
Memory: 5.56 GB / 64.00 GB
Shell: 5.2.15 - /opt/homebrew/bin/bash
Binaries:
Node: 18.18.0 - ~/.volta/tools/image/node/18.18.0/bin/node
Yarn: 1.22.17 - ~/.volta/tools/image/yarn/1.22.17/bin/yarn
npm: 9.8.1 - ~/.volta/tools/image/node/18.18.0/bin/npm
Browsers:
Brave Browser: 121.1.62.156
Chrome: 124.0.6367.62
Chrome Canary: 121.0.6125.0
Edge: 124.0.2478.51
Firefox: 125.0.1
Safari: 17.4.1
npmPackages:
vitest: ^1.5.0 => 1.5.0
Used Package Manager
yarn
Validations
[X] Follow our Code of Conduct
[X] Read the Contributing Guidelines.
[X] Read the docs.
[X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
[X] Check that this is a concrete bug. For Q&A open a GitHub Discussion or join our Discord Chat Server.
[X] The provided reproduction is a minimal reproducible example of the bug.
The linked issue https://github.com/vitest-dev/vitest/issues/3060 seems related, but I'm not sure how to get the error considering the fix https://github.com/vitest-dev/vitest/pull/3078 seems okay.
I tried something like this and still seems fine with DEBUG_PRINT_LIMIT=100000:
const node = <div id="hey">
{[...Array(5000)].map((_, i) => <div key={i} id={i} style={{ color: "red" }}><span/></div>)}
</div>
render(node);
screen.getByRole('no-such-thing') // testing-library throws an error
TestingLibraryElementError: Unable to find an accessible element with the role "no-such-thing"
There are no accessible roles. But there might be some inaccessible roles. If you wish to access them, then set the `hidden` option to `true`. Learn more about this here: https://testing-library.com/docs/dom-testing-library/api-queries#byrole
Ignored nodes: comments, script, style
<body>
<div>
<div
id="hey"
>
Can I get this issue reopened? I've created a reproduction. Just yarn && yarn test should reproduce the bug (npm should work fine too). The key thing turns out to be CI=1, it seems like there is some code path that's only happening in CI that also needs the guard.
Let me know if I should open a new issue instead of reopening this one.
Confirmed the issue. Thanks for the reproduction :+1:
It seems like the call stack error is due to a specific implementation detail of picocolors and technically it can be avoided, so I raised an issue upstream https://github.com/alexeyraspopov/picocolors/issues/63.
As upstream might not progress fast, Vitest can still fix this by patching picocolors or simply truncating the error message wherever this is happening.
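In sketch form, the recursion can be turned into a loop so the stack stays flat regardless of input size (an illustration of the idea behind the upstream fix, not the exact picocolors code):

```javascript
// picocolors re-opens a color after every nested close sequence it finds.
// Done recursively, each occurrence adds a call frame, so huge outputs
// overflow the stack; an iterative scan does the same work with O(1) depth.
function replaceCloseIterative(str, close, replacement) {
  let out = '';
  let start = 0;
  let index = str.indexOf(close);
  while (index !== -1) {
    out += str.slice(start, index) + replacement;
    start = index + close.length;
    index = str.indexOf(close, start);
  }
  return out + str.slice(start);
}

// Tens of thousands of close codes would blow a recursive version's stack.
const big = '\u001b[39m'.repeat(50000) + 'end';
const fixed = replaceCloseIterative(big, '\u001b[39m', '\u001b[31m');
```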
The upstream issue is fixed and thanks @hi-ogawa for contribution. picocolors@1.0.1 is published and now should be more resilient to large colored outputs.
@hi-ogawa are you planning to bump the version in vitest? I'm happy to submit that PR if that's helpful.
@RobinClowers picocolors is not bundled with vitest package, so you should be able to update the transitive dependency on your own. Depending on package manager, for example, pnpm dedupe would help.
I also confirmed the last reproduction is fixed with package.json overrides
https://stackblitz.com/edit/vitest-dev-vitest-a3p1xh?file=package.json
Yeah, I already updated my app, just wanted to make sure it would land here, sounds like the renovate PR has it covered!
Cool, thanks for the reminder! As you confirmed it's working for you, I'll close this issue then.
I just remembered that Vitest has utilities which is mostly a copy of picocolors internally, so I'm updating that in a separate PR https://github.com/vitest-dev/vitest/pull/5733.
| gharchive/issue | 2024-04-24T18:01:24 | 2025-04-01T06:46:10.844887 | {
"authors": [
"RobinClowers",
"alexeyraspopov",
"hi-ogawa"
],
"repo": "vitest-dev/vitest",
"url": "https://github.com/vitest-dev/vitest/issues/5614",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2473531930 | chore: add pkg.pr.new
Description
This PR includes:
~update PR template to include /publish tip~
~add CR comment workflow via /publish (tip on the PR template)~
add CR workflow triggered by the cr-tracked label on PRs or a push to main
release all packages: we can add one workflow per package releasing only the corresponding one
This PR requires:
create a cr-tracked label
add pkg-pr-new app to vitest package
~add CR_PAT secret to Actions with a token with public_repo (Classic Token): later we can automate it with something like this https://github.com/vitest-dev/vitest/blob/main/.github/workflows/ecosystem-ci-trigger.yml#L63C1-L68 (I'm a newbie on GH Actions/workflows)~
This PR also requires a review to exclude CR for docs, for example; we can use something like this in on.pull_request:
paths-ignore:
- '.github/**'
- 'test/**'
- 'examples/**'
- 'docs/**'
- '*.md'
/cc @Aslemammad
Please don't delete this checklist! Before submitting the PR, please make sure you do the following:
[ ] It's really useful if your PR references an issue where it is discussed ahead of time. If the feature is substantial or introduces breaking changes without a discussion, PR might be closed.
[ ] Ideally, include a test that fails without this PR but passes with it.
[ ] Please, don't make changes to pnpm-lock.yaml unless you introduce a new test example.
Tests
[ ] Run the tests with pnpm test:ci.
Documentation
[ ] If you introduce new functionality, document it. You can run documentation with pnpm run docs command.
Changesets
[ ] Changes in changelog are generated from PR name. Please, make sure that it explains your changes in an understandable manner. Please, prefix changeset messages with feat:, fix:, perf:, docs:, or chore:.
I love this.
@userquin We discussed about how to trigger pkg.pr.new and we agreed "cr-tracked" label based workflow is okay. I'll push a few commits here and see how it goes.
@sheremet-va We haven't installed the app yet? https://github.com/vitest-dev/vitest/actions/runs/11050071341/job/30697038656?pr=6362#step:7:13
{"url":"/check","statusCode":404,"statusMessage":"Not Found","message":"The app https://github.com/apps/pkg-pr-new is not installed on vitest-dev/vitest."}
@sheremet-va We haven't installed the app yet? vitest-dev/vitest/actions/runs/11050071341/job/30697038656?pr=6362#step:7:13
{"url":"/check","statusCode":404,"statusMessage":"Not Found","message":"The app https://github.com/apps/pkg-pr-new is not installed on vitest-dev/vitest."}
We didn't install anything
Should be installed now
you need to add the app to vitest repo: https://github.com/stackblitz-labs/pkg.pr.new?tab=readme-ov-file#setup
@userquin We discussed about how to trigger pkg.pr.new and we agreed "cr-tracked" label based workflow is okay. I'll push a few commits here and see how it goes.
Check vitepress repo, using vue bot, check cr.yml and cr-comment.yml here: https://github.com/vuejs/vitepress/tree/main/.github/workflows
@userquin cr-comment.yml doesn't look necessary since we can just add the label manually when we want to allow PRs to be released. Isn't it only using the /publish comment to indirectly trigger adding the cr-tracked label?
@hi-ogawa I've sent you an invitation to vue discord server and a link to Vitepress room/channel, I asked brc_dd and his answer:
Actually, github actions can also add labels but labels added via their default bot cannot trigger other actions. 🫠 That's why add it using separate user.
Actually, github actions can also add labels but labels added via their default bot cannot trigger other actions. 🫠 That's why add it using separate user.
We don't want to add labels automatically. The idea is that the maintainer adds a label to allow CR manually
Yeah, I just also checked re-labeling "cr-tracked" and this alone can also trigger CR correctly. https://github.com/vitest-dev/vitest/actions/workflows/cr.yml
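The label-gated trigger looks roughly like this illustrative fragment (not the exact cr.yml merged here; setup steps like installing pnpm are omitted):

```yaml
name: Continuous Releases
on:
  push:
    branches: [main]
  pull_request:
    types: [labeled]

jobs:
  release:
    # run on every main push, or on PRs once a maintainer adds the label
    if: github.event_name == 'push' || github.event.label.name == 'cr-tracked'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx pkg-pr-new publish './packages/*'
```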
I tested a few examples locally and also on stackblitz and it looks working https://stackblitz.com/edit/vitest-dev-vitest-ei9jaw?file=package.json
I think we can proceed with this PR.
For reference, adding the label automatically refers to this scheme:
https://github.com/vitejs/vite/pull/18211
I think adding a label instead of a comment is quite nice too, so good to move forward with this PR as is. I think in Vite we'll keep the comment (with the label workaround). We can try both approaches and see what ends up working best.
Do we need a mention in the readme/documentation that we have this? We have this for example: https://vitest.dev/guide/#using-unreleased-commits
https://github.com/vite-pwa/vite-plugin-pwa/blob/main/.github/pull_request_template.md
We also publish every commit on main. And the comment can only be added by the maintainers
I just pushed a few commits but pkg-pr-new publish is getting stuck today.
@Aslemammad Could it be some infra issue? Would be great to know if there's something we can do to diagnose issue.
interesting, can you trigger a re-run for the failed workflow?
Thanks for the help. I started a re-run and it looks like it's still stuck.
I think it is fixed now, just triggered a re-run!
@Aslemammad Thanks! I confirmed pkg-pr-new is working now https://github.com/vitest-dev/vitest/actions/runs/11115894967/job/30904769639?pr=6362
Happy to hear that ❤️
| gharchive/pull-request | 2024-08-19T14:39:52 | 2025-04-01T06:46:10.867276 | {
"authors": [
"Aslemammad",
"hi-ogawa",
"patak-dev",
"sheremet-va",
"userquin"
],
"repo": "vitest-dev/vitest",
"url": "https://github.com/vitest-dev/vitest/pull/6362",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
513999919 | Question regarding about join
I am just starting to learn your library and I think its pretty awesome.
I have a question though.
Suppose I have two Entities
@entity
class Customer {
  @PrimaryKey(autoGenerate: true)
  final int id;
  final String name;

  Customer(this.id, this.name);
}

@Entity(foreignKeys: [
  ForeignKey(
      childColumns: ["customer_id"],
      parentColumns: ["id"],
      entity: Customer)
])
class Order {
  @PrimaryKey(autoGenerate: true)
  final int id;
  final String itemName;

  @ColumnInfo(name: "customer_id")
  final int customerId;

  Order(this.id, this.itemName, this.customerId);
}
And for example this will be my query on the DAO
SELECT Customer.name, Order.itemName
FROM Order
INNER JOIN Customer
ON Customer.id = Order.customer_id
WHERE Order.customer_id = :customerId
How will I write the abstract method in the DAO so that it can take all the required values? Right now I'm thinking of either using two queries (without the join) or creating another class (not an Entity) that has all the required fields. Something like this:
class CustomerOrder {
String itemName;
String customerName;
}
and an abstract method on the DAO like this:
@Query("QUERY")
Future<CustomerOrder> foo(int customerId);
Will this approach work, and are there any possible bad effects in the long run?
If you can recommend a better approach than this please do.
Cheers! Thanks for your hard work!
This is currently only possible with DatabaseViews. But I am working on an under-the-hood improvement (#321) which would make the feature you describe possible. Nevertheless, we will be tracking this feature in #94.
Could you possibly give an example of how to use the DatabaseViews? I'm not sure how to go about doing a join @mqus
Let's go with the example above: there has to be a class which represents your result, like CustomerOrder. You have to annotate that class with @DatabaseView('YOUR JOIN QUERY WITHOUT :PARAMETERS') and add that view to the views field of the database annotation, like you would with an entity. The class for the example should look like the following (notice the missing WHERE clause which previously filtered on the parameter; we'll add that later. Also note that Customer.name is aliased to customerName so it maps onto the field, and that Order.customer_id is selected so we can filter on it):
@DatabaseView('SELECT Customer.name AS customerName, Order.itemName, Order.customer_id AS customerId FROM Order INNER JOIN Customer ON Customer.id = Order.customer_id')
class CustomerOrder {
  String itemName;
  String customerName;
  int customerId;
}
This will create a virtual table in the database on which you can perform selects like on a real table. So the query in the DAO only has to filter the database view on the parameter. For the example it looks like this:
@Query('SELECT * FROM CustomerOrder WHERE customerId = :customerId')
Future<CustomerOrder> foo(int customerId);
Actually, you don't really have to reference the view in the query of your DAO method and could just use the full join query, as long as the view exists (which is necessary to keep e.g. Streams working as expected) and the structure of the class matches the query output. However, I would not recommend relying on this too much, since it is an implementation detail which could change in the future.
| gharchive/issue | 2019-10-29T15:18:47 | 2025-04-01T06:46:10.879493 | {
"authors": [
"Bricktheworld",
"Chr1st-oo",
"mqus"
],
"repo": "vitusortner/floor",
"url": "https://github.com/vitusortner/floor/issues/213",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1611904714 | Fix: Set the logging level to logging.DEBUG
Otherwise no logging is displayed on my local dev server.
I've no clue why it didn't work yesterday. But now I no longer have problems with any log level.
| gharchive/pull-request | 2023-03-06T17:32:55 | 2025-04-01T06:46:10.887210 | {
"authors": [
"sveneberth"
],
"repo": "viur-framework/viur-core",
"url": "https://github.com/viur-framework/viur-core/pull/669",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
242909729 | update jgit dependency to 4.8.0.201706111038-r
The previous version (4.5.0.201609210915-r) had false positives when testing for uncommitted changes in a git lfs repository. As a result, the plugin prevented creating a release version.
see: #76
@mld-ger will try and get this merged in soon. Sorry a bit busy with work and school.
| gharchive/pull-request | 2017-07-14T06:33:05 | 2025-04-01T06:46:10.894763 | {
"authors": [
"mld-ger",
"vivin"
],
"repo": "vivin/gradle-semantic-build-versioning",
"url": "https://github.com/vivin/gradle-semantic-build-versioning/pull/77",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
Bug: Exception when "replying" to a message.
Describe the bug: When using the User Longpoll, if someone sends a message with a reply (a reply, not a forward), an Exception occurs, regardless of which side the reply comes from (the page owner's side or not):
'user_id'
Traceback (most recent call last):
File ".../vkbottle/dispatch/base.py", line 22, in route
await view.handle_event(event, ctx_api, self.state_dispenser)
File ".../vkbottle/dispatch/views/abc/message.py", line 49, in handle_event
message = await self.get_message(event, ctx_api, self.replace_mention)
File ".../vkbottle/dispatch/views/user/message.py", line 28, in get_message
return await message_min(event[1], ctx_api, replace_mention)
File ".../vkbottle/tools/dev/mini_types/user/message.py", line 42, in message_min
return MessageMin(
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic/main.py", line 1012, in pydantic.main.validate_model
File ".../vkbottle/tools/dev/mini_types/user/message.py", line 29, in __foreign_messages
foreign_message["user_id"] = values["user_id"]
KeyError: 'user_id'
Your code (optional): pastebin
Fill in the technical information fields:
vkbottle: 4.3.2
vkbottle-types: 5.131.146.2.post1
OS: Manjaro (Arch) Linux
off-topic: why does the issue get assigned to a different person?
Fixed in 366d824234d91740e00cac0b01119c0fd298d919
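For context, the traceback points at an unconditional dict copy (`foreign_message["user_id"] = values["user_id"]`). A minimal sketch of the defensive pattern such a fix typically uses (illustrative only, not necessarily the code in the commit above):

```python
def copy_user_id(foreign_message: dict, values: dict) -> dict:
    """Propagate user_id to a foreign (reply/forward) message only when present.

    Reply payloads from the user longpoll can omit "user_id", which is what
    made the original unconditional copy raise KeyError.
    """
    if "user_id" in values:
        foreign_message["user_id"] = values["user_id"]
    return foreign_message
```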
:disappointed:
| gharchive/issue | 2022-05-29T10:39:57 | 2025-04-01T06:46:10.911362 | {
"authors": [
"FeeeeK",
"Zensonaton"
],
"repo": "vkbottle/vkbottle",
"url": "https://github.com/vkbottle/vkbottle/issues/516",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1723353907 | flexibleSlotTimeLimits ignores background events
not sure if this might be on purpose, but flexibleSlotTimeLimits: true ignores background events, so they disappear when there is no event after or before them
Yes, this is expected behavior in the current version. Background events are not taken into account. For example, in the main consumer of the library (Bookly plugin), background events indicate non-working time and it would be strange for it to affect anything.
Would you be fine with merging, if I add an option flag which would allow background events to limit the view?
The ability to consider background events was added in version 1.3.0. See new settings for flexibleSlotTimeLimits.
Thanks a lot for adding this and for maintaining that package so well.
| gharchive/issue | 2023-05-24T07:27:54 | 2025-04-01T06:46:10.923423 | {
"authors": [
"mrvnklm",
"vkurko"
],
"repo": "vkurko/calendar",
"url": "https://github.com/vkurko/calendar/issues/133",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1775661207 | Crashes with React Native (Android)
Issue Description
Hello. I'm trying to integrate Human with React Native, for a mobile android app. I've been wrapping my head around this for the past 3 days, to no avail. I would greatly appreciate any help in setting this up.
Steps to Reproduce
Clone this repository: https://github.com/Lamarcke/human-test
yarn
yarn start
The following error (which crashes the APP) should appear:
Cannot evaluate flag 'CANVAS2D_WILL_READ_FREQUENTLY_FOR_GPU': no evaluation function found.
I'm also receiving the "The kernel x for backend "wasm" is already registered" warnings.
Some extra info:
I've already tried to set this up using the latest expo, tfjs-react-native (using --force to override the outdated dependencies) and expo-gl versions. The current @tensorflow/tfjs-react-native is using outdated packages, so I'm trying their example, which includes tested versions of the dependencies. The example does work, but human still doesn't.
I've also tried to use react-native-canvas the way you described here: https://github.com/vladmandic/human/issues/327#issuecomment-1410359418 and just importing the package.
Expected Behavior
Human should load since the no-bundle version is being used, and tfjs-react-native is set up following their example (https://github.com/tensorflow/tfjs-examples/tree/master/react-native/image-classification). I also have react-native-canvas installed.
Environment
React Native with Expo
Human library version? ^3.0.7
Built-in demo or custom code?
Type of module used (e.g. js, esm, esm-nobundle)?
TensorFlow/JS version (if not using bundled module)? 0.8.0 (tfjs-react-native)
Browser or NodeJS and version (e.g. NodeJS 14.15 or Chrome 89)? Node 18
OS and Hardware platform (e.g. Windows 10, Ubuntu Linux on x64, Android 10)? Android on Expo Go
Packager (if any) (e.g, webpack, rollup, parcel, esbuild, etc.)?
Framework (if any) (e.g. React, NextJS, etc.)? Expo & React Native
Diagnostics
Check out any applicable diagnostic steps
Additional
For installation or startup issues include your package.json
{
"name": "image-classification-with-tfjs-react-native-expo",
"main": "index.js",
"version": "0.0.1",
"license": "Apache-2.0",
"scripts": {
"start": "expo start",
"start:tunnel": "expo start --tunnel"
},
"dependencies": {
"@react-native-async-storage/async-storage": "~1.17.3",
"@tensorflow-models/mobilenet": "2.1.0",
"@tensorflow/tfjs": "3.18.0",
"@tensorflow/tfjs-backend-wasm": "^4.8.0",
"@tensorflow/tfjs-backend-webgpu": "^4.8.0",
"@tensorflow/tfjs-react-native": "0.8.0",
"@vladmandic/human": "^3.0.7",
"expo": "~45.0.6",
"expo-camera": "^12.2.0",
"expo-file-system": "^14.0.0",
"expo-gl": "^11.3.0",
"expo-gl-cpp": "^11.3.0",
"react": "17.0.2",
"react-dom": "17.0.2",
"react-native": "0.68.2",
"react-native-canvas": "^0.1.39",
"react-native-fs": "2.14.1",
"react-native-gesture-handler": "~2.2.1",
"react-native-webview": "^13.2.2"
},
"devDependencies": {
"@babel/core": "^7.9.0",
"@expo/webpack-config": "^0.15.0",
"@types/react": "~16.9.35",
"@types/react-native": "~0.63.2",
"babel-preset-expo": "~8.3.0",
"jest-expo": "~41.0.0",
"typescript": "~4.0.0"
},
"jest": {
"preset": "react-native"
},
"private": true
}
For usage issues, it is recommended to post your code as gist
For general questions, create a discussion topic
given the lack of progress on official tfjs-react-native package, i've halted any testing on react-native platform a while ago and as a result, recent versions of human may not be compatible out-of-the-box.
if you feel comfortable, install human in dev mode and change internal code where needed (rebuild is pretty fast).
otherwise, i'm not sure when this will get out of my backlog.
since react-native is not officially supported, i'll mark this as feature request.
Thank you for the quick response. I have moved over to a PWA architecture because it's a lot easier to work with Human directly on the browser. You have done an incredible work, thank you.
in that case, i'm going to close this issue as not much i can do about it right now, but feel free to report any problems you might have.
| gharchive/issue | 2023-06-26T21:34:46 | 2025-04-01T06:46:10.958947 | {
"authors": [
"Lamarcke",
"vladmandic"
],
"repo": "vladmandic/human",
"url": "https://github.com/vladmandic/human/issues/373",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2319377420 | Is Hibernate 6.5 supported?
Hi,
since in the readme 6.5 is not mentioned, does that mean it is not supported yet?
Or is the readme simply outdated and using hypersistence-utils-hibernate-63 with Hibernate 6.5 is totally fine?
PS: Thanks for this amazing library :)
The library was never tested with Hibernate 6.5, so it could either work or not.
You have to try it with Hibernate 6.5 and see if it works.
| gharchive/issue | 2024-05-27T15:15:51 | 2025-04-01T06:46:10.961066 | {
"authors": [
"N4SoftwareNinja",
"vladmihalcea"
],
"repo": "vladmihalcea/hypersistence-utils",
"url": "https://github.com/vladmihalcea/hypersistence-utils/issues/722",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1492845096 | Add to Play Store
I think it would be nice to add it on Play Store so that the easier access to the app can attract more users.
Google is not interested in free open source products, they require $25 to host them.
It pains me to say this, but I live in a country that is in conflict with its neighbors. International payments do not work for us. Therefore, all I can do for users is to publish the application code here and put the builds in all open stores.
| gharchive/issue | 2022-12-12T21:53:03 | 2025-04-01T06:46:10.962252 | {
"authors": [
"Penknife0915",
"vladpen"
],
"repo": "vladpen/cams",
"url": "https://github.com/vladpen/cams/issues/57",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2431145908 | [Bug][ROCm] The embedding layer does not support long inputs
Your current environment
8xMI300x machine using the docker image built with Dockerfile.rocm.
Versions of relevant libraries:
[pip3] mypy==1.7.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] optree==0.9.1
[pip3] pytorch-triton-rocm==3.0.0+21eae954ef
[pip3] torch==2.5.0.dev20240710+rocm6.1
[pip3] torchaudio==2.4.0.dev20240710+rocm6.1
[pip3] torchvision==0.20.0.dev20240710+rocm6.1
[pip3] transformers==4.43.2
[pip3] triton==3.0.0
[conda] No relevant packages
ROCM Version: 6.1.40093-bd86f1708
Neuron SDK Version: N/A
vLLM Version: 0.5.3.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect
🐛 Describe the bug
import torch
import torch.nn as nn
with torch.inference_mode():
    NUM_TOKENS = 128 * 1024
    HIDDEN_SIZE = 16 * 1024
    VOCAB_SIZE = 128 * 1024
    DTYPE = torch.bfloat16

    x = torch.randint(VOCAB_SIZE, (NUM_TOKENS,), dtype=torch.int64, device="cuda")
    embedding = nn.Embedding(VOCAB_SIZE, HIDDEN_SIZE, dtype=DTYPE, device="cuda")
    y = embedding(x)
    torch.cuda.synchronize()
The above script raises the following error:
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: HIP error: invalid configuration argument
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing AMD_SERIALIZE_KERNEL=3
Compile with `TORCH_USE_HIP_DSA` to enable device-side assertions.
while it works when NUM_TOKENS=32 * 1024.
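Until a fixed kernel is available, one workaround is to split the lookup so that each kernel launch stays below the grid-configuration limit. Sketched below with plain Python lists for clarity; the same slicing applies unchanged to torch tensors (`embedding(x[start:start + chunk_size])`), and the 32k default is just the size that worked in the repro above, not a documented limit:

```python
def embed_in_chunks(weight, token_ids, chunk_size=32 * 1024):
    """Embedding lookup performed in pieces of at most chunk_size tokens.

    weight: embedding table (one row per vocab entry);
    token_ids: sequence of row indices to gather.
    """
    rows = []
    for start in range(0, len(token_ids), chunk_size):
        rows.extend(weight[t] for t in token_ids[start:start + chunk_size])
    return rows
```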
@WoosukKwon I happened to fix a similar issue reported in pytorch a couple of days ago. Check this PR: https://github.com/pytorch/pytorch/pull/130994
It should be in the pytorch nightly, though with a different date.
The other follow up is the perf optimization PR: https://github.com/pytorch/pytorch/pull/131713
| gharchive/issue | 2024-07-25T23:58:46 | 2025-04-01T06:46:11.048142 | {
"authors": [
"WoosukKwon",
"hongxiayang"
],
"repo": "vllm-project/vllm",
"url": "https://github.com/vllm-project/vllm/issues/6807",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2458130603 | [RFC]: Encoder/decoder models & feature compatibility
Motivation #
There is significant interest in vLLM supporting encoder/decoder models. Issues #187 and #180, for example, request encoder/decoder model support. As a result, encoder/decoder support was recently introduced to vLLM via the following three PRs:
#4837
#4888
#4942
These three PRs make encoder/decoder model inference possible; however, they leave more to be desired in terms of (1) parity between vLLM's decoder-only & encoder/decoder request processing pipelines with respect to feature support, and (2) the number of encoder/decoder models which are supported.
The ask for the vLLM community is to contribute PRs which help bring vLLM encoder/decoder functionality to a similar level of maturity as that of vLLM's decoder-only functionality.
Proposed changes #
The support matrix below summarizes which encoder/decoder models have already been added & which features are currently compatible with the vLLM encoder/decoder pipeline, versus which features & models will require additional PRs to implement in the long-term:
| Model/feature | Model is already available/feature is already compatible with encoder-decoder? | Having this model/making this feature compatible is a long-term goal? |
|---|---|---|
| Encoder/decoder infrastructure | Yes | Yes |
| BART | Yes | Yes |
| Whisper | No | Yes |
| T5 | No | Yes |
| Other enc/dec models | No | Yes |
| Quantization | Untested | Yes |
| Multimodality | No | Yes |
| Attention backends other than Xformers (esp. flash-attn, flashinfer) | No | Yes |
| Custom attention bias support | No | Yes |
| CUDAGraph | No (Issue #7447) | Yes |
| Pipeline parallelism | No | Yes |
| Speculative decoding | No | Low-priority but nice-to-have; difficult. |
| Automatic prefix caching | No | Low-priority; difficult. |
| Sliding window | No | No |
| Chunked prefill | No | No |
| LoRA | No | No |
This RFC gives an overview of those features & models which are not compatible with encoder/decoder currently, but which should be made compatible eventually (i.e. No in the second column, Yes in the third column in the support matrix.)
Note that there are features (automatic prefix caching/sliding window/chunked prefill/LoRA) which are not long-term compatibility goals.
Background #
Before continuing, it will be helpful to review the details of the new vLLM encoder/decoder infrastructure.
It will also be helpful to review this how-to guide for adding new encoder/decoder models & improving encoder/decoder feature compatibility.
Initial goal #
Members of the vLLM contributor community identify models/features in the support matrix above, for which they will work on writing a PR.
Detailed long-term goals #
Add new models to vLLM #
Please review the how-to guide for adding new models to vLLM
See tests/models/test_bart.py for an example of an encoder/decoder model unit test. See tests/distributed/test_basic_distributed_correctness_enc_dec.py for an example of an encoder/decoder model test with TP > 1.
Add Whisper model #
Steps to add support for Whisper, a multimodal encoder/decoder speech recognition model:
Extend existing vLLM multimodality support to encoder/decoder models
Extend existing vLLM prompt processing pipeline to support audio
Port HuggingFace Whisper model to vLLM; an existing open PR for this workstream is #5964
Modify each Whisper layer, where appropriate, to support TP > 1
Add a Whisper test under tests/models/
Proposal: consider whether or not it makes sense to implement encoder/decoder multimodality, audio support, and Whisper in the same PR; that way, the Whisper model may be used to facilitate an end-to-end test of audio multimodality.
Add T5 model #
Note: T5 depends on custom attention bias being supported by at least one attention backend which also supports encoder attention & cross-attention; at time of writing, no vLLM attention backend fulfills this requirement. The vLLM XFormers backend is the only backend which supports encoder/decoder models, but neither it nor any other vLLM attention backend supports custom attention bias. (Custom attention bias is required in order to support T5 relative positional encoding.)
Steps to add support for the T5 model:
Port HuggingFace T5 model to vLLM
This includes porting over the method which computes the custom attention bias matrix for T5 relative position encoding
Modify each T5 layer, where appropriate, to support TP > 1
The custom attention bias computation must also support TP > 1
Add a T5 test to tests/models/
Note: T5 was added to an older version of vLLM in #3117 , which could be a helpful starting-point
Add other encoder/decoder models #
Review open vLLM issues on GitHub and identify other encoder/decoder models which are requested by users
Quantization #
The goal of this workstream is to make sure that quantization + encoder/decoder models is fully-tested, and to fill in any gaps (should they exist) in vLLM's support for quantized encoder/decoder models.
Steps to ensure that vLLM supports encoder/decoder models in combination with all existing vLLM quantization methods:
Identify the list of quantization methods which vLLM currently supports with decoder-only models.
Add unit tests for encoder/decoder models with all of these quantization methods.
Determine which quantization methods are currently incompatible with vLLM encoder/decoder infrastructure.
Scope out the effort involved in making these quantization methods compatible & submit a PR making the change.
vLLM encoder/decoder infrastructure should be compatible with most of the existing vLLM quantization methods, because the specialized quantization kernels are only employed for GEMM operations involving the learned weight matrices ($W_q$, $W_k$, etc.), whereas the encoder/decoder work really only modifies how the Attention(q, k, v, kv_cache) layer behaves & does not impact the learned weight matrices at all.
It is less clear whether vLLM encoder/decoder infrastructure is compatible with FP8, since a specialized quantized KV cache kernel does appear to be employed by the Attention(q, k, v, kv_cache) layer when FP8 quantization is used.
Support encoder/decoder multimodality #
Technically, vLLM already supports multimodality for models which have an "encoder" and a "decoder", e.g. Llava. However, Llava's decoder does not utilize cross-attention & the model is basically compatible with vLLM's pre-existing decoder-only infrastructure.
But critically, for encoder/decoder models with cross-attention such as Whisper vLLM does not currently support multimodality of any sort. The processing pipeline does not extract or utilize multimodal data from the input prompt, and the EncoderDecoderModelRunner has an assert which fails if the multimodal config is not None. Addressing this is what is meant by "supporting encoder/decoder multimodality".
Steps to extend existing vLLM multimodality support to encoder/decoder models:
Review existing vLLM multimodality support in the decoder-only pipeline
Scope out a plan for adding encoder/decoder multimodality support.
Propose & implement one or more multimodal prompt formats for encoder/decoder models
Integrate multimodality support into encoder/decoder processing pipeline
Remove the assertion which fails when multimodality is enabled for an encoder/decoder model (see assert_enc_dec_mr_supported_scenario() in vllm/worker/utils.py)
Add one or more unit tests with multimodal data
There are a number of multimodal encoder/decoder models which will benefit from this feature. One possibility is to add multimodality support & a multimodal model such as Whisper in the same PR, so that Whisper may be used to facilitate an end-to-end test with multimodality.
Another possibility is to implement multimodality support in its own PR.
Considerations for designing multimodal encoder/decoder prompt formats #
One approach to designing the vLLM multimodal encoder/decoder prompt formats, is to consider what we want the user experience to be for high-priority multimodal encoder/decoder models such as
Llama 3.1 multimodal
Whisper
Initial proposal for multimodal encoder/decoder prompt formats
It may be helpful to review
The non-multimodal encoder/decoder prompt formats which are currently supported by vLLM: singleton prompts (raw text prompt, TextPrompt, TokensPrompt) as well as ExplicitEncoderDecoder prompts
The multimodal decoder-only prompt formats which are currently supported by vLLM; search for multi_modal_data here and also review the vLLM documentation on multimodality
Generally speaking, in encoder/decoder models based on cross-attention, the non-text input modality is passed to the encoder as input. Conversely, any text prompt is typically passed to the decoder as a input prompt.
The following two encoder/decoder multimodal prompt formats are tentatively proposed:
Singleton TextPrompt with multi_modal_data field
vLLM will extract the multi_modal_data and pass it to the encoder module
vLLM will extract the prompt text, tokenize it and pass the token-list to the decoder (note that this is the opposite of vLLM behavior for non-multimodal prompts, where the prompt text would be passed to the encoder.)
For example passing the TextPrompt below to vLLM BART
TextPrompt(
    prompt="The rain in spain falls mainly on the",
    multi_modal_data=<multi modal data structure>,
)
results in
Encoder input: <multi modal data structure>
Decoder prompt: "The rain in spain falls mainly on the"
Singleton TokensPrompt with multi_modal_data field
vLLM will extract the multi_modal_data and pass it to the encoder module
vLLM will extract the token list and pass it unmodified to the decoder (note that this is the opposite of vLLM behavior for non-multimodal prompts, where the prompt tokens would be passed to the encoder.)
For example passing the TokensPrompt below to vLLM BART
TokensPrompt(
    prompt_tokens=[2, 0, 171, 5, 2],
    multi_modal_data=<multi modal data structure>,
)
results in
Encoder prompt: <multi modal data structure>
Decoder prompt: [2,0,171,5,2]
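Taken together, the two proposed singleton formats can be summarized by a small routing function (a sketch of the proposed semantics only, not vLLM's actual implementation; tokenization of the text prompt is elided):

```python
def route_multimodal_prompt(prompt: dict):
    """Split a singleton multimodal enc/dec prompt into encoder/decoder inputs.

    Per the proposal, multi_modal_data always goes to the encoder, while any
    text or tokens go to the decoder (the reverse of the non-multimodal case,
    where the text/tokens would be passed to the encoder).
    """
    encoder_input = prompt["multi_modal_data"]
    if "prompt_tokens" in prompt:
        decoder_input = prompt["prompt_tokens"]  # TokensPrompt: pass unmodified
    else:
        decoder_input = prompt["prompt"]         # TextPrompt: tokenized later
    return encoder_input, decoder_input
```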
It may also be worth considering whether or how to support
ExplicitEncoderDecoderPrompts with multimodality
An input prompt format which encapsulates only multimodal encoder inputs, with no associated decoder text/tokens prompt (this would result in the decoder being passed a "default" or empty prompt.)
Add support for encoder attention and cross-attention to additional backends #
At time of writing, XFormers is the only vLLM attention backend which supports encoder attention & cross-attention.
The goal of this workstream would be to extend encoder attention & cross-attention support to additional backends, the highest-priority being flash-attention and flashinfer.
Reviewing encoder attention and cross-attention support in the XFormers backend would be a good starting-point for extending support to other models.
For context on the requirements for a backend to support encoder and cross-attention, it may help to review the encoder/decoder architecture, the way that attention masks are currently constructed in the XFormers backend, and the recommended architecture for vLLM encoder/decoder models.
A summary of the key changes required for an attention backend to support encoder attention and cross-attention:
The backend's AttentionMetadata subclass must support fields for encoder sequence lengths, encoder sequence token count, cross-attention blocktables, and cross-attention slot mapping. XFormers examples:
AttentionMetadata subclass' encoder field declarations
Handle encoder & cross-attention fields in prefill_metadata() method
Handle encoder & cross-attention fields in decode_metadata() method
The forward() method of the backend implementation must accept an attn_type argument of type AttentionType, which allows choosing between encoder attention, decoder attention, or encoder/decoder cross-attention. XFormers example
The backend implementation must recognize which option has been chosen for attn_type, and adjust accordingly in terms of (1) how it utilizes attn_metadata when invoking the attention kernels (review XFormers forward() for context), and (2) the choice of causal or non-causal attention, as well the choice of attention mask shape (XFormers example).
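The three changes above can be sketched roughly as follows (field and class names are illustrative approximations of what the XFormers backend carries, not exact vLLM signatures):

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Optional

class AttentionType(Enum):
    DECODER = auto()          # decoder self-attention (causal)
    ENCODER = auto()          # encoder self-attention (non-causal)
    ENCODER_DECODER = auto()  # cross-attention: decoder queries, encoder K/V

@dataclass
class EncDecCapableMetadata:
    # Decoder-side fields that existing backends already track.
    seq_lens: List[int] = field(default_factory=list)
    # Additional fields required for encoder attention / cross-attention.
    encoder_seq_lens: Optional[List[int]] = None
    num_encoder_tokens: Optional[int] = None
    cross_block_tables: Optional[List[List[int]]] = None
    cross_slot_mapping: Optional[List[int]] = None

def uses_causal_mask(attn_type: AttentionType) -> bool:
    """Only decoder self-attention is causal; the other two modes are not."""
    return attn_type is AttentionType.DECODER
```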
Initial goals
Identify the changes required to add encoder attention & cross-attention support to flash-attention and flashinfer
PR the required changes
Remove/modify any asserts which fail if the vLLM attention backend is not XFormers
Currently, the __init__() method of EncoderDecoderModelRunner invokes a method EncoderDecoderModelRunner._maybe_force_supported_attention_backend() defined here which (1) attempts to force encoder/decoder models to use XFormers attention backend, and (2) raises an exception if the user has overridden the attention backend to be anything other than XFormers.
Long-term goals
All vLLM attention backends support encoder attention and cross-attention
Support custom attention bias #
Note: T5 takes a dependency on custom attention bias. Custom attention bias is likely complex enough to merit its own PR.
Note: custom bias support was added to PagedAttention in an older version of vLLM as part of #3117 ; given changes in vLLM since then, additional work would be required to integrate this implementation.
Custom attention bias and relative positional encoding
Attention bias refers to adding a matrix $A$ to the scaled dot-product (SDP) attention scores matrix before performing softmax, as shown below:
$$
attn(Q,K,V,A) = softmax(\frac{Q K^T + A}{\sqrt{d}})V
$$
Here, custom attention bias is understood to mean that the vLLM attention backend allows $A$ to be an arbitrary matrix, provided the tensor dimensions are commensurate with the shape of the SDP attention scores matrix. This is in contrast to the existing vLLM attention backend implementations, which can only accommodate simple block-diagonal causal or non-causal masks which are uniformly either $0$ or $-\infty$.
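As a concrete reference for what the backends would need to accept, here is the biased SDP computation in plain Python (a didactic sketch on list-of-lists matrices, following the formula above with the bias added before scaling; real backends would do this inside a fused kernel):

```python
import math

def sdp_attention_with_bias(Q, K, V, A):
    """Compute softmax((Q @ K^T + A) / sqrt(d)) @ V with arbitrary bias A.

    Q, K, V are lists of row vectors; A[i][j] is added to the raw score of
    query i attending to key j before scaling and softmax.
    """
    d = len(Q[0])
    out = []
    for i, q in enumerate(Q):
        scores = [(sum(qe * ke for qe, ke in zip(q, k)) + A[i][j]) / math.sqrt(d)
                  for j, k in enumerate(K)]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[c] for w, v in zip(weights, V))
                    for c in range(len(V[0]))])
    return out
```

Setting A[i][j] to negative infinity recovers ordinary masking, so the simple causal/non-causal masks used today are just a special case of this interface.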
There are broadly two possible approaches to custom attention bias, which do not necessarily have to be mutually-exclusive:
$A$ is a fully-materialized attention bias matrix passed to the attention backend
$A$ is computed on-the-fly by the attention kernel, using an element-wise formula for the attention bias which is fused with the $Q K^T$ and $softmax$ computations
T5 employs custom attention bias in order to implement relative positional encoding, wherein pairwise positional relationships between tokens are represented by the bias matrix. The HuggingFace Transformers T5 implementation provides an example of how the relative positional encoding matrix is computed.
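For reference, the bucketing scheme behind T5's bias matrix can be written in a few lines of scalar Python (mirroring the HuggingFace logic with its default hyperparameters; a vLLM port would need the vectorized, TP-aware equivalent):

```python
import math

def relative_position_bucket(relative_position: int,
                             bidirectional: bool = True,
                             num_buckets: int = 32,
                             max_distance: int = 128) -> int:
    """Map a signed token distance (key_pos - query_pos) to a bias bucket.

    T5's per-head bias matrix is then A[i][j] = learned_table[bucket(j - i)].
    """
    bucket = 0
    if bidirectional:
        # Half the buckets are reserved for keys to the right of the query.
        num_buckets //= 2
        if relative_position > 0:
            bucket += num_buckets
        relative_position = abs(relative_position)
    else:
        relative_position = max(-relative_position, 0)
    # Small distances get one bucket each; larger ones share log-spaced buckets.
    max_exact = num_buckets // 2
    if relative_position < max_exact:
        bucket += relative_position
    else:
        large = max_exact + int(
            math.log(relative_position / max_exact)
            / math.log(max_distance / max_exact)
            * (num_buckets - max_exact)
        )
        bucket += min(large, num_buckets - 1)
    return bucket
```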
Existing attention bias support
Currently, no vLLM attention backend fully supports passing in a custom attention bias. This is primarily due to underlying kernel limitations. For example, the xFormers memory_efficient_attention_forward kernel is the only NVIDIA-GPU-oriented kernel which permits passing an arbitrary PyTorch tensor as a materialized attention bias (via the attn_bias argument); at the time of writing, I have not investigated whether custom attention bias is supported by any of the kernels for AMD GPU, CPU, etc. Regardless, vLLM only employs xFormers memory_efficient_attention_forward for prefill; to my knowledge, none of the decode-phase kernels employed by vLLM can accept an arbitrary tensor as a custom attention bias, making custom attention bias impossible to apply end-to-end for both prefill and decode under the current vLLM implementation.
In addition to the lack of kernel-level support for custom attention bias, most vLLM backends also prevent passing a custom attention bias matrix to the underlying kernel. The exception is the XFormers backend, which accepts an attention bias via the XFormersMetadata.attn_bias attribute (however, the XFormers backend only utilizes attn_bias in the prefill phase).
Proposed methods for supporting custom attention bias
Here the following two approaches for supporting custom attention bias in vLLM are proposed:
Fully-materialized bias matrix: Modify vLLM attention backends to accept an arbitrary PyTorch tensor, passed into the backend via the AttentionMetadata.attn_bias field.
On-the-fly/fused bias matrix computation: Enable an efficient workflow whereby vLLM developers can tweak an attention kernel to compute the custom attention bias on the fly
For example: rather than computing the T5 relative position encoder bias matrix once, the attention kernel can instead fuse the element-wise bias matrix formula with the $Q K^T$ and $softmax$ computations. The attention bias matrix is never fully materialized.
FlexAttention enables fused custom attention bias computations in a FlashAttention-style kernel, using torch.compile.
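The fused approach can be shown schematically: instead of materializing A, an element-wise callback (analogous to FlexAttention's score_mod) is applied to each score as it is computed. This is a pure-Python sketch, and the linear distance penalty is just a stand-in bias formula:

```python
import math

def scores_with_fused_bias(q, K, i, score_mod):
    """Scores for query row i, with the bias applied element-wise on the fly.

    score_mod(score, i, j) -> biased score; the full bias matrix A is
    never materialized, mirroring a fused-kernel formulation.
    """
    d = len(q)
    return [score_mod(sum(a * b for a, b in zip(q, k)) / math.sqrt(d), i, j)
            for j, k in enumerate(K)]

# Stand-in element-wise bias: an ALiBi-style linear distance penalty
def distance_penalty(score, i, j):
    return score - 0.5 * abs(i - j)
```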
It may make sense to support one or both of these methods.
Note that custom attention bias support must be added on a backend-by-backend basis, because of the kernel modifications & backend logic changes required.
Initial goals for introducing custom attention bias support
Focus on a particular vLLM attention backend
Suggestion: focus on an attention backend which also supports encoder/decoder models, in order to facilitate running T5. At time of writing, XFormers is the only backend which supports encoder/decoder models, however there will likely be work on supporting encoder/decoder in additional attention backends.
Scope out the effort involved in introducing custom attention bias support to this backend
Some steps which will likely be involved in introducing custom attention bias support:
Augment attention backend's kernels to accept custom attention bias; for example, the PagedAttention kernel (for XFormers backend), the Flash-attention kernel (for the flash-attn backend), or the Flashinfer kernels (for the Flashinfer backend)
(Except for XFormers) add an attn_bias attribute to attention backend's AttentionMetadata subclass
Ensure that the attention backend passes the attn_bias attribute to both the prefill and decode kernels
Add at least two custom attention bias unit tests (for prefill & decode respectively)
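The metadata change in the second step above is small; schematically it amounts to something like the following. This is a hypothetical sketch, not the actual vLLM class — every field name here other than attn_bias is invented for illustration:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class BackendAttentionMetadata:
    """Hypothetical stand-in for a backend's AttentionMetadata subclass."""
    num_prefill_tokens: int = 0  # invented placeholder field
    num_decode_tokens: int = 0   # invented placeholder field
    # New field: arbitrary bias tensor, to be forwarded by the backend
    # to BOTH the prefill and decode kernels.
    attn_bias: Optional[Any] = None
```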
Final goals for introducing custom attention bias support
All vLLM attention backends should support custom attention bias, with unit tests
Some links which may be helpful for understanding how causal & non-causal attention masks are currently configured in vLLM:
Invocation of flash-attention for prefill in vLLM backend, using causal flag
Invocation of xFormers attention kernel for prefill in vLLM backend, using BlockDiagonalMask and BlockDiagonalCausalMask
Invocation of FlashInfer attention kernel for prefill in backend, using causal flag
Invocation of PagedAttention kernel for decode in vLLM backend
Invocation of FlashInfer kernel for decode in vLLM backend
Support CUDAGraph with encoder/decoder models
Note: this topic is being tracked by Issue #7447
Steps to support CUDAGraph with encoder/decoder models:
Scope out the effort required to support CUDAGraph with encoder/decoder models
Write a PR for CUDAGraph + encoder/decoder
Remove the assertion which fails when CUDAGraph is enabled for an encoder/decoder model (see assert_enc_dec_mr_supported_scenario() in vllm/worker/utils.py)
Support pipeline-parallelism with encoder/decoder models
Steps to support pipeline-parallelism with encoder/decoder models:
Scope out the effort required to support pipeline-parallelism with encoder/decoder models
Write a PR for pipeline-parallelism + encoder/decoder
Remove the assertion which fails when pipeline-parallelism is enabled for an encoder/decoder model (see assert_enc_dec_mr_supported_scenario() in vllm/worker/utils.py)
Support multi-step scheduling with encoder/decoder models
Note: depends on #7000 landing in order to add multi-step scheduling support; it may be helpful to review the multi-step scheduling RFC ( #6854 )
Steps to support multi-step scheduling with encoder/decoder models:
Scope out the effort required to support multi-step scheduling
EncoderDecoderModelRunner multi-step support
Write a PR for multi-step scheduling + encoder/decoder
Write at least one test of an encoder/decoder model with multi-step scheduling
Low-priority high-effort tasks
Speculative decoding
Automatic prefix caching
Here it is proposed that these features are low-priority. Adding support for speculative decoding and automatic prefix caching would require a significant amount of effort to scope out and design the implementations.
Note that adding support for either of these features would require removing the assertions which fail when speculative decoding or automatic prefix caching are enabled for an encoder/decoder model (see assert_enc_dec_mr_supported_scenario() in vllm/worker/utils.py)
Feedback Period.
Closed.
CC List.
@WoosukKwon
@robertgshaw2-neuralmagic
@mgoin
@tms
@njhill
@sroy745
@ywang96
@DarkLight1337
@js8544
Any Other Things.
No response
Looks like footnote references (i.e. [^1], [^2], etc.) are not rendered for RFCs on github, so I just edited the RFC to replace all of the footnote references with direct links.
cc @mgoin
FYI, added a section to the RFC about adding multi-step scheduling + encoder/decoder support.
@robertgshaw2-neuralmagic
Sorry for mentioning this so late. For multimodal models, there are actually two ways to apply cross-attention:
Cross-attention between text and multimodal features (e.g. Llama 3.1 multimodal)
Cross-attention between text features only (i.e. multimodal encoder with an encoder-decoder language model) (e.g. #5934)
I wonder how the current plan could handle the latter case. To keep the API consistent, we should distinguish between the above two cases internally when multimodal data is passed.
Tagging #8811 for future reference
I can start looking into T5 support starting from the custom attention bias 🤞🏻
Thanks @NickLucche - this would be greatly appreciated
Hi @NickLucche are you looking at any particular kernel for adding the custom bias needed for T5? FYI I am trying to add support to make the encoder-decoder models work with the flash_attn kernel.
@sroy745 - I think it’s probably easiest to do it with the paged attention backend, but doing in flash_attn would be the best
For us the most important is having something functional for T5 so was also thinking xformers option might be preferable/easier for the initial support.
@sroy745 @robertgshaw2-neuralmagic @njhill so xformers appears to be indeed the easiest, but it has to be paired with a PagedAttention modification too, AFAIK xformers is not used in decode because it has no concept of kv paging/block table.
FlashAttention would be super interesting to explore, but will likely require PRs to https://github.com/vllm-project/flash-attention or to the original repo. Even then I am not sure if a fully materialized bias matrix option would be accepted there, while the fused approach would be too specific to T5 to live there.
Personally I would like to start with the "less-efficient"/easier approach first (fully materialized bias) and leave optimization for later, unless there's a more mature implementation we can somehow integrate here.
Why isn't LoRA included as an LTS feature? What considerations are there behind this decision? I've recently been working on supporting LoRA for mllama, and currently my main challenge is dealing with cross-attention.
| gharchive/issue | 2024-08-09T15:03:54 | 2025-04-01T06:46:11.123890 | {
"authors": [
"DarkLight1337",
"NickLucche",
"afeldman-nm",
"jeejeelee",
"njhill",
"robertgshaw2-neuralmagic",
"sroy745"
],
"repo": "vllm-project/vllm",
"url": "https://github.com/vllm-project/vllm/issues/7366",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
[Windows] antrea-agent panics when the AntreaProxy feature is disabled
Describe the bug
antrea-agent will panic if AntreaProxy is disabled.
To Reproduce
Disable AntreaProxy and start antrea-agent. antrea-agent will fail to start with panic messages.
featureGates:
# Enable antrea proxy which provides ServiceLB for in-cluster services in antrea agent.
# It should be enabled on Windows, otherwise NetworkPolicy will not take effect on
# Service traffic.
AntreaProxy: false
The root cause is that the uplink table is not initialized in generatePipeline() when the AntreaProxy feature is disabled.
This causes antrea-agent to crash when trying to access the table (nil pointer dereference).
| gharchive/issue | 2021-02-08T03:39:37 | 2025-04-01T06:46:11.140631 | {
"authors": [
"ruicao93"
],
"repo": "vmware-tanzu/antrea",
"url": "https://github.com/vmware-tanzu/antrea/issues/1833",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
676830093 | Bump up ofnet version to support readable error message in ofnet
Use readable messages for OpenFlow errors in the log
Expose a public method to support copying Flow actions in ofnet
/test-all
With this PR, the error message printed in ofnet should be compatible with OVS errors, e.g.
error logs in ofnet:
E0811 12:10:24.683714 1 ofSwitch.go:357] Received Vendor error NXBAC_CT_DATAPATH_SUPPORT on ONFT_BUNDLE_ADD_MESSAGE message
E0811 12:10:24.689234 1 ofSwitch.go:357] Received Vendor error NXBAC_CT_DATAPATH_SUPPORT on ONFT_BUNDLE_ADD_MESSAGE message
errors in OVS logs:
2020-08-11T12:04:19.957Z|00038|connmgr|INFO|br-int<->unix#0: sending NXBAC_CT_DATAPATH_SUPPORT error reply to ONFT_BUNDLE_ADD_MESSAGE message
2020-08-11T12:04:20.104Z|00040|connmgr|INFO|br-int<->unix#0: sending NXBAC_CT_DATAPATH_SUPPORT error reply to ONFT_BUNDLE_ADD_MESSAGE message
/test-all
This is great, thanks for working on this. If it's not too late, could you add a ":" after "Received Vendor error" for readability?
Sure, I will add it in the binding.
/test-all
/test-all
/test-all
/skip-hw-offload
Ignoring test failure as it is an unrelated issue that @Dyanngg is working on
| gharchive/pull-request | 2020-08-11T12:17:40 | 2025-04-01T06:46:11.144506 | {
"authors": [
"antoninbas",
"wenyingd"
],
"repo": "vmware-tanzu/antrea",
"url": "https://github.com/vmware-tanzu/antrea/pull/1065",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1210169328 | Docs: Packaging process docs have broken image links
Bug Report
Images under the following page: https://tanzucommunityedition.io/docs/v0.11/designs/package-process/
are broken
Expected Behavior
See images in docs page
Steps to Reproduce the Bug
Go to the package process docs page and see no images. Open developer console to see 404 links
May be related, or have a similar fix required: https://github.com/vmware-tanzu/community-edition/issues/4084
| gharchive/issue | 2022-04-20T20:16:55 | 2025-04-01T06:46:11.147813 | {
"authors": [
"jpmcb",
"stmcginnis"
],
"repo": "vmware-tanzu/community-edition",
"url": "https://github.com/vmware-tanzu/community-edition/issues/4126",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
806791814 | Update rbac/v1beta1 -> rbac/v1 in deployment files
Fixes #60
LGTM too.
| gharchive/pull-request | 2021-02-11T22:25:41 | 2025-04-01T06:46:11.148765 | {
"authors": [
"christianang",
"tylerschultz"
],
"repo": "vmware-tanzu/cross-cluster-connectivity",
"url": "https://github.com/vmware-tanzu/cross-cluster-connectivity/pull/79",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
173080825 | ansible playbooks for deploy
would it be possible to get some ansible roles for the deployment?
or a guide on how to deploy without using the scripts
@casualjim
Are you trying to use ansible to call docker to deploy? This is easy as the docker-compose.yml is self-explanatory.
Or are you asking for using ansible to deploy without docker, I believe this is doable, but we haven't tried that, so maybe difficult.
memo:
https://github.com/supervised-io/machines/blob/dockerRegistryAnsible/terraform/playbooks/harbor.yml
@casualjim
Did you work on it already? I'd like to write an ansible galaxy role. If someone already started it, I needn't start from the beginning.
yeah we have something in the repo linked by @reasonerjt
@casualjim
The link as marked as memo is 404 for me.
@reasonerjt
Could you open it to public for this link: https://github.com/supervised-io/machines/blob/dockerRegistryAnsible/terraform/playbooks/harbor.yml ?
It gives me 404 now also...
@reasonerjt & @casualjim
If this link is broken, could you please upload the code from your fork, if you did that before?
ansible role for harbor
tasks/main.yml
---
- name: ensure data folders
file: path=/var/lib/harbor/{{ item }} mode=0755 state=directory
with_items:
- database
- logs
- job_logs
- registry
- name: ensure config folder
file: path=/etc/harbor/ mode=0755 state=directory
- name: copy config files
copy: src=config/{{ item }} dest=/etc/harbor mode=0644
with_items:
- db
- jobservice
- nginx
- registry
- ui
- name: add app.config
template: src=app.conf dest=/etc/harbor/ui/app.conf mode=0644
- name: install docker registry
docker_container:
name: registry
image: library/registry:2.5.0
restart_policy: always
volumes:
- "/var/lib/harbor/registry:/storage"
- "/etc/harbor/registry:/etc/registry/"
env:
GODEBUG: "netdns=cgo"
ports:
- "5001:5001"
command: "serve /etc/registry/config.yml"
- name: install mysql
docker_container:
name: mysql
image: jcali/mysql
restart_policy: always
env:
MYSQL_ROOT_PASSWORD: root123
volumes:
- /var/lib/harbor/database:/var/lib/mysql
- name: install ui
docker_container:
name: ui
image: jcali/ui
restart_policy: always
links:
- "mysql:mysql"
env:
MYSQL_HOST: mysql
MYSQL_PORT: 3306
MYSQL_USR: root
MYSQL_PWD: "{{harbor_db_pass}}"
REGISTRY_URL: http://registry:5000
UI_URL: http://ui
CONFIG_PATH: /etc/ui/app.conf
HARBOR_REG_URL: "{{inventory_hostname}}"
HARBOR_ADMIN_PASSWORD: "{{harbor_admin_pass}}"
HARBOR_URL: http://{{inventory_hostname}}
AUTH_MODE: db_auth
LDAP_URL: ldaps://ldap.mydomain.com
LDAP_BASE_DN: "uid=%s,ou=people,dc=mydomain,dc=com"
UI_SECRET: "{{harbor_ui_secret}}"
SELF_REGISTRATION: "on"
USE_COMPRESSED_JS: "on"
LOG_LEVEL: debug
GODEBUG: "netdns=cgo"
EXT_ENDPOINT: http://{{inventory_hostname}}
TOKEN_URL: http://ui
VERIFY_REMOTE_CERT: "on"
TOKEN_EXPIRATION: 30
ports:
- "8000:80"
volumes:
- /etc/harbor/ui/app.conf:/etc/ui/app.conf
- /etc/harbor/ui/private_key.pem:/etc/ui/private_key.pem
- name: install jobservice
docker_container:
name: jobservice
image: jcali/jobservice
restart_policy: always
links:
- "mysql:mysql"
env:
MYSQL_HOST: mysql
MYSQL_PORT: 3306
MYSQL_USR: root
MYSQL_PWD: "{{harbor_db_pass}}"
UI_SECRET: "{{harbor_ui_secret}}"
CONFIG_PATH: /etc/jobservice/app.conf
REGISTRY_URL: http://registry:5000
VERIFY_REMOTE_CERT: "on"
MAX_JOB_WORKERS: 3
LOG_LEVEL: debug
LOG_DIR: /var/log/jobs
GODEBUG: "netdns=cgo"
EXT_ENDPOINT: http://{{inventory_hostname}}
TOKEN_URL: http://ui
volumes:
- /var/lib/harbor/job_logs:/var/log/jobs
- /etc/harbor/jobservice/app.conf:/etc/jobservice/app.conf
- name: Create certs directory
file: path=/etc/nginx/cert mode=0755 state=directory
- name: add certificate
copy: dest="/etc/nginx/cert/celloproject.io.crt" content="{{ celloproject_io_certificate }}"
- name: add key
copy: dest="/etc/nginx/cert/celloproject.io.key" content="{{ celloproject_io_private_key }}"
- name: set up nginx
docker_container:
name: nginx
image: library/nginx:1.9.0
restart_policy: always
ports:
- "80:80"
- "443:443"
links:
- "mysql:mysql"
- "registry:registry"
- "ui:ui"
volumes:
- "/etc/harbor/nginx:/etc/nginx/"
- "/etc/nginx/cert/:/etc/nginx/cert/"
templates/app.conf
appname = {{ harbor_appname }}
runmode = {{ harbor_runmode }}
[lang]
types = en-US|zh-CN
names = en-US|zh-CN
[dev]
httpport = {{ harbor_dev_httpport }}
[mail]
host = {{ smtp_host }}
port = {{ smtp_port }}
username = {{ smtp_username }}
password = {{ smtp_password }}
from = {{ email_from }}
ssl = false
this is most of the work, can't share the repo publicly because it requires all kinds of corp hoops.
Ended up here after looking at the installation instructions and the intstall.sh, which don't look very idempotent (yet?).
Will try out the above to see how far that gets me.
In the foreseeable future, three deployment approaches will be supported:
docker-compose: For quickly deployment on single host.
bosh-release/tile: For product integration.
helm chart: For deployment on top of k8s cluster.
So I'm closing this issue as won't-fix; any interest in maintaining another form of installation would be appreciated.
I have started this: https://github.com/nicholasamorim/ansible-role-harbor
It does some things like install (obviously) but also creates users and projects. It also allows you to change some things like the default Redis settings, in case the user already has another docker instance of redis running and so on.
I'd love to get input on it to make it better as I'm a novice to Harbor.
| gharchive/issue | 2016-08-24T23:21:02 | 2025-04-01T06:46:11.190269 | {
"authors": [
"SydOps",
"casualjim",
"nicholasamorim",
"reasonerjt",
"senorsmile"
],
"repo": "vmware/harbor",
"url": "https://github.com/vmware/harbor/issues/716",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
273376445 | Refine the Dockerfile
Refine the Dockerfile to remove temporary workarounds.
Also fixes #3587, to make sure the configuration files of rsyslog can be
read by uid 10000.
Coverage increased (+0.04%) to 56.247% when pulling 6d7c028729809568921fcecce990572fbe6c35a8 on reasonerjt:dockerfile-refine into fa83b3a31e9a6c9917a18845dd833a9bdd107c78 on vmware:master.
| gharchive/pull-request | 2017-11-13T10:18:57 | 2025-04-01T06:46:11.193365 | {
"authors": [
"coveralls",
"reasonerjt"
],
"repo": "vmware/harbor",
"url": "https://github.com/vmware/harbor/pull/3605",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
256868206 | Octopus - E2E tests
Add E2E integration test support for multi-VC feature.
Closing since it is a duplicate issue. The E2E tests are tracked in issues# 268 and 267
| gharchive/issue | 2017-09-11T23:07:55 | 2025-04-01T06:46:11.194363 | {
"authors": [
"BaluDontu",
"SandeepPissay"
],
"repo": "vmware/kubernetes",
"url": "https://github.com/vmware/kubernetes/issues/269",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1264993828 | Suspected code error in splinterdb_iterator_init(), results in seg-fault and / or unpredictable behaviour in iterator init() logic
Describe the bug
This issue was discovered while developing C-sample programs under PR #399. An attempt was made to extract out a stand-alone repro with a test-case.
The following simple test case results in an assertion failure from trunk_range_iterator_init() -> trunk_find_pivot().
The test is simply doing very basic configuration of SplinterDB, then splinterdb_create(), followed by splinterdb_iterator_init. Stack is as follows:
#2 0x00007ffff7fa3645 in platform_assert_false (filename=0x7ffff7fad9db "src/trunk.c",
linenumber=1373, functionname=0x7ffff7fb1ab0 <__FUNCTION__.9172> "trunk_find_pivot",
expr=0x7ffff7fadbb2 "cmp <= 0", message=0x7ffff7fad3aa "")
at src/platform_linux/platform.c:335
335 abort();
(gdb)
#3 0x00007ffff7f79c5b in trunk_find_pivot (spl=0x5555558082c0, node=0x5555557a8e80,
key=0x555555dc0f88 "", comp=less_than_or_equal) at src/trunk.c:1373
1373 debug_assert(cmp <= 0);
(gdb)
#4 0x00007ffff7f88da2 in trunk_range_iterator_init (spl=0x5555558082c0,
range_itor=0x555555dc0b40, min_key=0x7fffffffe590 "", max_key=0x0,
num_tuples=18446744073709551615) at src/trunk.c:5768
5768 trunk_find_pivot(spl, node, range_itor->min_key, less_than_or_equal);
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007ffff7d0b859 in __GI_abort () at abort.c:79
#2 0x00007ffff7fa3645 in platform_assert_false (filename=0x7ffff7fad9db "src/trunk.c",
linenumber=1373, functionname=0x7ffff7fb1ab0 <__FUNCTION__.9172> "trunk_find_pivot",
expr=0x7ffff7fadbb2 "cmp <= 0", message=0x7ffff7fad3aa "")
at src/platform_linux/platform.c:335
#3 0x00007ffff7f79c5b in trunk_find_pivot (spl=0x5555558082c0, node=0x5555557a8e80,
key=0x555555dc0f88 "", comp=less_than_or_equal) at src/trunk.c:1373
#4 0x00007ffff7f88da2 in trunk_range_iterator_init (spl=0x5555558082c0,
range_itor=0x555555dc0b40, min_key=0x7fffffffe590 "", max_key=0x0,
num_tuples=18446744073709551615) at src/trunk.c:5768
#5 0x00007ffff7f66e7a in splinterdb_iterator_init (kvs=0x55555557c040, iter=0x7fffffffe640,
start_key=...) at src/splinterdb.c:822
#6 0x000055555555c66d in ctest_splinterdb_quick_test_custom_cmp_iterator_init_bug_run (
data=0x55555556de80 <ctest_splinterdb_quick_test_custom_cmp_iterator_init_bug_data>)
at tests/unit/splinterdb_quick_test.c:847
The code fragment where this fails is this:
1350 static inline uint16
1351 trunk_find_pivot(trunk_handle *spl,
[...]
1366 if (size == 1) {
1367 cmp = trunk_key_compare(spl, trunk_get_pivot(spl, node, 0), key);
1368 switch (comp) {
1369 case less_than:
1370 debug_assert(cmp < 0);
1371 return 0;
1372 case less_than_or_equal:
1373 debug_assert(cmp <= 0);
1374 return 0;
Under the debugger we find:
(gdb) p cmp
$1 = 24
Reproduction steps
1. Unit-test being developed: something along the lines of:
CTEST2(splinterdb_quick, test_custom_cmp_iterator_init_bug)
{
// We need to reconfigure Splinter with user-specified data_config
// Tear down default instance, and create a new one.
splinterdb_close(&data->kvsb);
data->cfg.data_cfg = test_data_config;
// Assert will trip if you enable this set.
data->cfg.data_cfg->key_size = 24;
data->cfg.data_cfg->min_key_length = 24;
data->cfg.data_cfg->max_key_length = 24;
int rc = splinterdb_create(&data->cfg, &data->kvsb);
ASSERT_EQUAL(0, rc);
const int num_inserts = 5;
rc = insert_some_keys(num_inserts, data->kvsb);
ASSERT_EQUAL(0, rc);
splinterdb_iterator *it = NULL;
rc = splinterdb_iterator_init(data->kvsb, &it, NULL_SLICE);
ASSERT_EQUAL(0, rc);
bool iter_valid = splinterdb_iterator_valid(it);
ASSERT_TRUE(iter_valid);
Expected behavior
Should not get any assertions
Even with an attempted fix in splinterdb_iterator_init() to correct the handling of the on-stack start_key_buffer, the assertion on iter_valid in the unit test trips unexpectedly: the iterator comes up as "invalid" whereas it should have been "valid".
The unit test under development should pass cleanly for all combinations.
Additional context
No response
Fixed by this commit:
c3e6952 gapisback 24 minutes ago Thu, 09-Jun-2022, 05:51:32 PM (Authored: Thu, 09-Jun-2022, 05:51:32 PM)
(#419) Fix bug in splinterdb_iterator_init() when min_key is configured.
| gharchive/issue | 2022-06-08T16:17:34 | 2025-04-01T06:46:11.225888 | {
"authors": [
"gapisback"
],
"repo": "vmware/splinterdb",
"url": "https://github.com/vmware/splinterdb/issues/419",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
290805247 | 期货期权,此处下限应为(k-f)exp(-rT)。
Pricing目录下black文件,第115行
已修复
| gharchive/issue | 2018-01-23T11:40:13 | 2025-04-01T06:46:11.254557 | {
"authors": [
"EasyBasket",
"vnpy"
],
"repo": "vnpy/vnpy",
"url": "https://github.com/vnpy/vnpy/issues/710",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
493676823 | Is Ukrainian to ASCII correct?
Example: Is this correct?
"діти" -> "diti"
Can a native speaker, please check the character-replacement, thanks.
https://github.com/voku/portable-ascii/blob/master/src/voku/helper/data/ascii_by_languages.php#L538
Hello, the correct transliteration for "діти" is "dity". Please check my PR with updates according to the official Ukrainian transliteration docs https://zakon.rada.gov.ua/laws/show/55-2010-п?lang=en#Text
#59
Also, I have a question.
In Ukrainian, the transliteration of some letters ('ю', 'я', 'Ї', etc.) is different if it is the first letter in a word.
For example:
'яма' -> 'yama'
'пісня' -> 'pisnia'
The letter 'я' may be 'ya' or 'ia' depending on the letter position.
Do you think it's possible to implement this feature? Might it be useful for other languages?
| gharchive/issue | 2019-09-14T23:30:59 | 2025-04-01T06:46:11.325538 | {
"authors": [
"Andr1yk0",
"voku"
],
"repo": "voku/portable-ascii",
"url": "https://github.com/voku/portable-ascii/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
464131034 | Update charts
Changes are below:
Support deleting admission job when job completed.
Support deleting admission config when re-install charts.
@k82cn
/lgtm
/approve
/lgtm
/approve
| gharchive/pull-request | 2019-07-04T08:20:25 | 2025-04-01T06:46:11.342441 | {
"authors": [
"TommyLike",
"asifdxtreme",
"k82cn"
],
"repo": "volcano-sh/charts",
"url": "https://github.com/volcano-sh/charts/pull/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
265265906 | Setting lookup filterValue programmatically
Hi,
I set up a lookup Editor in one of my forms. This lookup should be filtered by a field of my form. I ended up with a variant of the solution from Issue #2440 .
protected changeInProcess = false; // used to avoid the second change event triggered by the first change event
protected lastValue = undefined; // used to store the original selected value to restore it after handling all recursive change events
constructor() {
super();
this.form.EquipmentAliasId.change(e => {
if (!this.changeInProcess) {
this.changeInProcess = true;
this.lastValue = e.target.value;
this.form.EquipmentAliasId.filterValue = this.form.EquipmentId == null ? "@@@@@@" : parseInt(this.form.EquipmentId.value);
this.changeInProcess = false;
}
this.form.EquipmentAliasId.value = this.lastValue;
    });
}
Obviously this is more a workaround than a solution.
Is there any other event I can use than the change event on the property with the lookupeditor. This would be a little bit nicer because it avoids recursive calls of the event itself.
Hi LarsMengel,
do I understand correct that you have one field (lets Name it Field1) in the form and a Change within this field1 should update the lookupEditor on EquipmentAliasId?
If yes, then I don't understand why you trigger on the EquipmentAliasId.change Event instead on the Field1.change Event.
Otherwise please add a screenshot with the fields involved :-)
With Kind regards,
John
Hi John,
it's an edit dialog and it's opened from a grid view by clicking an edit link. My first idea was to use change event on equipmentid (Field1) as you suggested. But it's never called during the opening process of the edit dialog.
To tell the whole story - there is a grid view showing items of some kind of equipment. Each equipment item has an Id and a unique name. Additionally the user has the possibility to create aliases for each equipment item and to set a current alias property. I make use of two tables: Equipment and EquipmentAlias. EquipmentAlias contains a list of aliases bound to an equipment item. When the Equipment is edited the alias lookup has to be filtered by the current equipmentId because the user shouldn't be able to see aliases of other equipment items.
Unfortunately I wasn't able to reach this behaviour using the cascade or filter functionallity of the lookup editor attribute.
Ok, it took me a while but eventually I ended up overwriting the afterLoadEntity methode:
protected afterLoadEntity() {
super.afterLoadEntity();
if (this.entity.EquipmentId != null) {
this.form.EquipmentAliasId.filterValue = this.entity.EquipmentId;
}
else {
this.form.EquipmentAliasId.filterValue = "@|\?"; // we don´t want to see any items for a new Equipment
}
if (this.entity.EquipmentAliasId != null) {
this.form.EquipmentAliasId.value = this.entity.EquipmentAliasId.toString();
}
}
| gharchive/issue | 2017-10-13T12:04:22 | 2025-04-01T06:46:11.358720 | {
"authors": [
"JohnRanger",
"LarsMengel"
],
"repo": "volkanceylan/Serenity",
"url": "https://github.com/volkanceylan/Serenity/issues/2755",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
429163335 | Show orders grid in orders form
Hello,
Is there any way I can show an orders grid in the order form? I want to show all the previous records present in the orders table as a grid in the order form.
master detail ?
or just instantiate grid in form
No master detail. I want to show grid of same entity on form.
@TechAspirant ,
is the grid just to be statically showed or does it need to have some interaction like sorting, filtering and "opening" a different form instance when clicking on a row/editlink in the grid?
Because your requirement more or less seems to technically resemble what is the entityGriddialog in paid StartSharp (see demo here: https://serenity.is/demo/AdvancedSamples/EntityGridDialog#).
With kind regards,
John
| gharchive/issue | 2019-04-04T09:05:58 | 2025-04-01T06:46:11.362087 | {
"authors": [
"JohnRanger",
"TechAspirant",
"ga5tan"
],
"repo": "volkanceylan/Serenity",
"url": "https://github.com/volkanceylan/Serenity/issues/4375",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
144513757 | EntitySqlHelper
GetFromReader throws "Unable to cast object of type 'System.Guid' to type 'System.String'" when the primary key is a Guid.
Your field is string while it should be of type GUID
The SQL Server column type is uniqueidentifier; I tried both StringField and GuidField for the field, and both give this error.
| gharchive/issue | 2016-03-30T09:14:38 | 2025-04-01T06:46:11.363273 | {
"authors": [
"107295472",
"volkanceylan"
],
"repo": "volkanceylan/Serenity",
"url": "https://github.com/volkanceylan/Serenity/issues/482",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
156906365 | Feature request for InPlaceSearch (similar to InPlaceAdd)
Since the lookup editor shows only the NameField, it is sometimes difficult to find an item by the NameField alone.
I think it would help to have a button beside the lookup editor;
clicking this button would pop up a grid with filtering enabled,
and the value would be set after selecting a record and closing the popup.
Thanks for your work. This is really awesome framework.
You can create a new editor based on the lookup editor, e.g. an AdvancedSearchLookupEditor, and make a pull request :smile:
You may use a combination of fields as name field. I don't think opening another dialog to select something is a good idea. But might add a sample later.
| gharchive/issue | 2016-05-26T05:28:28 | 2025-04-01T06:46:11.365063 | {
"authors": [
"VictorTomaili",
"tvldev02",
"volkanceylan"
],
"repo": "volkanceylan/Serenity",
"url": "https://github.com/volkanceylan/Serenity/issues/729",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
85595267 | How to use this.logger instead of console.log
Hello,
I tried to use the "logger" option in the settings, as shown in the constructor docs in the drone.js file:
/**
Constructs a new RollingSpider
@param {Object} options to construct the drone with:
{String} uuid to connect to. If this is omitted then it will connect to the first device starting with 'RS_' as the local name.
logger function to call if/when errors occur. If omitted then uses console#log
@constructor
*/
But I could not.
I tried:
var mydrone = new RollingSpider({uuid : "RS_R112233"}, logger);
or
var mydrone = new RollingSpider({uuid : "RS_R112233", logger = true});
Do you have a solution maybe?
Best regards
Here is an example:
var mydrone = new RollingSpider({uuid: 'RS_R112233', logger: console.log});
Waoou, excuse me, I had read "If omitted then uses console#log", so I thought I didn't have to pass anything as a parameter..
Thanks you
Agh will fix the docs! Thanks!
| gharchive/issue | 2015-06-05T18:08:10 | 2025-04-01T06:46:11.377673 | {
"authors": [
"shiva02",
"voodootikigod"
],
"repo": "voodootikigod/node-rolling-spider",
"url": "https://github.com/voodootikigod/node-rolling-spider/issues/41",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1505415668 | Does not work as part of Blocks
We use a number of fields to generate templated text in our current CMS. I was looking at leveraging different blocks as a way to allow adding 1 of a few options for producing the templated text. However, I found that this computed fields plugin does not work with blocks.
I tried adding all the fields feeding into the calculated field, plus the calculated field itself, then added that to another model, and there is a console error about the field not being available.
Next, I tried to reference the fields in the block from a calculated field directly on the model, but that didn't have access to the block's fields either.
Example:
Block
Calculated field:
return `${custom_shipping_text} | ${shipping_price}`
Error:
@velomovies thanks a bunch for the reply. I was hoping there would be a workaround, but didn't quite connect these dots. This might be worth linking to from the readme.
@velomovies now that I played around with it a bit, there was one thing I'm not quite following. It seems like the solution you described above, the template portion (const fieldPath = ..... const shippingPrice), is generalizable, isn't it? In other words, could the plugin not automatically take this approach to make each field in a block available to the calculated field(s) in the block?
@rothnic Good point! Now I think about it, that should indeed be doable.
So we will do this when the field is in a block: we make a new variable, i.e. thisBlock, and then you can do thisBlock.[dataYouNeed] (e.g. thisBlock.custom_shipping_text).
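For illustration, a computed field built on that idea could look roughly like this. This is a sketch only: thisBlock is the variable name proposed in this thread, not a shipped plugin API, and the field names come from the example in this issue.

```javascript
// Sketch of a computed field inside a block, assuming the plugin exposes
// the block's sibling fields through a `thisBlock` object as proposed above.
function computeShippingText(thisBlock) {
  // Fall back to empty strings so a half-filled block still renders.
  const text = thisBlock.custom_shipping_text || '';
  const price = thisBlock.shipping_price || '';
  return `${text} | ${price}`;
}

// Example with a block whose fields mirror the ones in this issue:
console.log(computeShippingText({
  custom_shipping_text: 'Free shipping over $50',
  shipping_price: '$4.99',
}));
// → Free shipping over $50 | $4.99
```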
| gharchive/issue | 2022-12-20T22:56:53 | 2025-04-01T06:46:11.384558 | {
"authors": [
"rothnic",
"velomovies"
],
"repo": "voorhoede/datocms-plugin-computed-fields",
"url": "https://github.com/voorhoede/datocms-plugin-computed-fields/issues/23",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
585804800 | Order detail does not display the product name
Both the admin page and the user-facing page show a number instead of the product name
Fixed
CommitId 935c9efb11102e209983217980f231c9fc471622
| gharchive/issue | 2020-03-22T20:26:54 | 2025-04-01T06:46:11.385654 | {
"authors": [
"vormix"
],
"repo": "vormix/PHP-Ecommerce",
"url": "https://github.com/vormix/PHP-Ecommerce/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2016949597 | Missing license on this project
Hey!
Is it possible to select a license for this project? Because if no license is selected, even though it's published on GitHub, the code is basically all rights reserved and not really usable for open source. Which I think was not the intention ;)
Thanks!
Tom
Any preferred license form?
Good question ;)
The most simple one is the MIT license, if you want to keep it completely open to everything:
https://choosealicense.com/licenses/mit/
Alternative is the more complex GPLv3 which has a few restrictions compared to MIT:
https://choosealicense.com/licenses/gpl-3.0/
Up to you, since you invested the time and energy to create that nice piece of software. Thanks for that by the way :)
Tom
added MIT license file
| gharchive/issue | 2023-11-29T16:17:41 | 2025-04-01T06:46:11.388720 | {
"authors": [
"twoellert",
"vortex314"
],
"repo": "vortex314/serial2mqtt",
"url": "https://github.com/vortex314/serial2mqtt/issues/59",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1337866411 | [Front-End] Sign Up page
Description
Get our sign up page working. Sign Up form should have the following fields and requirements
Username unique
Email unique
Password min-length 8
New users should automatically be signed in and given a JWT token
Sign up page is currently there. Not much functionality yet.
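A minimal sketch of the validation rules listed above. The in-memory sets stand in for real database uniqueness queries, and all names are illustrative rather than the project's actual code.

```javascript
// Hypothetical sign-up validation matching the requirements in this issue:
// unique username, unique email, password min-length 8.
const existingUsernames = new Set(['alice']);
const existingEmails = new Set(['alice@example.com']);

function validateSignUp({ username, email, password }) {
  const errors = [];
  if (existingUsernames.has(username)) errors.push('Username must be unique');
  if (existingEmails.has(email)) errors.push('Email must be unique');
  if (!password || password.length < 8) errors.push('Password min-length 8');
  return errors;
}

// A valid submission produces no errors:
console.log(validateSignUp({
  username: 'bob',
  email: 'bob@example.com',
  password: 'secret123',
}));
// → []
```

On success a real handler would then create the user and issue the JWT, as the requirements describe.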
| gharchive/issue | 2022-08-13T05:41:37 | 2025-04-01T06:46:11.391645 | {
"authors": [
"carlosrrdev",
"thejake989"
],
"repo": "votingwithfriends/votingwithfriends",
"url": "https://github.com/votingwithfriends/votingwithfriends/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1686214288 | unclear error message
When the device file is not found, it just says "Error: No such file or directory", but it should say that the device file is not found, and show the filename of the device file.
fixed in this PR: https://github.com/vouch-opensource/mcumgr-client/pull/6
| gharchive/issue | 2023-04-27T06:41:53 | 2025-04-01T06:46:11.405416 | {
"authors": [
"Frank-Buss"
],
"repo": "vouch-opensource/mcumgr-client",
"url": "https://github.com/vouch-opensource/mcumgr-client/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
105485040 | Mysite
Updating logo for mystatesman. I've not created a pull request from a forked project like this, so Vox if you see this, I'm sorry. It's for our statesman version.
this was a mistake ... meant to merge with our version.
| gharchive/pull-request | 2015-09-08T22:55:25 | 2025-04-01T06:46:11.406295 | {
"authors": [
"critmcdonald"
],
"repo": "voxmedia/meme",
"url": "https://github.com/voxmedia/meme/pull/29",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
271334823 | Add task to release module as "GitHub release"
If I understood @bastelfreak's talk correctly, it is intended to add a task to this module to automatically push GitHub releases for modules.
I just prototyped such a task based on the github_api gem:
https://github.com/leoarnold/puppet-cups/blob/master/rakelib/github.rake
Feel free to use.
That looks like an awesome place to begin. Thanks!
Thanks for this @leoarnold, I will have a look soon and try to implement it.
| gharchive/issue | 2017-11-06T01:28:59 | 2025-04-01T06:46:11.430700 | {
"authors": [
"bastelfreak",
"ekohl",
"leoarnold"
],
"repo": "voxpupuli/voxpupuli-release-gem",
"url": "https://github.com/voxpupuli/voxpupuli-release-gem/issues/17",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
175799300 | Add the ability to automatically replace the location in WSDL file using info from incoming requests
I'm a little bit shocked node-soap is not doing it.
We may need to make it optional, since the server might be behind a reverse proxy.
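The idea can be sketched as a simple rewrite of the soap:address location in the served WSDL, using the host from the incoming request. This is illustrative only: it is not node-soap's actual implementation, and real WSDLs may contain several ports to rewrite.

```javascript
// Replace the soap:address location in a WSDL string with one built from
// the incoming request's host and the service path.
function rewriteWsdlLocation(wsdlXml, requestHost, path) {
  return wsdlXml.replace(
    /(<soap:address\s+location=")[^"]*(")/,
    `$1http://${requestHost}${path}$2`
  );
}

const wsdl = '<soap:address location="http://localhost:8000/soap"/>';
console.log(rewriteWsdlLocation(wsdl, 'api.example.com', '/soap'));
// → <soap:address location="http://api.example.com/soap"/>
```

Making this optional, as discussed above, would let deployments behind a reverse proxy keep the location from the WSDL file untouched.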
Coverage decreased (-0.03%) to 92.666% when pulling 6d182801fa423ee333cfd2e583a2a7a3bc567130 on haocenxu:master into 57c64f573a6e5f4ecdf46982a765231a1ca74672 on vpulim:master.
Coverage decreased (-0.03%) to 92.666% when pulling 7d2f129ea6cc43a0c28c3806ad037fa7539151cf on haocenxu:master into 57c64f573a6e5f4ecdf46982a765231a1ca74672 on vpulim:master.
| gharchive/pull-request | 2016-09-08T16:25:55 | 2025-04-01T06:46:11.445813 | {
"authors": [
"coveralls",
"haocenxu"
],
"repo": "vpulim/node-soap",
"url": "https://github.com/vpulim/node-soap/pull/875",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1201162632 | tips
1. Table names: suggest also appending them after the table name; you can refer to chiner's style.
2. Lag: operations are not smooth, and there is noticeable lag after multi-select operations.
3. This feels like a good tool, but a lot still needs to be improved. Keep it up!
Thanks for your suggestions.
The docker latest version already displays the comment info after the table name.
| gharchive/issue | 2022-04-12T06:14:01 | 2025-04-01T06:46:11.448787 | {
"authors": [
"lovebetterworld",
"vran-dev"
],
"repo": "vran-dev/databasir",
"url": "https://github.com/vran-dev/databasir/issues/86",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
124633153 | Better mobile integration (allows text input)
Before, the virtual keyboard would not show up when a text field was active; now objects attempting to grabKeyboardInput will also pull up the virtual keyboard on mobile.
This has been tested on LOVE stable OSX as well as LOVE master iOS.
The one downside I see of this change is the possibility that a user of SUIT is already using love.keyboard.setTextInput(); however, I'm not sure why they would be.
Looks good, thanks!
This is useful to do on desktop as well.
@vrld would it be possible to get the x/y/w/h of the current input when grabbing keyboard? Properly dealing with textedit (composition) events depends on it.
Right now this is possible, but has to be handled by the host application, e.g.:
local x,y,w,h = layout.row()
local state = suit.Input(input, x,y,w,h)
if suit.isActive(id) and suit.hasKeyboardFocus(id) then
-- just grabbed focus.
end
Would it help if the widget reports its box (x,y,w,h), and whether it grabbed focus, in the return state?
Also, I am admittedly clueless about this: Why do you need the widget area when the widget grabs the keyboard focus? I assume so you can focus a camera on the input area?
Nothing game logic related.
CJK IME's need to know where the text box (from the editing cursor to the end of the area, ideally) is on screen in order to position the candidate list appropriately. You need to tell it the region to show up in when it is focused and when the contents change in any way.
(more info here: https://wiki.libsdl.org/Tutorials/TextInput, if the basics are doable I'm more than willing to do what's needed to get the details right)
The more you know! This should be done library-side (in input.lua), then. Pull requests are very welcome ;)
| gharchive/pull-request | 2016-01-03T06:44:09 | 2025-04-01T06:46:11.453040 | {
"authors": [
"WetDesertRock",
"shakesoda",
"vrld"
],
"repo": "vrld/SUIT",
"url": "https://github.com/vrld/SUIT/pull/24",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2008363594 | Disable web access
Add option to disable web access or using system prompts.
The web access is considered as plugin now.
I'm already searching for methods :)
Actually, just need to add "nosearchall" to options in enums
Hi @gustavosmendes1, do you mean that providing this extra flag prevents Bing Chat/Copilot from searching the web?
I will look into it as well and possibly push a release for this in the weekend!
Exactly.
Thanks for the investigation! I will have some time later today/tomorrow to implement and do some testing!
@gustavosmendes1 Latest version has search parameter on ask method!
| gharchive/issue | 2023-11-23T14:42:26 | 2025-04-01T06:46:11.476531 | {
"authors": [
"gustavosmendes1",
"vsakkas"
],
"repo": "vsakkas/sydney.py",
"url": "https://github.com/vsakkas/sydney.py/issues/116",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
440031854 | Improve/configure support of NX Workspace
[x] I'm sure this issue is not a duplicate.
Hi... I'm working with an Nx project with some Angular and some NestJS apps; at least for now, I don't have React apps...
I'm getting this message...
Is it possible to set up which folders are Angular apps and which ones are NestJS, to get the proper icons in each of them?
See https://github.com/vscode-icons/vscode-icons/issues/1925#issuecomment-476273935
Also, duplicate of #1939 but who cares.
@JimiC well... it is not a duplicate of #1939; I'm just asking about the possibility of configuring the icon preset at a folder-scope level instead of the global workspace, not about a restart loop 😄...
I think this issue should be open again
I corrected my comment about the duplicate. The reference to the comment I made in #1925 is your answer.
@gperdomor I like to apologize for my bluntness yesterday. I had an issue that got on my nerves at the time but this is not an excuse to be blunt to others.
To the point, vscode has tech limitations when it comes to icons themes. You see, it does not have any mechanism to determine the icon to use other than the file extension or the filename itself (and the languageid when it comes to 3rd party vscode extensions).
In NX Workspace projects, Angular and Nest file extensions, for some files, are the same and therefore we can enable only one or the other icon set. Never both. The only trick I could find was explained in https://github.com/vscode-icons/vscode-icons/issues/1925#issuecomment-478358795.
I hope you got the explanation you were seeking and forgive me.
| gharchive/issue | 2019-05-03T12:52:05 | 2025-04-01T06:46:11.481903 | {
"authors": [
"JimiC",
"gperdomor"
],
"repo": "vscode-icons/vscode-icons",
"url": "https://github.com/vscode-icons/vscode-icons/issues/2048",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
92033836 | "Relative paths" in "Source paths" under "Catalogue properties" gone with 1.8.2 ?
I've not been using Poedit for some weeks. This morning I went to translate a WordPress theme, opened Poedit and it asked me to update, and I did, to 1.8.2
Opened a .po, went to set up paths to source files because I made some changes to the theme and needed to update the catalogue, but I couldn't find the buttons to add "." and ".." that I was used to.
Here is a pic:
With the "+" button a popup window appeared, but it only lets me browse for a folder (an absolute path). It doesn't even work, because it doesn't save the drive letter, only the path, e.g. sites\mysite\ instead of g:\sites\mysite\
I'm using Windows Vista 32bit
doesn't even work because it doesn't save the "drive:", only the absolute path
No, it adds relative path from the location of the PO file, i.e. does automatically the only correct thing, something you had to do manually before (and more likely than not, screw it up).
If you believe there's a bug, kindly provide detailed instructions on how to reproduce it and explain why Poedit is wrong. "I add some path and it is wrong" really isn't enough, please see http://www.chiark.greenend.org.uk/~sgtatham/bugs.html if you don't have experience filing bugs.
A popup came out with a white cross on a red background telling me Poedit couldn't find any source file at the specified location.
To be more precise, I opened a default.po and it already had a path saved, but it was of the kind:
../../../../../../../../../wamp/some/foo/bar/
That made no sense in my configuration. I deleted it and browsed for the parent folder relative to default.po and I recevied the error.
I gave up on the task for another reason, so next time I face this "bug" I'll let you know with more details.
Thank you.
Again, whenever you create a bug (not just for Poedit, anywhere), please include reproducible description.
As it is, I still have absolutely no idea what is wrong (or what you think is wrong). Your latest comment is unclear, not even close to being something reproducible, and doesn't seem to have any relation to the original issue. What little detail it contains is clearly inaccurate (you couldn't possibly have received that error immediately after browsing for a folder) and is most likely explained by you choosing a folder that did not in fact contain any source code.
Poedit 1.8 creates the relative paths automatically, just add the sources folder. That doesn't mean there isn't some bug in there, but so far, there's nothing in the above comments that would indicate that, mostly because you refuse to describe your situation in any meaningful detail 😢
The last comment was just a comment/information, I didn't expect a solution, don't worry. I'm no longer in need to translate the theme and I don't have time to investigate further.
It's just that in the past I was used to typing ".." and I was done, while now I can't type anything, only browse. If this is intended behaviour, it's okay; if the button was hidden or missing due to my configuration, then it was a bug. But you made it clear that since 1.8 there is no more typing paths, so it's okay with me.
| gharchive/issue | 2015-06-30T09:14:20 | 2025-04-01T06:46:11.492694 | {
"authors": [
"DrLightman",
"vslavik"
],
"repo": "vslavik/poedit",
"url": "https://github.com/vslavik/poedit/issues/191",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1169571900 | Added a GIF and Updated README.md
Fixing issue #1 ---
Created and added an animated GIF explaining the working of twrm.io website. (demo.gif)
Made changes in the README.md about the usage of the website and its working, along with the demo file.
Open for feedback on this PR 😁
Also, the GIF shows only our tool's step; it would be really amazing if we could follow this process 👇
Normally mention a user in tweet editor
Now type that mention in twrm.io's magic box ✨
Go back to the tweet editor and paste them below the original mention.
Thanks for your feedback! Will make the changes as much as I can!
Hi @vsnthdev 👋
I've changed the GIF as you asked for.
I've deleted the instructions for using twrm from README.md as I think you know your project much better than I can.
It would be better if you make changes on 'How to use twrm.io' in README.md!
Let me know whether the GIF is OK or not...
Thank you so much ✨ @ShruAgarwal 👏
Thanks for merging it! 😄
Will contribute again if I find something interesting!
| gharchive/pull-request | 2022-03-15T11:53:01 | 2025-04-01T06:46:11.502131 | {
"authors": [
"ShruAgarwal",
"vsnthdev"
],
"repo": "vsnthdev/twrm.io",
"url": "https://github.com/vsnthdev/twrm.io/pull/7",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
} |
197054873 | Added MaterialComponents.
Project URL
Description
Why it should be included to awesome-ios (optional)
Checklist
[ ] Only one project/change is in this pull request
[ ] Addition in chronological order (bottom of category)
[ ] Supports iOS 9 or later
[ ] Supports Swift 3
[ ] Has a commit from less than 2 years ago
[ ] Has a clear README in English
1 Error
:no_entry_sign:
Found 1 link issue
Link issue by awesome_bot
Line
Status
Link
414
301
http://developer.couchbase.com/mobile/ redirects to https://developer.couchbase.com/mobile/
Generated by :no_entry_sign: danger
| gharchive/pull-request | 2016-12-21T23:04:28 | 2025-04-01T06:46:11.507975 | {
"authors": [
"danger-awesome-ios",
"michaeldove"
],
"repo": "vsouza/awesome-ios",
"url": "https://github.com/vsouza/awesome-ios/pull/1361",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
208622969 | Add DSGradientProgressView
Added DSGradientProgressView
Project URL
https://github.com/DholStudio/DSGradientProgressView
Description
DSGradientProgressView is a simple and customizable animated progress bar written in Swift.
Checklist
[x] Only one project/change is in this pull request
[x] Addition in chronological order (bottom of category)
[x] Supports iOS 9 or later
[x] Supports Swift 3
[x] Has a commit from less than 2 years ago
[x] Has a clear README in English
Hey @iabtyagi, cool lib 👍
thanks for contributing!
Thanks!
| gharchive/pull-request | 2017-02-18T07:50:27 | 2025-04-01T06:46:11.511315 | {
"authors": [
"iabtyagi",
"lfarah"
],
"repo": "vsouza/awesome-ios",
"url": "https://github.com/vsouza/awesome-ios/pull/1478",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
541659913 | Added New TabBar Component named BEKCurveTabbar
[New Component]
New UITabbar Custom Component added to the Tab Bar Section.
Compatible with Xcode 10+ and fully customizable via the Interface Builder panel. BEKCurveTabBar derives from the UITabBar class and is compatible with every iOS device.
Project URL
https://github.com/behrad-kzm/BEKCurveTabbar
Category
Tab Bar
Description
Just added a new component in the right formatting, with 2 lines of description about customizability and compatibility with Xcode Interface Builder
Why it should be included to awesome-ios (mandatory)
There is no tab bar in awesome-ios whose appearance a developer can change in real time in Interface Builder at this level of customizability.
It also derives from UITabBarController and UITabBar, so it will stay compatible with every iOS environment.
BEKCurveTabbar has a different appearance on iPhone X screen types.
Checklist
[ ] Has 50 GitHub stargazers or more
[x] Only one project/change is in this pull request
[x] Isn't an archived project
[ ] Has more than one contributor
[ ] Has unit tests, integration tests or UI tests
[ ] Addition in chronological order (bottom of category)
[ ] Supports iOS 9 / tvOS 10 or later
[x] Supports Swift 4 or later
[x] Has a commit from less than 2 years ago
[x] Has a clear README in English
1 Warning
:warning:
Found 16 link issues, a project collaborator will take care of these, thanks :)
Link issues by awesome_bot
Line
Status
Link
4
302
https://triplebyte.com/a/azelHFa/d redirects to https://triplebyte.com/
276
301
https://github.com/analytics-pros/Swift-GA-Tracker-for-Apple-tvOS redirects to https://github.com/adswerve/Swift-GA-Tracker-for-Apple-tvOS
474
404
https://github.com/czater/Colorify
767
301
https://github.com/igormatyushkin014/Sensitive redirects to https://github.com/hellowizman/Sensitive
935
301
https://github.com/onmyway133/Anchors redirects to https://github.com/onmyway133/EasyAnchor
1333
404
https://github.com/jorik041/PerfectAPIClient
1382
301
http://www.catapush.com/ redirects to https://www.catapush.com/
1452
301
https://github.com/adrianmateoaea24/magic-mapper-swift redirects to https://github.com/adrianmteo/magic-mapper-swift
1532
301
https://github.com/IvanVorobei/SPPermission redirects to https://github.com/ivanvorobei/SPPermissions
1615
301
https://github.com/facebook/facebook-swift-sdk redirects to https://github.com/facebookarchive/facebook-swift-sdk
2010
301
https://github.com/Babylonpartners/DrawerKit redirects to https://github.com/babylonhealth/DrawerKit
2097
301
https://github.com/soheil/RadialLayer redirects to https://github.com/soheil-zz/RadialLayer
2138
301
https://github.com/ML-Works/Overlap redirects to https://github.com/MLWOS/Overlap
2505
301
https://github.com/rynecheow/PopupKit redirects to https://github.com/Pointwelve/PopupKit
2548
301
https://github.com/OpenFeyn/KafkaRefresh redirects to https://github.com/HsiaohuiHsiang/KafkaRefresh
3253
301
http://jamesonquave.com/blog/tutorials/ redirects to https://jamesonquave.com/blog/tutorials/
Generated by :no_entry_sign: Danger
added new Files to use the component without interface builder and an example of usage
Thanks for contributing!
| gharchive/pull-request | 2019-12-23T09:16:44 | 2025-04-01T06:46:11.534448 | {
"authors": [
"behrad-kzm",
"danger-awesome-ios",
"vsouza"
],
"repo": "vsouza/awesome-ios",
"url": "https://github.com/vsouza/awesome-ios/pull/2893",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1977858758 | Have issues running on my RPI Homebridge
Describe Your Problem:
Can't even get my Candelas mac.
Have "Error scanning for devices" after pushing the SCAN button in plugins settings dialog, and the next strings in log...
Have installed it from Plugins tab of my HB UI.
Logs:
[05/11/2023, 19:03:30] [Homebridge UI] [homebridge-yeelight-ble] Incoming Request: /scan
[05/11/2023, 19:03:30] [Homebridge UI] [homebridge-yeelight-ble] Error: Command failed: yeelightble scan -t 3
/bin/sh: 1: yeelightble: not found
at ChildProcess.exithandler (node:child_process:422:12)
at ChildProcess.emit (node:events:517:28)
at maybeClose (node:internal/child_process:1098:16)
at Socket.<anonymous> (node:internal/child_process:450:11)
at Socket.emit (node:events:517:28)
at Pipe.<anonymous> (node:net:350:12) {
code: 127,
killed: false,
signal: null,
cmd: 'yeelightble scan -t 3 ',
stdout: '',
stderr: '/bin/sh: 1: yeelightble: not found\n'
}
[05/11/2023, 19:03:35] [Homebridge UI] [homebridge-yeelight-ble] Terminating child process...
[05/11/2023, 19:03:35] [Homebridge UI] [homebridge-yeelight-ble] Child process ended
[05/11/2023, 19:03:37] [Homebridge UI] Starting terminal session
Plugin Config:
In screenshots
Screenshots:
Environment:
Plugin Version: homebridge-yeelight-ble v2.2.0
Homebridge Version: [v1.7.0] ( but had the same with 1.6.x)
Node.js Version: [v18.18.2]
NPM Version: 9.8.1
Operating System: Raspbian GNU/Linux Buster (10)
Thank you,
and I look forward for any support
The error is pretty much self-explanatory: you need to install the Python library yeelightble; refer to the setup section in the documentation.
@vsternbach Thanks! I tried, but got:
pi@homebridge:/var/lib/homebridge $ sudo pip3 install git+https://github.com/vsternbach/yeelightble
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting git+https://github.com/vsternbach/yeelightble
Cloning https://github.com/vsternbach/yeelightble to /tmp/pip-req-build-7128wn0q
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-req-build-7128wn0q/setup.py", line 2, in <module>
from yeelightble.version import __version__
File "/tmp/pip-req-build-7128wn0q/yeelightble/__init__.py", line 2, in <module>
from .lamp import Lamp
File "/tmp/pip-req-build-7128wn0q/yeelightble/lamp.py", line 5, in <module>
from retry import retry
ModuleNotFoundError: No module named 'retry'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-req-build-7128wn0q/
Meanwhile, I found my Candela's MAC with bluetoothctl scan.
@PnnnG don't see your latest comment here, the thing with yeelightble installation is weird indeed, didn't have any problem installing it on rpi, but got the same error when trying to install it on my mac, modified something and hopefully it solves the problem, try reinstalling it
@vsternbach, I solved it! I installed Python 3, then yeelightble with all its requirements manually. And it works! Sometimes I see an inverted status in Homebridge, but that is 99% better than nothing)
Thank you for this really crucial plugin for my smart home! I do love my candelas
Glad to hear the issue is resolved, and yeah, BLE connections are really unreliable and sometimes sending commands to the lamps just fails even with additional retries, so this is indeed a case where it's better than nothing :)
| gharchive/issue | 2023-11-05T16:22:04 | 2025-04-01T06:46:11.544270 | {
"authors": [
"PnnnG",
"vsternbach"
],
"repo": "vsternbach/homebridge-yeelight-ble",
"url": "https://github.com/vsternbach/homebridge-yeelight-ble/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
55053988 | Shipping Options Minicart: loading state when sending the attachment
[ ] Disable the selectors until the orderForm response arrives
[ ] "Loading..." message at the top of the minicart
CRM link.
Integration wiki.
Release with issues 15,16,17,18,13,12 in the 'Beta' environment
Release with issues 19,14,22,23,20,24,15,16,17,18,13,12 in the 'Production' environment
| gharchive/issue | 2015-01-21T17:59:42 | 2025-04-01T06:46:11.567870 | {
"authors": [
"vmattos",
"vtexcrmgithub"
],
"repo": "vtex/front.portal-plugins",
"url": "https://github.com/vtex/front.portal-plugins/issues/18",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |