Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,927 | 3,967,028,934 | IssuesEvent | 2016-05-03 14:58:00 | dotnet/wcf | https://api.github.com/repos/dotnet/wcf | opened | Basic authentication tests fail in CI when using IIS-hosted services | bug Infrastructure | The Basic authentication tests fail on CI runs against IIS-hosted WCF services but pass in self-hosted. Likely cause is that Basic Authentication is not enabled for the CI web sites in IIS.
These are the 2 failing tests:
```
Https_ClientCredentialTypeTests.BasicAuthenticationInvalidPwd_throw_MessageSecurityException
Https_ClientCredentialTypeTests.BasicAuthentication_RoundTrips_Echo
```
Failure looks like this:
```
Https_ClientCredentialTypeTests.BasicAuthenticationInvalidPwd_throw_MessageSecurityException [FAIL]
Assert.Throws() Failure
Expected: typeof(System.ServiceModel.Security.MessageSecurityException)
Actual: typeof(System.ServiceModel.CommunicationException): The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error.
Stack Trace:
D:\j\workspace\outerloop_win---1452b487\src\System.Private.ServiceModel\src\System\ServiceModel\Channels\ServiceChannel.cs(764,0): at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
D:\j\workspace\outerloop_win---1452b487\src\System.Private.ServiceModel\src\System\ServiceModel\Channels\ServiceChannelProxy.cs(371,0): at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(MethodCall methodCall, ProxyOperationRuntime operation)
D:\j\workspace\outerloop_win---1452b487\src\System.Private.ServiceModel\src\System\ServiceModel\Channels\ServiceChannelProxy.cs(136,0): at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(MethodInfo targetMethod, Object[] args)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Reflection.DispatchProxyGenerator.Invoke(Object[] args)
at generatedProxy_2.Echo(String )
D:\j\workspace\outerloop_win---1452b487\src\System.Private.ServiceModel\tests\Scenarios\Security\TransportSecurity\Https\ClientCredentialTypeTests.cs(82,0): at Https_ClientCredentialTypeTests.<>c.<BasicAuthenticationInvalidPwd_throw_MessageSecurityException>b__4_0()
Https_ClientCredentialTypeTests.BasicAuthentication_RoundTrips_Echo [FAIL]
Test Case: BasicAuthentication FAILED with the following errors: Basic echo test.
Test variation:...
BasicAuthentication_RoundTrips_Echo
Using address: 'https://wcfcoresrv2.cloudapp.net/WcfService15//BasicAuth.svc//https-basic'
Unexpected exception was caught: System.ServiceModel.CommunicationException: The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error.
at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout) in D:\j\workspace\outerloop_win---1452b487\src\System.Private.ServiceModel\src\System\ServiceModel\Channels\ServiceChannel.cs:line 764
at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(MethodCall methodCall, ProxyOperationRuntime operation) in D:\j\workspace\outerloop_win---1452b487\src\System.Private.ServiceModel\src\System\ServiceModel\Channels\ServiceChannelProxy.cs:line 371
at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(MethodInfo targetMethod, Object[] args) in D:\j\workspace\outerloop_win---1452b487\src\System.Private.ServiceModel\src\System\ServiceModel\Channels\ServiceChannelProxy.cs:line 136
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Reflection.DispatchProxyGenerator.Invoke(Object[] args)
at generatedProxy_2.Echo(String )
at Https_ClientCredentialTypeTests.BasicAuthentication_RoundTrips_Echo() in D:\j\workspace\outerloop_win---1452b487\src\System.Private.ServiceModel\tests\Scenarios\Security\TransportSecurity\Https\ClientCredentialTypeTests.cs:line 40
``` | 1.0 | Basic authentication tests fail in CI when using IIS-hosted services - The Basic authentication tests fail on CI runs against IIS-hosted WCF services but pass in self-hosted. Likely cause is that Basic Authentication is not enabled for the CI web sites in IIS.
These are the 2 failing tests:
```
Https_ClientCredentialTypeTests.BasicAuthenticationInvalidPwd_throw_MessageSecurityException
Https_ClientCredentialTypeTests.BasicAuthentication_RoundTrips_Echo
```
Failure looks like this:
```
Https_ClientCredentialTypeTests.BasicAuthenticationInvalidPwd_throw_MessageSecurityException [FAIL]
Assert.Throws() Failure
Expected: typeof(System.ServiceModel.Security.MessageSecurityException)
Actual: typeof(System.ServiceModel.CommunicationException): The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error.
Stack Trace:
D:\j\workspace\outerloop_win---1452b487\src\System.Private.ServiceModel\src\System\ServiceModel\Channels\ServiceChannel.cs(764,0): at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
D:\j\workspace\outerloop_win---1452b487\src\System.Private.ServiceModel\src\System\ServiceModel\Channels\ServiceChannelProxy.cs(371,0): at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(MethodCall methodCall, ProxyOperationRuntime operation)
D:\j\workspace\outerloop_win---1452b487\src\System.Private.ServiceModel\src\System\ServiceModel\Channels\ServiceChannelProxy.cs(136,0): at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(MethodInfo targetMethod, Object[] args)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Reflection.DispatchProxyGenerator.Invoke(Object[] args)
at generatedProxy_2.Echo(String )
D:\j\workspace\outerloop_win---1452b487\src\System.Private.ServiceModel\tests\Scenarios\Security\TransportSecurity\Https\ClientCredentialTypeTests.cs(82,0): at Https_ClientCredentialTypeTests.<>c.<BasicAuthenticationInvalidPwd_throw_MessageSecurityException>b__4_0()
Https_ClientCredentialTypeTests.BasicAuthentication_RoundTrips_Echo [FAIL]
Test Case: BasicAuthentication FAILED with the following errors: Basic echo test.
Test variation:...
BasicAuthentication_RoundTrips_Echo
Using address: 'https://wcfcoresrv2.cloudapp.net/WcfService15//BasicAuth.svc//https-basic'
Unexpected exception was caught: System.ServiceModel.CommunicationException: The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error.
at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout) in D:\j\workspace\outerloop_win---1452b487\src\System.Private.ServiceModel\src\System\ServiceModel\Channels\ServiceChannel.cs:line 764
at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(MethodCall methodCall, ProxyOperationRuntime operation) in D:\j\workspace\outerloop_win---1452b487\src\System.Private.ServiceModel\src\System\ServiceModel\Channels\ServiceChannelProxy.cs:line 371
at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(MethodInfo targetMethod, Object[] args) in D:\j\workspace\outerloop_win---1452b487\src\System.Private.ServiceModel\src\System\ServiceModel\Channels\ServiceChannelProxy.cs:line 136
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Reflection.DispatchProxyGenerator.Invoke(Object[] args)
at generatedProxy_2.Echo(String )
at Https_ClientCredentialTypeTests.BasicAuthentication_RoundTrips_Echo() in D:\j\workspace\outerloop_win---1452b487\src\System.Private.ServiceModel\tests\Scenarios\Security\TransportSecurity\Https\ClientCredentialTypeTests.cs:line 40
``` | non_process | basic authentication tests fail in ci when using iis hosted services the basic authentication tests fail on ci runs against iis hosted wcf services but pass in self hosted likely cause is that basic authentication is not enabled for the ci web sites in iis these are the failing tests https clientcredentialtypetests basicauthenticationinvalidpwd throw messagesecurityexception https clientcredentialtypetests basicauthentication roundtrips echo failure looks like this https clientcredentialtypetests basicauthenticationinvalidpwd throw messagesecurityexception assert throws failure expected typeof system servicemodel security messagesecurityexception actual typeof system servicemodel communicationexception the server did not provide a meaningful reply this might be caused by a contract mismatch a premature session shutdown or an internal server error stack trace d j workspace outerloop win src system private servicemodel src system servicemodel channels servicechannel cs at system servicemodel channels servicechannel call string action boolean oneway proxyoperationruntime operation object ins object outs timespan timeout d j workspace outerloop win src system private servicemodel src system servicemodel channels servicechannelproxy cs at system servicemodel channels servicechannelproxy invokeservice methodcall methodcall proxyoperationruntime operation d j workspace outerloop win src system private servicemodel src system servicemodel channels servicechannelproxy cs at system servicemodel channels servicechannelproxy invoke methodinfo targetmethod object args end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system reflection dispatchproxygenerator invoke object args at generatedproxy echo string d j workspace outerloop win src system private servicemodel tests scenarios security transportsecurity https clientcredentialtypetests cs at https 
clientcredentialtypetests c b https clientcredentialtypetests basicauthentication roundtrips echo test case basicauthentication failed with the following errors basic echo test test variation basicauthentication roundtrips echo using address unexpected exception was caught system servicemodel communicationexception the server did not provide a meaningful reply this might be caused by a contract mismatch a premature session shutdown or an internal server error at system servicemodel channels servicechannel call string action boolean oneway proxyoperationruntime operation object ins object outs timespan timeout in d j workspace outerloop win src system private servicemodel src system servicemodel channels servicechannel cs line at system servicemodel channels servicechannelproxy invokeservice methodcall methodcall proxyoperationruntime operation in d j workspace outerloop win src system private servicemodel src system servicemodel channels servicechannelproxy cs line at system servicemodel channels servicechannelproxy invoke methodinfo targetmethod object args in d j workspace outerloop win src system private servicemodel src system servicemodel channels servicechannelproxy cs line end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system reflection dispatchproxygenerator invoke object args at generatedproxy echo string at https clientcredentialtypetests basicauthentication roundtrips echo in d j workspace outerloop win src system private servicemodel tests scenarios security transportsecurity https clientcredentialtypetests cs line | 0 |
168,020 | 26,582,362,507 | IssuesEvent | 2023-01-22 16:09:13 | jsdelivr/www.jsdelivr.com | https://api.github.com/repos/jsdelivr/www.jsdelivr.com | closed | Third-party images and badges in readmes | new design 2022 | In order to keep Google happy and optimize the performance of our project pages I made a serverless image proxy and optimizer on top of Gcore. There is no origin at all and all sources must be manually pre-approved:
Current supported domains I took from popular readmes:
https://img.jsdelivr.com/cloud.githubusercontent.com/assets/835857/14581711/ba623018-0436-11e6-8fce-d2ccd4d379c9.gif
https://img.jsdelivr.com/img.shields.io/badge/code_style-standard-brightgreen.svg
https://img.jsdelivr.com/raw.githubusercontent.com/wiki/js-cookie/js-cookie/Browserstack-logo%402x.png
https://img.jsdelivr.com/github.com/jquery.png?s=20
https://img.jsdelivr.com/upload.wikimedia.org/wikipedia/commons/thumb/1/1a/Canal%2B.svg/2000px-Canal%2B.svg.png
https://img.jsdelivr.com/opencollective.com/bootstrap/sponsor/0/avatar.svg
https://img.jsdelivr.com/flat.badgen.net/circleci/github/nuxt-community/workbox-cdn
https://img.jsdelivr.com/images.opencollective.com/casinofiables-com/b824bab/logo.png
https://img.jsdelivr.com/avatars.githubusercontent.com/u/9919?v=4&s=128
https://img.jsdelivr.com/badgen.net/github/checks/pillarjs/router/master?label=ci
_Originally posted by @jimaek in https://github.com/jsdelivr/www.jsdelivr.com/issues/462#issuecomment-1282432018_
| 1.0 | Third-party images and badges in readmes - In order to keep Google happy and optimize the performance of our project pages I made a serverless image proxy and optimizer on top of Gcore. There is no origin at all and all sources must be manually pre-approved:
Current supported domains I took from popular readmes:
https://img.jsdelivr.com/cloud.githubusercontent.com/assets/835857/14581711/ba623018-0436-11e6-8fce-d2ccd4d379c9.gif
https://img.jsdelivr.com/img.shields.io/badge/code_style-standard-brightgreen.svg
https://img.jsdelivr.com/raw.githubusercontent.com/wiki/js-cookie/js-cookie/Browserstack-logo%402x.png
https://img.jsdelivr.com/github.com/jquery.png?s=20
https://img.jsdelivr.com/upload.wikimedia.org/wikipedia/commons/thumb/1/1a/Canal%2B.svg/2000px-Canal%2B.svg.png
https://img.jsdelivr.com/opencollective.com/bootstrap/sponsor/0/avatar.svg
https://img.jsdelivr.com/flat.badgen.net/circleci/github/nuxt-community/workbox-cdn
https://img.jsdelivr.com/images.opencollective.com/casinofiables-com/b824bab/logo.png
https://img.jsdelivr.com/avatars.githubusercontent.com/u/9919?v=4&s=128
https://img.jsdelivr.com/badgen.net/github/checks/pillarjs/router/master?label=ci
_Originally posted by @jimaek in https://github.com/jsdelivr/www.jsdelivr.com/issues/462#issuecomment-1282432018_
| non_process | third party images and badges in readmes in order to keep google happy and optimize the performance of our project pages i made a serverless image proxy and optimizer on top of gcore there is no origin at all and all sources must be manually pre approved current supported domains i took from popular readmes originally posted by jimaek in | 0 |
663,717 | 22,203,072,584 | IssuesEvent | 2022-06-07 12:55:31 | episphere/connect | https://api.github.com/repos/episphere/connect | closed | Finalize Shipment Modal Word Change | Shipping Dashboard Priority 1 | On the finalize shipment screen of the Shipping Dashboard, when the 'This will finalize the shipment' modal appears, it states "Please enter your name here to indicate this shipment is finalized." Because the user is no longer entering their name but has to enter in their email address, please change the wording to "Please enter your email here to indicate this shipment is finalized." | 1.0 | Finalize Shipment Modal Word Change - On the finalize shipment screen of the Shipping Dashboard, when the 'This will finalize the shipment' modal appears, it states "Please enter your name here to indicate this shipment is finalized." Because the user is no longer entering their name but has to enter in their email address, please change the wording to "Please enter your email here to indicate this shipment is finalized." | non_process | finalize shipment modal word change on the finalize shipment screen of the shipping dashboard when the this will finalize the shipment modal appears it states please enter your name here to indicate this shipment is finalized because the user is no longer entering their name but has to enter in their email address please change the wording to please enter your email here to indicate this shipment is finalized | 0 |
21,180 | 28,149,388,153 | IssuesEvent | 2023-04-02 21:20:35 | AbdElAziz333/Pluto | https://api.github.com/repos/AbdElAziz333/Pluto | closed | Can't reconnect to the server after disconnecting | bug pending release in processing | The new patch still doesn't fix the bug that you can't reconnect to the server after disconnecting.
pluto-mc1.18.2-0.0.5
forgeVersion=40.2.0
mcVersion=1.18.2
mcpVersion=20220404.173914
Server latest.log
`[14Mar2023 10:47:34.490] [Server thread/INFO] [net.minecraft.server.network.ServerLoginPacketListenerImpl/]: com.mojang.authlib.GameProfile@44e8c18d[id=<null>,name=SussyBaka__UwU,properties={},legacy=false] (/x.x.x.x:62991) lost connection: Disconnected`
server debug.log just gets this message from ftbbackups mod everytime i try to recconect (probably unrelated but I'll still include it just in case)
`[14Mar2023 10:47:39.161] [ftbbackups2_QuartzSchedulerThread/DEBUG] [net.creeperhost.ftbbackups.org.quartz.core.QuartzSchedulerThread/]: batch acquisition of 0 triggers`
client latest.log
`[14Mar2023 10:47:04.945] [Render thread/INFO] [net.minecraft.client.gui.screens.ConnectScreen/]: Connecting to x.x.x.x, 25570`
client debug.log
`[14Mar2023 10:47:04.945] [Render thread/INFO] [net.minecraft.client.gui.screens.ConnectScreen/]: Connecting to x.x.x.x, 25570`
`[14Mar2023 10:47:04.969] [Netty Client IO #7/DEBUG] [net.minecraftforge.network.HandshakeHandler/FMLHANDSHAKE]: Starting new vanilla impl connection.`
pluto-mc1.18.2-0.0.5
forgeVersion=40.2.0
mcVersion=1.18.2
mcpVersion=20220404.173914
Server latest.log
`[14Mar2023 10:47:34.490] [Server thread/INFO] [net.minecraft.server.network.ServerLoginPacketListenerImpl/]: com.mojang.authlib.GameProfile@44e8c18d[id=<null>,name=SussyBaka__UwU,properties={},legacy=false] (/x.x.x.x:62991) lost connection: Disconnected`
server debug.log just gets this message from ftbbackups mod everytime i try to recconect (probably unrelated but I'll still include it just in case)
`[14Mar2023 10:47:39.161] [ftbbackups2_QuartzSchedulerThread/DEBUG] [net.creeperhost.ftbbackups.org.quartz.core.QuartzSchedulerThread/]: batch acquisition of 0 triggers`
client latest.log
`[14Mar2023 10:47:04.945] [Render thread/INFO] [net.minecraft.client.gui.screens.ConnectScreen/]: Connecting to x.x.x.x, 25570`
client debug.log
`[14Mar2023 10:47:04.945] [Render thread/INFO] [net.minecraft.client.gui.screens.ConnectScreen/]: Connecting to x.x.x.x, 25570`
`[14Mar2023 10:47:04.969] [Netty Client IO #7/DEBUG] [net.minecraftforge.network.HandshakeHandler/FMLHANDSHAKE]: Starting new vanilla impl connection.`
549 | 3,007,007,846 | IssuesEvent | 2015-07-27 14:04:53 | hbz/nwbib | https://api.github.com/repos/hbz/nwbib | closed | Subject chains with double angle brackets aren't shown correctly in HTML | bug deploy processing | See e.g. http://lobid.org/nwbib/HT018312899. It reads:
`Schlagwortfolge Grabbe, Christian Dietrich | <> Cid | Dramaturgie | Rezeption`
The [underlying data](http://lobid.org/resource?id=HT018312899&format=full): `subjectChain": "Grabbe, Christian Dietrich | <<Der>> Cid | Dramaturgie | Rezeption (231,321)"` | 1.0 | Subject chains with double angle brackets aren't shown correctly in HTML - See e.g. http://lobid.org/nwbib/HT018312899. It reads:
`Schlagwortfolge Grabbe, Christian Dietrich | <> Cid | Dramaturgie | Rezeption`
The [underlying data](http://lobid.org/resource?id=HT018312899&format=full): `subjectChain": "Grabbe, Christian Dietrich | <<Der>> Cid | Dramaturgie | Rezeption (231,321)"` | process | subject chains with double angle brackets aren t shown correctly in html see e g it reads schlagwortfolge grabbe christian dietrich cid dramaturgie rezeption the subjectchain grabbe christian dietrich cid dramaturgie rezeption | 1 |
4,384 | 7,274,960,624 | IssuesEvent | 2018-02-21 11:52:11 | GoogleCloudPlatform/google-cloud-dotnet | https://api.github.com/repos/GoogleCloudPlatform/google-cloud-dotnet | closed | Check SourceLink best practices re PDB files | type: process | See https://github.com/dotnet/sdk/issues/1458 and https://github.com/ctaggart/SourceLink - we may be able to remove the extra chunk of the project file that talks about copying the PDBs. | 1.0 | Check SourceLink best practices re PDB files - See https://github.com/dotnet/sdk/issues/1458 and https://github.com/ctaggart/SourceLink - we may be able to remove the extra chunk of the project file that talks about copying the PDBs. | process | check sourcelink best practices re pdb files see and we may be able to remove the extra chunk of the project file that talks about copying the pdbs | 1 |
209,381 | 7,174,880,662 | IssuesEvent | 2018-01-31 01:58:01 | CarbonLDP/carbonldp-website | https://api.github.com/repos/CarbonLDP/carbonldp-website | closed | Incorrect signature in the access points documentation | priority2: required type: task | The code examples of the access point documentation have incorrect `createAccessPoint` usage.
The correct signature of the `createAccessPoint` method is
```typescript
. createAccessPoint( accessPointObject, [slug] );
```
| 1.0 | Incorrect signature in the access points documentation - The code examples of the access point documentation have incorrect `createAccessPoint` usage.
The correct signature of the `createAccessPoint` method is
```typescript
. createAccessPoint( accessPointObject, [slug] );
```
| non_process | incorrect signature in the access points documentation the code examples of the access point documentation have incorrect createaccesspoint usage the correct signature of the createaccesspoint method is typescript createaccesspoint accesspointobject | 0 |
12,323 | 14,879,626,380 | IssuesEvent | 2021-01-20 08:00:13 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | closed | Integration tests throw error at the end | bug shared-process | When running `./scripts/test-integration.sh`, any extension test suite will succeed but still show the following error right at the end. Note that the process still exits with `0`, though.
```
[main 2021-01-19T10:29:07.561Z] [uncaught exception in main]: TypeError: Cannot read property 'isDevToolsFocused' of null
[main 2021-01-19T10:29:07.561Z] TypeError: Cannot read property 'isDevToolsFocused' of null
at BrowserWindow.n.isDevToolsFocused (electron/js2c/browser_init.js:33:2600)
at Function.n.getFocusedWindow (electron/js2c/browser_init.js:33:1603)
at WindowsMainService.getFocusedWindow (/home/joao/Work/vscode/out/vs/platform/windows/electron-main/windowsMainService.js:998:53)
at WindowsMainService.sendToFocused (/home/joao/Work/vscode/out/vs/platform/windows/electron-main/windowsMainService.js:1015:40)
at CodeApplication.onUnexpectedError (/home/joao/Work/vscode/out/vs/code/electron-main/app.js:248:88)
at ErrorHandler.unexpectedErrorHandler (/home/joao/Work/vscode/out/vs/code/electron-main/app.js:34:60)
at ErrorHandler.onUnexpectedError (/home/joao/Work/vscode/out/vs/base/common/errors.js:43:18)
at Object.onUnexpectedError (/home/joao/Work/vscode/out/vs/base/common/errors.js:60:34)
at Emitter.fire (/home/joao/Work/vscode/out/vs/base/common/event.js:486:34)
at LifecycleMainService.beginOnWillShutdown (/home/joao/Work/vscode/out/vs/platform/lifecycle/electron-main/lifecycleMainService.js:137:34)
at App.<anonymous> (/home/joao/Work/vscode/out/vs/platform/lifecycle/electron-main/lifecycleMainService.js:117:46)
at Object.onceWrapper (events.js:422:26)
at App.emit (events.js:315:20)
at App.windowAllClosedListener (/home/joao/Work/vscode/out/vs/platform/lifecycle/electron-main/lifecycleMainService.js:106:36)
at App.emit (events.js:327:22)
at /home/joao/Work/vscode/out/vs/platform/lifecycle/electron-main/lifecycleMainService.js:383:36
at async LifecycleMainService.kill (/home/joao/Work/vscode/out/vs/platform/lifecycle/electron-main/lifecycleMainService.js:369:13)
at async NativeHostMainService.exit (/home/joao/Work/vscode/out/vs/platform/native/electron-main/nativeHostMainService.js:443:13)
[main 2021-01-19T10:29:07.561Z] [uncaught exception in main]: TypeError: Object has been destroyed
[main 2021-01-19T10:29:07.561Z] TypeError: Object has been destroyed
at SharedProcess.onWillShutdown (/home/joao/Work/vscode/out/vs/code/electron-main/sharedProcess.js:54:20)
at /home/joao/Work/vscode/out/vs/code/electron-main/sharedProcess.js:36:80
at Emitter.fire (/home/joao/Work/vscode/out/vs/base/common/event.js:479:38)
at LifecycleMainService.beginOnWillShutdown (/home/joao/Work/vscode/out/vs/platform/lifecycle/electron-main/lifecycleMainService.js:137:34)
at BrowserWindow.<anonymous> (/home/joao/Work/vscode/out/vs/platform/lifecycle/electron-main/lifecycleMainService.js:209:26)
at BrowserWindow.emit (events.js:327:22)
at /home/joao/Work/vscode/out/vs/platform/lifecycle/electron-main/lifecycleMainService.js:383:36
at async LifecycleMainService.kill (/home/joao/Work/vscode/out/vs/platform/lifecycle/electron-main/lifecycleMainService.js:369:13)
at async NativeHostMainService.exit (/home/joao/Work/vscode/out/vs/platform/native/electron-main/nativeHostMainService.js:443:13)
``` | 1.0 | Integration tests throw error at the end - When running `./scripts/test-integration.sh`, any extension test suite will succeed but still show the following error right at the end. Note that the process still exits with `0`, though.
```
[main 2021-01-19T10:29:07.561Z] [uncaught exception in main]: TypeError: Cannot read property 'isDevToolsFocused' of null
[main 2021-01-19T10:29:07.561Z] TypeError: Cannot read property 'isDevToolsFocused' of null
at BrowserWindow.n.isDevToolsFocused (electron/js2c/browser_init.js:33:2600)
at Function.n.getFocusedWindow (electron/js2c/browser_init.js:33:1603)
at WindowsMainService.getFocusedWindow (/home/joao/Work/vscode/out/vs/platform/windows/electron-main/windowsMainService.js:998:53)
at WindowsMainService.sendToFocused (/home/joao/Work/vscode/out/vs/platform/windows/electron-main/windowsMainService.js:1015:40)
at CodeApplication.onUnexpectedError (/home/joao/Work/vscode/out/vs/code/electron-main/app.js:248:88)
at ErrorHandler.unexpectedErrorHandler (/home/joao/Work/vscode/out/vs/code/electron-main/app.js:34:60)
at ErrorHandler.onUnexpectedError (/home/joao/Work/vscode/out/vs/base/common/errors.js:43:18)
at Object.onUnexpectedError (/home/joao/Work/vscode/out/vs/base/common/errors.js:60:34)
at Emitter.fire (/home/joao/Work/vscode/out/vs/base/common/event.js:486:34)
at LifecycleMainService.beginOnWillShutdown (/home/joao/Work/vscode/out/vs/platform/lifecycle/electron-main/lifecycleMainService.js:137:34)
at App.<anonymous> (/home/joao/Work/vscode/out/vs/platform/lifecycle/electron-main/lifecycleMainService.js:117:46)
at Object.onceWrapper (events.js:422:26)
at App.emit (events.js:315:20)
at App.windowAllClosedListener (/home/joao/Work/vscode/out/vs/platform/lifecycle/electron-main/lifecycleMainService.js:106:36)
at App.emit (events.js:327:22)
at /home/joao/Work/vscode/out/vs/platform/lifecycle/electron-main/lifecycleMainService.js:383:36
at async LifecycleMainService.kill (/home/joao/Work/vscode/out/vs/platform/lifecycle/electron-main/lifecycleMainService.js:369:13)
at async NativeHostMainService.exit (/home/joao/Work/vscode/out/vs/platform/native/electron-main/nativeHostMainService.js:443:13)
[main 2021-01-19T10:29:07.561Z] [uncaught exception in main]: TypeError: Object has been destroyed
[main 2021-01-19T10:29:07.561Z] TypeError: Object has been destroyed
at SharedProcess.onWillShutdown (/home/joao/Work/vscode/out/vs/code/electron-main/sharedProcess.js:54:20)
at /home/joao/Work/vscode/out/vs/code/electron-main/sharedProcess.js:36:80
at Emitter.fire (/home/joao/Work/vscode/out/vs/base/common/event.js:479:38)
at LifecycleMainService.beginOnWillShutdown (/home/joao/Work/vscode/out/vs/platform/lifecycle/electron-main/lifecycleMainService.js:137:34)
at BrowserWindow.<anonymous> (/home/joao/Work/vscode/out/vs/platform/lifecycle/electron-main/lifecycleMainService.js:209:26)
at BrowserWindow.emit (events.js:327:22)
at /home/joao/Work/vscode/out/vs/platform/lifecycle/electron-main/lifecycleMainService.js:383:36
at async LifecycleMainService.kill (/home/joao/Work/vscode/out/vs/platform/lifecycle/electron-main/lifecycleMainService.js:369:13)
at async NativeHostMainService.exit (/home/joao/Work/vscode/out/vs/platform/native/electron-main/nativeHostMainService.js:443:13)
``` | process | integration tests throw error at the end when running scripts test integration sh any extension test suite will succeed but still show the following error right at the end note that the process still exits with though typeerror cannot read property isdevtoolsfocused of null typeerror cannot read property isdevtoolsfocused of null at browserwindow n isdevtoolsfocused electron browser init js at function n getfocusedwindow electron browser init js at windowsmainservice getfocusedwindow home joao work vscode out vs platform windows electron main windowsmainservice js at windowsmainservice sendtofocused home joao work vscode out vs platform windows electron main windowsmainservice js at codeapplication onunexpectederror home joao work vscode out vs code electron main app js at errorhandler unexpectederrorhandler home joao work vscode out vs code electron main app js at errorhandler onunexpectederror home joao work vscode out vs base common errors js at object onunexpectederror home joao work vscode out vs base common errors js at emitter fire home joao work vscode out vs base common event js at lifecyclemainservice beginonwillshutdown home joao work vscode out vs platform lifecycle electron main lifecyclemainservice js at app home joao work vscode out vs platform lifecycle electron main lifecyclemainservice js at object oncewrapper events js at app emit events js at app windowallclosedlistener home joao work vscode out vs platform lifecycle electron main lifecyclemainservice js at app emit events js at home joao work vscode out vs platform lifecycle electron main lifecyclemainservice js at async lifecyclemainservice kill home joao work vscode out vs platform lifecycle electron main lifecyclemainservice js at async nativehostmainservice exit home joao work vscode out vs platform native electron main nativehostmainservice js typeerror object has been destroyed typeerror object has been destroyed at sharedprocess onwillshutdown home joao work vscode out vs 
code electron main sharedprocess js at home joao work vscode out vs code electron main sharedprocess js at emitter fire home joao work vscode out vs base common event js at lifecyclemainservice beginonwillshutdown home joao work vscode out vs platform lifecycle electron main lifecyclemainservice js at browserwindow home joao work vscode out vs platform lifecycle electron main lifecyclemainservice js at browserwindow emit events js at home joao work vscode out vs platform lifecycle electron main lifecyclemainservice js at async lifecyclemainservice kill home joao work vscode out vs platform lifecycle electron main lifecyclemainservice js at async nativehostmainservice exit home joao work vscode out vs platform native electron main nativehostmainservice js | 1 |
428,490 | 12,412,372,394 | IssuesEvent | 2020-05-22 10:25:23 | hildebro/moneysplitter | https://api.github.com/repos/hildebro/moneysplitter | closed | Equalization extensions | high priority up next | - make new table for transactions and equalizations
- add option to show unpaid transaction
- add option to mark transactions as paid
- add option to remind people who still need to pay through the bot | 1.0 | Equalization extensions - - make new table for transactions and equalizations
- add option to show unpaid transaction
- add option to mark transactions as paid
- add option to remind people who still need to pay through the bot | non_process | equalization extensions make new table for transactions and equalizations add option to show unpaid transaction add option to mark transactions as paid add option to remind people who still need to pay through the bot | 0 |
706,135 | 24,261,021,045 | IssuesEvent | 2022-09-27 22:46:54 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | Refactor: Network delegate helpers shouldn't need to have new_url at all | priority/P5 dev-concern closed/stale | We can instead store the original_request_url and compare to that, and then it will make each network delegate helper logic easier. | 1.0 | Refactor: Network delegate helpers shouldn't need to have new_url at all - We can instead store the original_request_url and compare to that, and then it will make each network delegate helper logic easier. | non_process | refactor network delegate helpers shouldn t need to have new url at all we can instead store the original request url and compare to that and then it will make each network delegate helper logic easier | 0 |
20,985 | 27,852,276,315 | IssuesEvent | 2023-03-20 19:41:05 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | closed | Empty environment variables passed to vs code are unset | bug terminal-process | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.75.1 (Universal)
- OS Version: macOS 13.2
When VS Code is launched with a set but empty env var, it will drop it and treat it as unset. This breaks workflows which check for the presence of env vars (which may be empty in some cases) when running VS Code integrations, e.g. running tests, running the debugger.
Steps to Reproduce:
1. Launch vscode using command with `FOO="" BAR="test" code .`
2. In a terminal within vscode run: `[[ -v FOO ]] && echo "foo set"`, and then `[[ -v BAR ]] && echo "bar set"`
3. Observe that `FOO` is missing, `BAR` is set
Compare with running in a terminal manually, with an empty `FOO`:
```sh
$ export FOO=""
$ [[ -v FOO ]] && echo "foo set"
foo set
```
Of the available historical downloads, I found that vscode 1.73.1 still replicates the above, but does seem to correctly pass the empty env vars down to other processes (e.g. the debugger).
| 1.0 | Empty environment variables passed to vs code are unset - <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.75.1 (Universal)
- OS Version: macOS 13.2
When VS Code is launched with a set but empty env var, it will drop it and treat it as unset. This breaks workflows which check for the presence of env vars (which may be empty in some cases) when running VS Code integrations, e.g. running tests, running the debugger.
Steps to Reproduce:
1. Launch vscode using command with `FOO="" BAR="test" code .`
2. In a terminal within vscode run: `[[ -v FOO ]] && echo "foo set"`, and then `[[ -v BAR ]] && echo "bar set"`
3. Observe that `FOO` is missing, `BAR` is set
Compare with running in a terminal manually, with an empty `FOO`:
```sh
$ export FOO=""
$ [[ -v FOO ]] && echo "foo set"
foo set
```
Of the available historical downloads, I found that vscode 1.73.1 still replicates the above, but does seem to correctly pass the empty env vars down to other processes (e.g. the debugger).
| process | empty environment variables passed to vs code are unset does this issue occur when all extensions are disabled yes report issue dialog can assist with this vs code version universal os version macos when vs code is launched with a set but empty env var it will drop it and treat it as unset this breaks workflows which check for the presence of env vars which may be empty in some cases when running vs code integrations e g running tests running the debugger steps to reproduce launch vscode using command with foo bar test code in a terminal within vscode run echo foo set and then echo bar set observe that foo is missing bar is set compare with running in a terminal manually with an empty foo sh export foo echo foo set foo set of the available historical downloads i found that vscode still replicates the above but does seem to correctly pass the empty env vars down to other processes e g the debugger | 1 |
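The distinction the row above hinges on — an environment variable that is *set but empty* versus one that is unset — can be sketched outside VS Code. The variable names follow the repro steps in the issue; the truthiness filter is hypothetical, added only to mimic the reported symptom:

```python
import os
import subprocess
import sys

# A variable that is set but empty is still *set*.
env = dict(os.environ)
env["FOO"] = ""       # set but empty -- must survive
env["BAR"] = "test"   # set and non-empty

# A pass-through that filters on truthiness silently drops FOO:
buggy_env = {k: v for k, v in env.items() if v}
print("FOO" in buggy_env, "BAR" in buggy_env)  # False True

# Handing the *unfiltered* env to a child process keeps FOO set:
child_sees_foo = subprocess.run(
    [sys.executable, "-c", "import os; print('FOO' in os.environ)"],
    env=env, capture_output=True, text=True,
).stdout.strip()
print(child_sees_foo)  # True
```

The `[[ -v FOO ]]` check from the repro steps tests exactly this set-vs-unset distinction, which a truthiness-based filter erases.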
15,991 | 20,188,203,525 | IssuesEvent | 2022-02-11 01:17:43 | savitamittalmsft/WAS-SEC-TEST | https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST | opened | Review and consider elevated security capabilities for Azure workloads | WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Governance Standards | <a href="https://azure.microsoft.com/solutions/confidential-compute/">Review and consider elevated security capabilities for Azure workloads</a>
<p><b>Why Consider This?</b></p>
Dedicated HSMs and Confidential Computing can enhance the security of an Azure workload, but can add operational complexity.
<p><b>Context</b></p>
<p><span>These types of measures have the potential to enhance security and meet regulatory requirements, but can introduce complexity that may negatively impact operations and efficiency.</span></p>
<p><b>Suggested Actions</b></p>
<p><span>Dedicated Hardware Security Modules and Confidential Computing may address organizational security or regulatory requirements. Review these technologies and consider whether they are required to meet specific organizational security or regulatory requirements.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/Security/governance#elevated-security-capabilities" target="_blank"><span>Elevated security capabilities</span></a><span /></p> | 1.0 | Review and consider elevated security capabilities for Azure workloads - <a href="https://azure.microsoft.com/solutions/confidential-compute/">Review and consider elevated security capabilities for Azure workloads</a>
<p><b>Why Consider This?</b></p>
Dedicated HSMs and Confidential Computing can enhance the security of an Azure workload, but can add operational complexity.
<p><b>Context</b></p>
<p><span>These types of measures have the potential to enhance security and meet regulatory requirements, but can introduce complexity that may negatively impact operations and efficiency.</span></p>
<p><b>Suggested Actions</b></p>
<p><span>Dedicated Hardware Security Modules and Confidential Computing may address organizational security or regulatory requirements. Review these technologies and consider whether they are required to meet specific organizational security or regulatory requirements.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/Security/governance#elevated-security-capabilities" target="_blank"><span>Elevated security capabilities</span></a><span /></p> | process | review and consider elevated security capabilities for azure workloads why consider this dedicated hsms and confidential computing can enhance the security of an azure workload but can add operational complexity context these types of measures have the potential to enhance security and meet regulatory requirements but can introduce complexity that may negatively impact operations and efficiency suggested actions dedicated hardware security modules and confidential computing may address organizational security or regulatory requirements nbsp review these technologies and consider whether they are required to meet specific organizational security or regulatory requirements learn more elevated security capabilities | 1 |
21,337 | 29,047,111,361 | IssuesEvent | 2023-05-13 18:01:00 | python/cpython | https://api.github.com/repos/python/cpython | closed | Disallow fork in a subinterpreter affects multiprocessing plugin | 3.8 topic-C-API topic-multiprocessing | BPO | [43514](https://bugs.python.org/issue43514)
--- | :---
Nosy | @tiran, @franku
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2021-03-16.14:41:07.868>
labels = ['expert-C-API', 'type-bug', '3.8']
title = 'Disallow fork in a subinterpreter affects multiprocessing plugin'
updated_at = <Date 2021-03-16.20:48:00.370>
user = 'https://github.com/franku'
```
bugs.python.org fields:
```python
activity = <Date 2021-03-16.20:48:00.370>
actor = 'franku'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['C API']
creation = <Date 2021-03-16.14:41:07.868>
creator = 'franku'
dependencies = []
files = []
hgrepos = []
issue_num = 43514
keywords = []
message_count = 5.0
messages = ['388841', '388842', '388876', '388877', '388879']
nosy_count = 2.0
nosy_names = ['christian.heimes', 'franku']
pr_nums = []
priority = 'normal'
resolution = None
stage = None
status = 'open'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue43514'
versions = ['Python 3.8']
```
</p></details>
| 1.0 | Disallow fork in a subinterpreter affects multiprocessing plugin - BPO | [43514](https://bugs.python.org/issue43514)
--- | :---
Nosy | @tiran, @franku
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2021-03-16.14:41:07.868>
labels = ['expert-C-API', 'type-bug', '3.8']
title = 'Disallow fork in a subinterpreter affects multiprocessing plugin'
updated_at = <Date 2021-03-16.20:48:00.370>
user = 'https://github.com/franku'
```
bugs.python.org fields:
```python
activity = <Date 2021-03-16.20:48:00.370>
actor = 'franku'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['C API']
creation = <Date 2021-03-16.14:41:07.868>
creator = 'franku'
dependencies = []
files = []
hgrepos = []
issue_num = 43514
keywords = []
message_count = 5.0
messages = ['388841', '388842', '388876', '388877', '388879']
nosy_count = 2.0
nosy_names = ['christian.heimes', 'franku']
pr_nums = []
priority = 'normal'
resolution = None
stage = None
status = 'open'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue43514'
versions = ['Python 3.8']
```
</p></details>
| process | disallow fork in a subinterpreter affects multiprocessing plugin bpo nosy tiran franku note these values reflect the state of the issue at the time it was migrated and might not reflect the current state show more details github fields python assignee none closed at none created at labels title disallow fork in a subinterpreter affects multiprocessing plugin updated at user bugs python org fields python activity actor franku assignee none closed false closed date none closer none components creation creator franku dependencies files hgrepos issue num keywords message count messages nosy count nosy names pr nums priority normal resolution none stage none status open superseder none type behavior url versions | 1 |
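The row above records mostly tracker metadata, but the underlying topic — fork being disallowed in a subinterpreter while `multiprocessing` defaults to it on POSIX — has a standard defensive pattern: request a non-fork start method explicitly. A minimal sketch, assuming the embedding application cannot fork:

```python
import multiprocessing as mp

# "spawn" starts a fresh interpreter instead of calling fork(), and is
# available on every platform; "fork" is POSIX-only and is what a
# subinterpreter environment may disallow.
ctx = mp.get_context("spawn")
method_name = type(ctx).__name__
print(method_name)                            # SpawnContext
print("spawn" in mp.get_all_start_methods())  # True
```

Pools and processes created from `ctx` then never touch `fork()`.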
7,547 | 10,673,991,918 | IssuesEvent | 2019-10-21 08:30:55 | Swind/pure-python-adb | https://api.github.com/repos/Swind/pure-python-adb | closed | cant use su command | enhancement processing | device.shell('su cd /data/data/<app_pkg>/databases/')
blocking code. How to make request with su??
| 1.0 | cant use su command - device.shell('su cd /data/data/<app_pkg>/databases/')
blocking code. How to make request with su??
| process | cant use su command device shell su cd data data databases blocking code how to make request with su | 1 |
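A likely reason the call in the row above blocks: in common Android `su` implementations, the first positional argument is treated as a user name, so `su cd /data/...` opens an interactive root shell that never exits. Wrapping the command with `su -c '...'` (a convention of SuperSU/Magisk-style binaries, assumed here) runs one command and returns. A string-only sketch, keeping the issue's `<app_pkg>` placeholder:

```python
import shlex

def as_root(command: str) -> str:
    # Quote the whole command so su receives it as a single -c argument.
    return "su -c " + shlex.quote(command)

cmd = as_root("ls /data/data/<app_pkg>/databases/")
print(cmd)  # su -c 'ls /data/data/<app_pkg>/databases/'
```

The resulting string would then be passed to `device.shell(...)` instead of the bare `su cd ...` form.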
41,673 | 16,828,626,205 | IssuesEvent | 2021-06-17 22:45:40 | Rothamsted/knetminer | https://api.github.com/repos/Rothamsted/knetminer | closed | Gene View shows wrong numbers in Evidence column | bug project:web service | In the current master branch, searching with a Genelist but without keywords shows the wrong numbers in the EVIDENCE column of Gene View. Many concept types like Trait, Phenotype, BioProc have always TWO duplicated evidences. On the production it's working as expected. See here comparison of genome API output on dev and live server:
http://babvs72.rothamsted.ac.uk:9100/ws/wheatknet/genome?list=TRAESCS3D02G004100
{"geneTable":"ONDEX-ID\tACCESSION\tGENE NAME\tCHRO\tSTART\tTAXID\tSCORE\tUSER\tQTL\tEVIDENCE\n35264\tTRAESCS3D02G004100\tMFT\t3D\t1733084\t4565\t193.30\tyes\t\tPath__2__Hormone signaling, transpor...//Hormone signaling, transpor...||ProtDomain__2__PEBP_euk//PEBP_euk||Phenotype__2__DECREASED GERMINATION WHEN ...//DECREASED GERMINATION WHEN ...||MolFunc__1__Phosphatidylethanolamine Bi...||CelComp__2__Endoplasmic Reticulum//Endoplasmic Reticulum||**BioProc__2__Abscisic Acid-activated Sig...//Abscisic Acid-activated Sig...**||Gene__2__WRI1//WRI1||Publication__48__PMID:31077147//PMID:30794545//PMID:31012027//PMID:31481195//PMID:30061395//PMID:28254780//PMID:28332758//PMID:29398940//PMID:28240777//PMID:26950112//PMID:27446143//PMID:27293186//PMID:27389636//PMID:26891287//PMID:26873978//PMID:25931984//PMID:24932489//PMID:25464340//PMID:24879400//PMID:24444091||Trait__2__FT10//FT10||Reaction__1__ARF inactivation by AUX/IAA||Protein__2__Q41261//Q41261||SNPEffect__2__START_LOST//START_LOST\n",
https://knetminer.com/wheatknet/genome?list=TRAESCS3D02G004100
{"geneTable":"ONDEX-ID\tACCESSION\tGENE NAME\tCHRO\tSTART\tTAXID\tSCORE\tUSER\tQTL\tEVIDENCE\n35264\tTRAESCS3D02G004100\tMFT\t3D\t1733084\t4565\t183.84\tyes\t\tProtDomain__4__Phosphatidylethanolamine-bd_CS//PEBP//PEBP-like_sf//PEBP_euk||Phenotype__2__DECREASED RATE OF GERMINATION...//DECREASED GERMINATION WHEN GR...||MolFunc__1__Phosphatidylethanolamine Bind...||Gene__36__ERF039//TRAESCS6A02G343300//NFYA8//TRAESCS6D02G268100//TRAESCS2A02G370400//MYB96//TRAESCS4A02G180900//TRAESCS3A02G485400//NFYA6//MKRN//TRAESCS3D02G434700//TULP7//TRAESCS5D02G496300//MFT//TRAESCS4B02G101400//TRAESCS5D02G158300//TRAESCS3A02G486500//NFYB10//MFT//TULP4//TRAESCS1D02G275400//ATMYB69//TRAESCS7D02G259600//ATMYB69//ATMYB69//TRAESCS5A02G491500//NF-YB1//WRI1//TCX2//MFT//MFT//ATY13//TRB1//TRAESCS3B02G533200//TRAESCS7A02G258700//WRI1||**BioProc__13__Regulation Of Timing Of Trans...//Negative Regulation Of Flower...//Cell Differentiation//Inflorescence Development//Response To Abscisic Acid//Photoperiodism, Flowering//Regulation Of Flower Developm...//Short-day Photoperiodism, Flo...//Flower Development//Short-day Photoperiodism//Positive Regulation Of Seed G...//Vegetative To Reproductive Ph...//Abscisic Acid-activated Signa...**||CelComp__3__Nucleus//Cytoplasm//Endoplasmic Reticulum||Publication__46__PMID:30794545//PMID:30061395//PMID:28254780//PMID:27862469//PMID:29398940//PMID:28240777//PMID:26950112//PMID:27446143//PMID:27293186//PMID:27389636//PMID:26891287//PMID:26873978//PMID:25931984//PMID:24932489//PMID:25464340//PMID:24879400//PMID:24444091//PMID:25330236//PMID:24280374//PMID:23590427||Trait__17__Grain germination//Storage 56 days//Mg25//Ca43//Grain yield//Anthesis time//mineral and ion content trait//seed maturation//Seed Dormancy//grain number//flowering time trait//seed dormancy//avrB//Seedling Growth//grain yield trait//Cd114//FT10||Protein__12__Q8VWH2//Q9XH44//Q9XH43//Q656A5//Q93WI9//O82088//Q9ASJ1//AT1G18100.1//Q9XH42//TRAESCS3D02G004100.1//Q9XFK7//Q41261\n" | 1.0 | Gene View 
shows wrong numbers in Evidence column - In the current master branch, searching with a Genelist but without keywords shows the wrong numbers in the EVIDENCE column of Gene View. Many concept types like Trait, Phenotype, BioProc have always TWO duplicated evidences. On the production it's working as expected. See here comparison of genome API output on dev and live server:
http://babvs72.rothamsted.ac.uk:9100/ws/wheatknet/genome?list=TRAESCS3D02G004100
{"geneTable":"ONDEX-ID\tACCESSION\tGENE NAME\tCHRO\tSTART\tTAXID\tSCORE\tUSER\tQTL\tEVIDENCE\n35264\tTRAESCS3D02G004100\tMFT\t3D\t1733084\t4565\t193.30\tyes\t\tPath__2__Hormone signaling, transpor...//Hormone signaling, transpor...||ProtDomain__2__PEBP_euk//PEBP_euk||Phenotype__2__DECREASED GERMINATION WHEN ...//DECREASED GERMINATION WHEN ...||MolFunc__1__Phosphatidylethanolamine Bi...||CelComp__2__Endoplasmic Reticulum//Endoplasmic Reticulum||**BioProc__2__Abscisic Acid-activated Sig...//Abscisic Acid-activated Sig...**||Gene__2__WRI1//WRI1||Publication__48__PMID:31077147//PMID:30794545//PMID:31012027//PMID:31481195//PMID:30061395//PMID:28254780//PMID:28332758//PMID:29398940//PMID:28240777//PMID:26950112//PMID:27446143//PMID:27293186//PMID:27389636//PMID:26891287//PMID:26873978//PMID:25931984//PMID:24932489//PMID:25464340//PMID:24879400//PMID:24444091||Trait__2__FT10//FT10||Reaction__1__ARF inactivation by AUX/IAA||Protein__2__Q41261//Q41261||SNPEffect__2__START_LOST//START_LOST\n",
https://knetminer.com/wheatknet/genome?list=TRAESCS3D02G004100
{"geneTable":"ONDEX-ID\tACCESSION\tGENE NAME\tCHRO\tSTART\tTAXID\tSCORE\tUSER\tQTL\tEVIDENCE\n35264\tTRAESCS3D02G004100\tMFT\t3D\t1733084\t4565\t183.84\tyes\t\tProtDomain__4__Phosphatidylethanolamine-bd_CS//PEBP//PEBP-like_sf//PEBP_euk||Phenotype__2__DECREASED RATE OF GERMINATION...//DECREASED GERMINATION WHEN GR...||MolFunc__1__Phosphatidylethanolamine Bind...||Gene__36__ERF039//TRAESCS6A02G343300//NFYA8//TRAESCS6D02G268100//TRAESCS2A02G370400//MYB96//TRAESCS4A02G180900//TRAESCS3A02G485400//NFYA6//MKRN//TRAESCS3D02G434700//TULP7//TRAESCS5D02G496300//MFT//TRAESCS4B02G101400//TRAESCS5D02G158300//TRAESCS3A02G486500//NFYB10//MFT//TULP4//TRAESCS1D02G275400//ATMYB69//TRAESCS7D02G259600//ATMYB69//ATMYB69//TRAESCS5A02G491500//NF-YB1//WRI1//TCX2//MFT//MFT//ATY13//TRB1//TRAESCS3B02G533200//TRAESCS7A02G258700//WRI1||**BioProc__13__Regulation Of Timing Of Trans...//Negative Regulation Of Flower...//Cell Differentiation//Inflorescence Development//Response To Abscisic Acid//Photoperiodism, Flowering//Regulation Of Flower Developm...//Short-day Photoperiodism, Flo...//Flower Development//Short-day Photoperiodism//Positive Regulation Of Seed G...//Vegetative To Reproductive Ph...//Abscisic Acid-activated Signa...**||CelComp__3__Nucleus//Cytoplasm//Endoplasmic Reticulum||Publication__46__PMID:30794545//PMID:30061395//PMID:28254780//PMID:27862469//PMID:29398940//PMID:28240777//PMID:26950112//PMID:27446143//PMID:27293186//PMID:27389636//PMID:26891287//PMID:26873978//PMID:25931984//PMID:24932489//PMID:25464340//PMID:24879400//PMID:24444091//PMID:25330236//PMID:24280374//PMID:23590427||Trait__17__Grain germination//Storage 56 days//Mg25//Ca43//Grain yield//Anthesis time//mineral and ion content trait//seed maturation//Seed Dormancy//grain number//flowering time trait//seed dormancy//avrB//Seedling Growth//grain yield trait//Cd114//FT10||Protein__12__Q8VWH2//Q9XH44//Q9XH43//Q656A5//Q93WI9//O82088//Q9ASJ1//AT1G18100.1//Q9XH42//TRAESCS3D02G004100.1//Q9XFK7//Q41261\n" | non_process | 
gene view shows wrong numbers in evidence column in the current master branch searching with a genelist but without keywords shows the wrong numbers in the evidence column of gene view many concept types like trait phenotype bioproc have always two duplicated evidences on the production it s working as expected see here comparison of genome api output on dev and live server genetable ondex id taccession tgene name tchro tstart ttaxid tscore tuser tqtl tevidence tmft tyes t tpath hormone signaling transpor hormone signaling transpor protdomain pebp euk pebp euk phenotype decreased germination when decreased germination when molfunc phosphatidylethanolamine bi celcomp endoplasmic reticulum endoplasmic reticulum bioproc abscisic acid activated sig abscisic acid activated sig gene publication pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid trait reaction arf inactivation by aux iaa protein snpeffect start lost start lost n genetable ondex id taccession tgene name tchro tstart ttaxid tscore tuser tqtl tevidence tmft tyes t tprotdomain phosphatidylethanolamine bd cs pebp pebp like sf pebp euk phenotype decreased rate of germination decreased germination when gr molfunc phosphatidylethanolamine bind gene mkrn mft mft nf mft mft bioproc regulation of timing of trans negative regulation of flower cell differentiation inflorescence development response to abscisic acid photoperiodism flowering regulation of flower developm short day photoperiodism flo flower development short day photoperiodism positive regulation of seed g vegetative to reproductive ph abscisic acid activated signa celcomp nucleus cytoplasm endoplasmic reticulum publication pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid pmid trait grain germination storage days grain yield anthesis time mineral and ion content trait seed maturation seed dormancy grain number flowering time trait seed dormancy avrb seedling 
growth grain yield trait protein n | 0 |
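The EVIDENCE cells in the API output above use the format `<type>__<count>__<item1//item2//...>`, with entries joined by `||` (e.g. `Trait__2__FT10//FT10`). A small consistency check makes the reported symptom — every concept listed twice — easy to spot programmatically; the sample cell below is taken from the dev-server output quoted in the issue:

```python
def parse_evidence(cell: str):
    # Map each concept type to (declared count, list of items).
    parsed = {}
    for entry in cell.split("||"):
        ctype, count, items = entry.split("__", 2)
        parsed[ctype] = (int(count), items.split("//"))
    return parsed

cell = "Trait__2__FT10//FT10||Gene__2__WRI1//WRI1"
evidence = parse_evidence(cell)

# Counts match the item lists, but every item list is pure duplicates:
counts_ok = all(n == len(items) for n, items in evidence.values())
all_duplicated = all(len(set(items)) < n for n, items in evidence.values())
print(counts_ok, all_duplicated)  # True True
```

On the production output quoted second, the same check would show varied item lists rather than duplicated pairs.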
89,802 | 18,045,462,030 | IssuesEvent | 2021-09-18 20:29:02 | julz0815/veracode-flaws-to-issues | https://api.github.com/repos/julz0815/veracode-flaws-to-issues | closed | Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection') ('SQL Injection') [VID:10] | VeracodeFlaw: High Veracode Policy Scan | **Filename:** https://github.com/julz0815/veracode-flaws-to-issues/blob/d80f36548eefc759f32d8f170fdde9e3c4af2f1e/src/main/java/com/veracode/verademo/controller/UserController.java#L384
**Line:** 384
**CWE:** 89 (Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection') ('SQL Injection'))
<span>This database query contains a SQL injection flaw. The call to java.sql.Statement.execute() constructs a dynamic SQL query using a variable derived from untrusted input. An attacker could exploit this flaw to execute arbitrary SQL queries against the database. The first argument to execute() contains tainted data from the variable query. The tainted data originated from earlier calls to AnnotationVirtualController.vc_annotation_entry, and java.sql.Statement.executeQuery.</span> <span>Avoid dynamically constructing SQL queries. Instead, use parameterized prepared statements to prevent the database from interpreting the contents of bind variables as part of the query. Always validate untrusted input to ensure that it conforms to the expected format, using centralized data validation routines when possible.</span> <span>References: <a href="https://cwe.mitre.org/data/definitions/89.html">CWE</a> <a href="https://www.owasp.org/index.php/SQL_injection">OWASP</a> <a href="https://webappsec.pbworks.com/SQL-Injection">WASC</a></span> | 2.0 | Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection') ('SQL Injection') [VID:10] - **Filename:** https://github.com/julz0815/veracode-flaws-to-issues/blob/d80f36548eefc759f32d8f170fdde9e3c4af2f1e/src/main/java/com/veracode/verademo/controller/UserController.java#L384
**Line:** 384
**CWE:** 89 (Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection') ('SQL Injection'))
<span>This database query contains a SQL injection flaw. The call to java.sql.Statement.execute() constructs a dynamic SQL query using a variable derived from untrusted input. An attacker could exploit this flaw to execute arbitrary SQL queries against the database. The first argument to execute() contains tainted data from the variable query. The tainted data originated from earlier calls to AnnotationVirtualController.vc_annotation_entry, and java.sql.Statement.executeQuery.</span> <span>Avoid dynamically constructing SQL queries. Instead, use parameterized prepared statements to prevent the database from interpreting the contents of bind variables as part of the query. Always validate untrusted input to ensure that it conforms to the expected format, using centralized data validation routines when possible.</span> <span>References: <a href="https://cwe.mitre.org/data/definitions/89.html">CWE</a> <a href="https://www.owasp.org/index.php/SQL_injection">OWASP</a> <a href="https://webappsec.pbworks.com/SQL-Injection">WASC</a></span> | non_process | improper neutralization of special elements used in an sql command sql injection sql injection filename line cwe improper neutralization of special elements used in an sql command sql injection sql injection this database query contains a sql injection flaw the call to java sql statement execute constructs a dynamic sql query using a variable derived from untrusted input an attacker could exploit this flaw to execute arbitrary sql queries against the database the first argument to execute contains tainted data from the variable query the tainted data originated from earlier calls to annotationvirtualcontroller vc annotation entry and java sql statement executequery avoid dynamically constructing sql queries instead use parameterized prepared statements to prevent the database from interpreting the contents of bind variables as part of the query always validate untrusted input to ensure that it conforms to the expected 
format using centralized data validation routines when possible references a href a href a href | 0 |
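The remediation the flaw description above recommends — parameterized statements instead of string-built SQL — can be demonstrated concisely. The flagged code is Java/JDBC; `sqlite3` is used here only to keep the sketch self-contained and runnable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

attacker = "nobody' OR '1'='1"

# Vulnerable: untrusted input concatenated into the SQL text.
tainted = "SELECT count(*) FROM users WHERE name = '%s'" % attacker
hits_vulnerable = conn.execute(tainted).fetchone()[0]
print(hits_vulnerable)  # 1 -- the injected OR clause matched every row

# Parameterized: the driver passes the input as data, never as SQL.
hits_safe = conn.execute(
    "SELECT count(*) FROM users WHERE name = ?", (attacker,)
).fetchone()[0]
print(hits_safe)        # 0 -- no user is literally named that
```

In the flagged Java code the equivalent change is moving from `Statement.execute(query)` to a `PreparedStatement` with `?` placeholders.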
48,418 | 13,068,519,455 | IssuesEvent | 2020-07-31 03:50:15 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | [IceHive] Unused variables (Trac #2363) | Migrated from Trac combo reconstruction defect | I get the following warning when compiling IceHive
```text
[281/1701] Building CXX object IceHive/CMakeFiles/IceHive.dir/private/IceHive/TriggerSplitter.cxx.o
In file included from /Users/kmeagher/icecube/combo/src/IceHive/private/IceHive/TriggerSplitter.cxx:32:
/Users/kmeagher/icecube/combo/src/IceHive/private/IceHive/TriggerSplitter.h:123:12: warning: private field 'min_offset_' is not used [-Wunused-private-field]
double min_offset_, max_offset_;
^
/Users/kmeagher/icecube/combo/src/IceHive/private/IceHive/TriggerSplitter.h:123:25: warning: private field 'max_offset_' is not used [-Wunused-private-field]
double min_offset_, max_offset_;
^
2 warnings generated.
```
Both of the classes `TriggerSplitter` and `TriggerSplitter::SubTrigger` have variables `min_offset_` and `max_offset_`. Neither of them appear to do anything and should be removed
Migrated from https://code.icecube.wisc.edu/ticket/2363
```json
{
"status": "closed",
"changetime": "2020-06-24T12:31:42",
"description": "I get the following warning when compiling IceHive\n\n{{{\n[281/1701] Building CXX object IceHive/CMakeFiles/IceHive.dir/private/IceHive/TriggerSplitter.cxx.o\nIn file included from /Users/kmeagher/icecube/combo/src/IceHive/private/IceHive/TriggerSplitter.cxx:32:\n/Users/kmeagher/icecube/combo/src/IceHive/private/IceHive/TriggerSplitter.h:123:12: warning: private field 'min_offset_' is not used [-Wunused-private-field]\n double min_offset_, max_offset_;\n ^\n/Users/kmeagher/icecube/combo/src/IceHive/private/IceHive/TriggerSplitter.h:123:25: warning: private field 'max_offset_' is not used [-Wunused-private-field]\n double min_offset_, max_offset_;\n ^\n2 warnings generated.\n\n}}}\n\nBoth of the classes `TriggerSplitter` and `TriggerSplitter::SubTrigger` have variables `min_offset_` and `max_offset_`. Neither of them appear to do anything and should be removed",
"reporter": "kjmeagher",
"cc": "",
"resolution": "insufficient resources",
"_ts": "1593001902142004",
"component": "combo reconstruction",
"summary": "[IceHive] Unused variables",
"priority": "normal",
"keywords": "",
"time": "2019-10-04T16:31:27",
"milestone": "Autumnal Equinox 2020",
"owner": "",
"type": "defect"
}
```
| 1.0 | [IceHive] Unused variables (Trac #2363) - I get the following warning when compiling IceHive
```text
[281/1701] Building CXX object IceHive/CMakeFiles/IceHive.dir/private/IceHive/TriggerSplitter.cxx.o
In file included from /Users/kmeagher/icecube/combo/src/IceHive/private/IceHive/TriggerSplitter.cxx:32:
/Users/kmeagher/icecube/combo/src/IceHive/private/IceHive/TriggerSplitter.h:123:12: warning: private field 'min_offset_' is not used [-Wunused-private-field]
double min_offset_, max_offset_;
^
/Users/kmeagher/icecube/combo/src/IceHive/private/IceHive/TriggerSplitter.h:123:25: warning: private field 'max_offset_' is not used [-Wunused-private-field]
double min_offset_, max_offset_;
^
2 warnings generated.
```
Both of the classes `TriggerSplitter` and `TriggerSplitter::SubTrigger` have variables `min_offset_` and `max_offset_`. Neither of them appear to do anything and should be removed
Migrated from https://code.icecube.wisc.edu/ticket/2363
```json
{
"status": "closed",
"changetime": "2020-06-24T12:31:42",
"description": "I get the following warning when compiling IceHive\n\n{{{\n[281/1701] Building CXX object IceHive/CMakeFiles/IceHive.dir/private/IceHive/TriggerSplitter.cxx.o\nIn file included from /Users/kmeagher/icecube/combo/src/IceHive/private/IceHive/TriggerSplitter.cxx:32:\n/Users/kmeagher/icecube/combo/src/IceHive/private/IceHive/TriggerSplitter.h:123:12: warning: private field 'min_offset_' is not used [-Wunused-private-field]\n double min_offset_, max_offset_;\n ^\n/Users/kmeagher/icecube/combo/src/IceHive/private/IceHive/TriggerSplitter.h:123:25: warning: private field 'max_offset_' is not used [-Wunused-private-field]\n double min_offset_, max_offset_;\n ^\n2 warnings generated.\n\n}}}\n\nBoth of the classes `TriggerSplitter` and `TriggerSplitter::SubTrigger` have variables `min_offset_` and `max_offset_`. Neither of them appear to do anything and should be removed",
"reporter": "kjmeagher",
"cc": "",
"resolution": "insufficient resources",
"_ts": "1593001902142004",
"component": "combo reconstruction",
"summary": "[IceHive] Unused variables",
"priority": "normal",
"keywords": "",
"time": "2019-10-04T16:31:27",
"milestone": "Autumnal Equinox 2020",
"owner": "",
"type": "defect"
}
```
| non_process | unused variables trac i get the following warning when compiling icehive text building cxx object icehive cmakefiles icehive dir private icehive triggersplitter cxx o in file included from users kmeagher icecube combo src icehive private icehive triggersplitter cxx users kmeagher icecube combo src icehive private icehive triggersplitter h warning private field min offset is not used double min offset max offset users kmeagher icecube combo src icehive private icehive triggersplitter h warning private field max offset is not used double min offset max offset warnings generated both of the classes triggersplitter and triggersplitter subtrigger have variables min offset and max offset neither of them appear to do anything and should be removed migrated from json status closed changetime description i get the following warning when compiling icehive n n n building cxx object icehive cmakefiles icehive dir private icehive triggersplitter cxx o nin file included from users kmeagher icecube combo src icehive private icehive triggersplitter cxx n users kmeagher icecube combo src icehive private icehive triggersplitter h warning private field min offset is not used n double min offset max offset n n users kmeagher icecube combo src icehive private icehive triggersplitter h warning private field max offset is not used n double min offset max offset n warnings generated n n n nboth of the classes triggersplitter and triggersplitter subtrigger have variables min offset and max offset neither of them appear to do anything and should be removed reporter kjmeagher cc resolution insufficient resources ts component combo reconstruction summary unused variables priority normal keywords time milestone autumnal equinox owner type defect | 0 |
41,587 | 21,784,878,045 | IssuesEvent | 2022-05-14 01:39:03 | flybywiresim/a32nx | https://api.github.com/repos/flybywiresim/a32nx | closed | Coordinates north of 8959.9N/0000.0E cause the sim to freeze | Bug ND Performance | ### Aircraft Version
Development
### Build info
```json
{
"built": "2021-12-25T19:32:18+00:00",
"ref": "master",
"sha": "ed16b41706c8f8a97144b3f22c3a0a28eb9eb327",
"actor": "aguther",
"event_name": "manual"
}
```
### Describe the bug
1. When I try to enter coordinates north of 8959.9N/0000.0E into the MCDU (e.g. 9000.0N/0000.0E), the simulation freezes, or the MCDU tells me the format is incorrect.
It freezes with stored waypoints, but also with direct input (see video).
2. If I fly from the south to the waypoint NOPOL, the sim freezes as soon as I pass the coordinates 8959.9N/0000.0E (see screenshot).
3. When I enter the coordinates 8950.0N/0000.0E, the entry is automatically changed to 8949.10N/00000.0E (but maybe this is another bug).
https://user-images.githubusercontent.com/96742598/147495337-32ad208a-6ccd-46b9-8f2b-0ef01923612a.mp4

### Expected behavior
1.
- Entering the coordinates 9000.0N/0000.0E should have the same effect as entering the waypoint NOPOL.
- The sim should not freeze when entering the coordinates 9000.0N/0000.0E.
- The MCDU should accept e.g. the coordinates 8995.0N/0000.0E without an error message.
2. The simulator should not freeze when flying north of 8959.9N/0000.0E.
3. The coordinates 8950.0N/0000.0E should not be changed automatically, but should be kept exactly as entered.
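The coordinates above use the DDMM.M latitude format, which packs degrees and decimal minutes together (8959.9N = 89° 59.9′ N). A minimal Python sketch of such a parser — the function name and validation rules are illustrative assumptions, not the actual A32NX/MCDU implementation — shows why 9000.0N parses to exactly the pole, while 8950.0N is 89.8333°, not a value near 89.49°:

```python
def parse_lat(entry: str) -> float:
    """Parse a DDMM.M latitude like '8959.9N' into decimal degrees.

    Illustrative sketch only -- not the actual A32NX/MCDU parser.
    """
    hemi = entry[-1].upper()
    if hemi not in ("N", "S"):
        raise ValueError("latitude must end in N or S")
    digits = entry[:-1]
    degrees = int(digits[:2])     # 'DD' part
    minutes = float(digits[2:])   # 'MM.M' part
    if minutes >= 60:
        raise ValueError("minutes must be below 60")
    lat = degrees + minutes / 60.0
    if lat > 90.0:
        raise ValueError("latitude cannot exceed 90 degrees")
    return lat if hemi == "N" else -lat

# 8959.9N is just south of the pole; 9000.0N is exactly the pole.
print(parse_lat("8959.9N"))   # ~89.998333
print(parse_lat("9000.0N"))   # 90.0
print(parse_lat("8950.0N"))   # ~89.833333
```

Under the usual convention that minutes run 0–59.9, an entry like 8995.0N is not a well-formed DDMM.M value, which may explain why the MCDU's behavior around these inputs is inconsistent.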
### Steps to reproduce
FIRST PROBLEM:
1. Enter the departure and arrival airports.
2. Enter the coordinates 9000.0N/0000.0E.
3. Insert the coordinates after the departure airport.
4. -> The simulator freezes.
SECOND PROBLEM:
1. Spawn near the north pole.
2. Enter the waypoint NOPOL into the MCDU.
3. Fly towards the waypoint.
4. -> The simulator freezes exactly at coordinates 8959.9N/0000.0E.
THIRD PROBLEM:
1. Enter the coordinates 8950.0N/0000.0E as a stored waypoint.
2. Call up the stored waypoints.
3. -> The coordinates have been changed to 8949.10N/00000.0E.
### References (optional)
_No response_
### Additional info (optional)
_No response_
### Discord Username (optional)
Gamer_0999 | True | Coordinates north of 8959.9N/0000.0E cause the sim to freeze - ### Aircraft Version
Development
### Build info
```json
{
"built": "2021-12-25T19:32:18+00:00",
"ref": "master",
"sha": "ed16b41706c8f8a97144b3f22c3a0a28eb9eb327",
"actor": "aguther",
"event_name": "manual"
}
```
### Describe the bug
1. when I try to enter coordinates north of 8959.9N/0000.0E into the MCDU (e.g. 9000.0N/0000.0E), the simulation freezes. Or the MCDU gives me the feedback that it is an incorrect format.
It freezes with stored waypoints, but also with direct input (see video).
2. if I fly from the south to the waypoint NOPOL, the sim freezes as soon as I pass the coordinates 8959.9N/0000.0E (see screenshot).
3. when I enter the coordinates 8950.0N/0000.0E, the entry is automatically changed to 8949.10N/00000.0E (but maybe this is another bug)
https://user-images.githubusercontent.com/96742598/147495337-32ad208a-6ccd-46b9-8f2b-0ef01923612a.mp4

### Expected behavior
1.
- Entering the coordinates 9000.0N/0000.0E should have the same effect as entering the waypoint NOPOL.
- The sim should not freeze when entering the coordinates 9000.0N/0000.0E.
- the MCDU should accept e.g. the coordinates 8995.0N/0000.0E without error message.
2. the simulator should not freeze when flying north of 8959.9N/0000.0E.
3. the coordinates 8950.0N/0000.0E should not be changed automatically, but should be kept exactly as entered.
### Steps to reproduce
FIRST PROBLEM:
1. enter the departure and arrival airports
2. enter the coordinates 9000.0N/0000.0E
3. insert the coordinates after the airport of departure.
4. -> simulator freezes
SECOND PROBLEM:
1. spawn near the north pole
2. enter the waypoint nopol into the mcdu
3. fly towards the waypoint
4. -> simulator freezes exactly at coordinates 8959.9N/0000.0E
THIRD PROBLEM:
1. enter the coordinates 8950.0N/0000.0E as stored waypoint.
2. call up the stored waypoints
3. -> coordinates have been changed to 8949.10N/00000.0E
### References (optional)
_No response_
### Additional info (optional)
_No response_
### Discord Username (optional)
Gamer_0999 | non_process | coordinates north of cause the sim to freeze aircraft version development build info json built ref master sha actor aguther event name manual describe the bug when i try to enter coordinates north of into the mcdu e g the simulation freezes or the mcdu gives me the feedback that it is an incorrect format it freezes with stored waypoints but also with direct input see video if i fly from the south to the waypoint nopol the sim freezes as soon as i pass the coordinates see screenshot when i enter the coordinates the entry is automatically changed to but maybe this is another bug expected behavior entering the coordinates should have the same effect as entering the waypoint nopol the sim should not freeze when entering the coordinates the mcdu should accept e g the coordinates without error message the simulator should not freeze when flying north of the coordinates should not be changed automatically but should be kept exactly as entered steps to reproduce first problem enter the departure and arrival airports enter the coordinates insert the coordinates after the airport of departure simulator freezes second problem spawn near the north pole enter the waypoint nopol into the mcdu fly towards the waypoint simulator freezes exactly at coordinates third problem enter the coordinates as stored waypoint call up the stored waypoints coordinates have been changed to references optional no response additional info optional no response discord username optional gamer | 0 |
4,993 | 7,822,404,253 | IssuesEvent | 2018-06-14 02:16:43 | StrikeNP/trac_test | https://api.github.com/repos/StrikeNP/trac_test | closed | GABLS2 rtm, rtp2, thlm, and thlp2 are set to zero when plotgen is run manually (but not for the nightly tests) (Trac #24) | Migrated from Trac enhancement post_processing senkbeil@uwm.edu | Some time ago, in order to test CLUBB's scalars, Brandon changed plotgen so that it outputs scalars in place of rtm and thlm. The nightly plots work great.
However, if CLUBB is run manually without outputting scalars, and then plotgen is executed manually, then rtm, thlm, rtp2, and thlp2 are set to zero. For manual runs, typically we don't want to check scalars; we just want to plot standard versions of rtm, thlm, rtp2, and thlp2. I probably forgot to mention this earlier.
Is it feasible to insert some nightly flags or re-arrange some code so that the nightly plots test the scalars, but the manual plots simply plot rtm, thlm, rtp2, and thlp2? I believe that this is what is done for other specialized nightly tests, e.g. the restart test and some of the altered grid tests. Perhaps those pieces of code would provide ideas on how to implement separate behavior for nightly and manual runs.
However, we have a deadline on the TWP-ICE case, so don't bother with this until TWP-ICE is submitted, unless it is trivial to fix.
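The nightly/manual split suggested above amounts to a mode flag that selects which variable list gets plotted. A small Python sketch of that idea (illustrative only — plotgen's own language, flag names, and the scalar variable names here are assumptions, not the real tool):

```python
import argparse

# Standard variables for manual runs; scalar stand-ins for nightly tests.
STANDARD_VARS = ["rtm", "thlm", "rtp2", "thlp2"]
SCALAR_VARS = ["sclrm_1", "sclrm_2", "sclrp2_1", "sclrp2_2"]  # hypothetical names

def variables_to_plot(nightly: bool) -> list:
    """Pick the variable list for this run mode."""
    return SCALAR_VARS if nightly else STANDARD_VARS

parser = argparse.ArgumentParser(description="plotgen-style mode switch (sketch)")
parser.add_argument("--nightly", action="store_true",
                    help="plot scalar test variables instead of rtm/thlm/rtp2/thlp2")

# Manual run (no flag): standard variables.
args = parser.parse_args([])
print(variables_to_plot(args.nightly))   # ['rtm', 'thlm', 'rtp2', 'thlp2']

# Nightly run: scalars substituted in place of rtm/thlm.
args = parser.parse_args(["--nightly"])
print(variables_to_plot(args.nightly))   # ['sclrm_1', 'sclrm_2', 'sclrp2_1', 'sclrp2_2']
```

This mirrors how other specialized nightly tests (e.g. the restart test) keep one code path for the nightly suite and another for manual runs.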
Attachments:
[Plotgen.pdf](https://github.com/larson-group/trac_attachment_archive/blob/master/clubb/Plotgen.pdf)
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/24
```json
{
"status": "closed",
"changetime": "2009-09-02T20:37:37",
"description": "Some time ago, in order to test CLUBB's scalars, Brandon changed plotgen so that it outputs scalars in place of rtm and thlm. The nightly plots work great.\n\nHowever, if CLUBB is run manually without outputting scalars, and then plotgen is executed manually, then rtm, thlm, rtp2, and thlp2 are set to zero. For manual runs, typically we don't want to check scalars; we just want to plot standard versions of rtm, thlm, rtp2, and thlp2. I probably forgot to mention this earlier.\n\nIs it feasible to insert some nightly flags or re-arrange some code so that the nightly plots test the scalars, but the manual plots simply plot rtm, thlm, rtp2, and thlp2? I believe that this is what is done for other specialized nightly tests, e.g. the restart test and some of the altered grid tests. Perhaps those pieces of code would provide ideas on how to implement separate behavior for nightly and manual runs.\n\nHowever, we have a deadline on the TWP-ICE case, so don't bother with this until TWP-ICE is submitted, unless it is trivial to fix.",
"reporter": "vlarson@uwm.edu",
"cc": "",
"resolution": "Verified by V. Larson",
"_ts": "1251923857000000",
"component": "post_processing",
"summary": "GABLS2 rtm, rtp2, thlm, and thlp2 are set to zero when plotgen is run manually (but not for the nightly tests)",
"priority": "minor",
"keywords": "scalars, gabls2, nightly plots, rtm, thlm, rtp2, thlp2",
"time": "2009-05-13T14:26:46",
"milestone": "Plotgen 3.0",
"owner": "senkbeil@uwm.edu",
"type": "enhancement"
}
```
| 1.0 | GABLS2 rtm, rtp2, thlm, and thlp2 are set to zero when plotgen is run manually (but not for the nightly tests) (Trac #24) - Some time ago, in order to test CLUBB's scalars, Brandon changed plotgen so that it outputs scalars in place of rtm and thlm. The nightly plots work great.
However, if CLUBB is run manually without outputting scalars, and then plotgen is executed manually, then rtm, thlm, rtp2, and thlp2 are set to zero. For manual runs, typically we don't want to check scalars; we just want to plot standard versions of rtm, thlm, rtp2, and thlp2. I probably forgot to mention this earlier.
Is it feasible to insert some nightly flags or re-arrange some code so that the nightly plots test the scalars, but the manual plots simply plot rtm, thlm, rtp2, and thlp2? I believe that this is what is done for other specialized nightly tests, e.g. the restart test and some of the altered grid tests. Perhaps those pieces of code would provide ideas on how to implement separate behavior for nightly and manual runs.
However, we have a deadline on the TWP-ICE case, so don't bother with this until TWP-ICE is submitted, unless it is trivial to fix.
Attachments:
[Plotgen.pdf](https://github.com/larson-group/trac_attachment_archive/blob/master/clubb/Plotgen.pdf)
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/24
```json
{
"status": "closed",
"changetime": "2009-09-02T20:37:37",
"description": "Some time ago, in order to test CLUBB's scalars, Brandon changed plotgen so that it outputs scalars in place of rtm and thlm. The nightly plots work great.\n\nHowever, if CLUBB is run manually without outputting scalars, and then plotgen is executed manually, then rtm, thlm, rtp2, and thlp2 are set to zero. For manual runs, typically we don't want to check scalars; we just want to plot standard versions of rtm, thlm, rtp2, and thlp2. I probably forgot to mention this earlier.\n\nIs it feasible to insert some nightly flags or re-arrange some code so that the nightly plots test the scalars, but the manual plots simply plot rtm, thlm, rtp2, and thlp2? I believe that this is what is done for other specialized nightly tests, e.g. the restart test and some of the altered grid tests. Perhaps those pieces of code would provide ideas on how to implement separate behavior for nightly and manual runs.\n\nHowever, we have a deadline on the TWP-ICE case, so don't bother with this until TWP-ICE is submitted, unless it is trivial to fix.",
"reporter": "vlarson@uwm.edu",
"cc": "",
"resolution": "Verified by V. Larson",
"_ts": "1251923857000000",
"component": "post_processing",
"summary": "GABLS2 rtm, rtp2, thlm, and thlp2 are set to zero when plotgen is run manually (but not for the nightly tests)",
"priority": "minor",
"keywords": "scalars, gabls2, nightly plots, rtm, thlm, rtp2, thlp2",
"time": "2009-05-13T14:26:46",
"milestone": "Plotgen 3.0",
"owner": "senkbeil@uwm.edu",
"type": "enhancement"
}
```
| process | rtm thlm and are set to zero when plotgen is run manually but not for the nightly tests trac some time ago in order to test clubb s scalars brandon changed plotgen so that it outputs scalars in place of rtm and thlm the nightly plots work great however if clubb is run manually without outputting scalars and then plotgen is executed manually then rtm thlm and are set to zero for manual runs typically we don t want to check scalars we just want to plot standard versions of rtm thlm and i probably forgot to mention this earlier is it feasible to insert some nightly flags or re arrange some code so that the nightly plots test the scalars but the manual plots simply plot rtm thlm and i believe that this is what is done for other specialized nightly tests e g the restart test and some of the altered grid tests perhaps those pieces of code would provide ideas on how to implement separate behavior for nightly and manual runs however we have a deadline on the twp ice case so don t bother with this until twp ice is submitted unless it is trivial to fix attachments migrated from json status closed changetime description some time ago in order to test clubb s scalars brandon changed plotgen so that it outputs scalars in place of rtm and thlm the nightly plots work great n nhowever if clubb is run manually without outputting scalars and then plotgen is executed manually then rtm thlm and are set to zero for manual runs typically we don t want to check scalars we just want to plot standard versions of rtm thlm and i probably forgot to mention this earlier n nis it feasible to insert some nightly flags or re arrange some code so that the nightly plots test the scalars but the manual plots simply plot rtm thlm and i believe that this is what is done for other specialized nightly tests e g the restart test and some of the altered grid tests perhaps those pieces of code would provide ideas on how to implement separate behavior for nightly and manual runs n nhowever we 
have a deadline on the twp ice case so don t bother with this until twp ice is submitted unless it is trivial to fix reporter vlarson uwm edu cc resolution verified by v larson ts component post processing summary rtm thlm and are set to zero when plotgen is run manually but not for the nightly tests priority minor keywords scalars nightly plots rtm thlm time milestone plotgen owner senkbeil uwm edu type enhancement | 1 |
21,397 | 29,202,232,924 | IssuesEvent | 2023-05-21 00:37:51 | devssa/onde-codar-em-salvador | https://api.github.com/repos/devssa/onde-codar-em-salvador | closed | [Remoto] Tech Lead Python/Django (Pleno) na Coodesh | SALVADOR PJ PYTHON PLENO REST TYPESCRIPT POSTGRESQL DOCKER DJANGO REACT MOBILE REQUISITOS LINUX REMOTO GITHUB UMA C QUALIDADE APIs PADRÕ GEOPROCESSAMENTO NEGÓCIOS TECH LEAD ARQUITETURA DE SOFTWARE COLETA DE DADOS POSTGIS GEOSERVER DASHBOARD Stale | ## Job description:
This is a job opening from a partner of the Coodesh platform; by applying you will have access to complete information about the company and its benefits.
Watch for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/vagas/tech-lead-pythondjango-pleno-134546300?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋
<p><strong>Geobyte</strong> is looking for a <strong><ins>Tech Lead Python/Django</ins></strong> to join its team!</p>
<p><strong>Who we are:</strong></p>
<p>We are a software development company focused on environmental solutions using geoprocessing. Our main clients are NGOs and environmental consultancies. We always work together with our partners, establishing transparent communication to deliver the best results through effort and dedication.</p>
<p><strong>About the opportunity</strong></p>
<p>The technical lead works together with the development team and the business team to apply scalable, reliable technical solutions suited to the products. This requires technical knowledge, decision-making ability, responsibility, creativity, and strategic vision.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Determine technical requirements together with the business team;</li>
<li>Present and document technical solutions, assessing their impact on the project and the effort and resources required;</li>
<li>Interpret business problems and recommend technical solutions and best practices;</li>
<li>Lead the development team, acting as a mentor;</li>
<li>Establish and maintain high-quality software engineering and architecture standards appropriate to the products;</li>
<li>Keep the development team engaged with the technologies used in our projects;</li>
<li>Stay in constant contact with the business and development teams, maintaining clear communication and ensuring that all technical information is passed on correctly to the developers;</li>
<li>Act as a reference, guiding developers so that the most appropriate practices are applied for the scope of the project;</li>
<li>Review code before promoting it to the staging environment;</li>
<li>Deploy to the staging and production environments.</li>
</ul>
## Geobyte:
<p>Geobyte is a company specializing in technology, the environment, and geoprocessing that seeks, through its expertise, to add value to its clients' solutions. Besides mastering the technologies needed to build the solutions, we have broad knowledge of many environmental and geoprocessing domains, which helps our clients find the best alternatives for their projects.</p>
<p>We have delivered projects in several environment-related segments, such as land cover and land use analysis, social analysis and mapping, various WebGIS systems, a mobile app for offline field data collection that later feeds the web system, custom reports and dashboards, and spatial analyses and filters. We work with private companies, including environmental consultancies and mining firms; with the public sector, on projects at the environmental departments of Minas Gerais and Espírito Santo; and with the third sector, in NGOs and environmental observatories.</p><a href='https://coodesh.com/empresas/geobyte'>See more on the website</a>
## Skills:
- React Native
- Typescript
- REST APIs
## Location:
100% Remote
## Requirements:
- Solid experience with Python;
- Solid experience with Django;
- Django Rest Framework;
- PostgreSQL (postgis);
- Docker;
- Linux.
## Nice to have:
- Familiarity with geoprocessing in Django (GeoDjango);
- Familiarity with GeoServer.
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [Tech Lead Python/Django (Pleno) na Geobyte](https://coodesh.com/vagas/tech-lead-pythondjango-pleno-134546300?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you can follow the process and receive all of its interactions there. Use the **Pedir Feedback** (request feedback) option between one stage and the next of the job you applied to. This notifies the **Recruiter** responsible for the process at the company.
## Labels
#### Allocation
Remote
#### Regime
PJ
#### Category
IT Management | 1.0 | [Remoto] Tech Lead Python/Django (Pleno) na Coodesh - ## Job description:
This is a job opening from a partner of the Coodesh platform; by applying you will have access to complete information about the company and its benefits.
Watch for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/vagas/tech-lead-pythondjango-pleno-134546300?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋
<p><strong>Geobyte</strong> is looking for a <strong><ins>Tech Lead Python/Django</ins></strong> to join its team!</p>
<p><strong>Who we are:</strong></p>
<p>We are a software development company focused on environmental solutions using geoprocessing. Our main clients are NGOs and environmental consultancies. We always work together with our partners, establishing transparent communication to deliver the best results through effort and dedication.</p>
<p><strong>About the opportunity</strong></p>
<p>The technical lead works together with the development team and the business team to apply scalable, reliable technical solutions suited to the products. This requires technical knowledge, decision-making ability, responsibility, creativity, and strategic vision.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Determine technical requirements together with the business team;</li>
<li>Present and document technical solutions, assessing their impact on the project and the effort and resources required;</li>
<li>Interpret business problems and recommend technical solutions and best practices;</li>
<li>Lead the development team, acting as a mentor;</li>
<li>Establish and maintain high-quality software engineering and architecture standards appropriate to the products;</li>
<li>Keep the development team engaged with the technologies used in our projects;</li>
<li>Stay in constant contact with the business and development teams, maintaining clear communication and ensuring that all technical information is passed on correctly to the developers;</li>
<li>Act as a reference, guiding developers so that the most appropriate practices are applied for the scope of the project;</li>
<li>Review code before promoting it to the staging environment;</li>
<li>Deploy to the staging and production environments.</li>
</ul>
## Geobyte:
<p>Geobyte is a company specializing in technology, the environment, and geoprocessing that seeks, through its expertise, to add value to its clients' solutions. Besides mastering the technologies needed to build the solutions, we have broad knowledge of many environmental and geoprocessing domains, which helps our clients find the best alternatives for their projects.</p>
<p>We have delivered projects in several environment-related segments, such as land cover and land use analysis, social analysis and mapping, various WebGIS systems, a mobile app for offline field data collection that later feeds the web system, custom reports and dashboards, and spatial analyses and filters. We work with private companies, including environmental consultancies and mining firms; with the public sector, on projects at the environmental departments of Minas Gerais and Espírito Santo; and with the third sector, in NGOs and environmental observatories.</p><a href='https://coodesh.com/empresas/geobyte'>See more on the website</a>
## Skills:
- React Native
- Typescript
- REST APIs
## Location:
100% Remote
## Requirements:
- Solid experience with Python;
- Solid experience with Django;
- Django Rest Framework;
- PostgreSQL (postgis);
- Docker;
- Linux.
## Nice to have:
- Familiarity with geoprocessing in Django (GeoDjango);
- Familiarity with GeoServer.
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [Tech Lead Python/Django (Pleno) na Geobyte](https://coodesh.com/vagas/tech-lead-pythondjango-pleno-134546300?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you can follow the process and receive all of its interactions there. Use the **Pedir Feedback** (request feedback) option between one stage and the next of the job you applied to. This notifies the **Recruiter** responsible for the process at the company.
## Labels
#### Allocation
Remote
#### Regime
PJ
#### Category
Gestão em TI | process | tech lead python django pleno na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a geobyte está em busca de tech lead python django para compor seu time quem somos nós somos uma empresa de desenvolvimento de software focada em soluções ambientais utilizando geoprocessamento nossos principais clientes são ong’s e consultorias ambientais trabalhamos sempre em conjunto com nossos parceiros estabelecendo uma comunicação transparente para entregar os melhores resultados através de muito esforço e dedicação sobre a oportunidade o líder técnico deve trabalhar em conjunto com a equipe de desenvolvimento e com a equipe de negócios para aplicar soluções técnicas escaláveis confiáveis e adequadas aos produtos para isso é necessário que o profissional tenha o conhecimento técnico capacidade de decisão seja responsável criativo e tenha visão estratégica responsabilidades determinar os requisitos técnicos junto a equipe de negócios apresentar e documentar as soluções técnicas avaliando seus impactos no projeto esforço e recursos necessários interpretar problemas de negócios e recomendar soluções técnicas e melhores práticas liderar a equipe de desenvolvimento atuando como mentor estabelecer e manter padrões de alta qualidade de engenharia e arquitetura de software que se aplicam adequadamente aos produtos manter a equipe de desenvolvimento envolvida com as tecnologias utilizadas em nossos projetos estar em constante contato com a equipe de negócios e com a equipe de desenvolvimento mantendo uma comunicação clara garantindo que todas as informações técnicas serão repassadas corretamente para os desenvolvedores atuar como referência orientando os desenvolvedores para que sejam aplicadas as práticas mais adequadas de acordo com o 
escopo do projeto revisar os códigos antes de passá los para o ambiente de homologação realizar deploy nos ambientes de homologação e produção geobyte a geobyte é uma empresa especializada em tecnologia meio ambiente e geoprocessamento que busca por meio do seu conhecimento agregar valor às soluções dos clientes além de dominar as tecnologias necessárias para desenvolver as soluções possuímos amplo conhecimento em diversas áreas do meio ambiente e geoprocessamento que auxilia seus clientes a encontrar as melhores alternativas para seu projeto nbsp possuímos projetos elaborados em diversos segmentos relacionados ao meio ambiente como análise de cobertura e uso do solo análise e mapeamento social criação de diversos sistemas webgis aplicativo mobile para coleta de dados off line em campo e posterior alimentação do sistema web geração de relatórios e dashboard personalidades além de análises e filtros espaciais trabalhamos com empresas privadas em consultorias ambientais mineradores e outros setor público com projetos nas secretarias de meio ambiente de minas gerais e espírito santo terceiro setor em ongs e observatórios ambientais habilidades react native typescript rest apis local remoto requisitos experiência sólida com python experiência sólida com django django rest framework postgresql postgis docker linux diferenciais conhecer geoprocessamento com django geodjango conhecer geoserver como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação remoto regime pj categoria gestão em ti | 1 |
57,222 | 11,728,130,266 | IssuesEvent | 2020-03-10 17:02:09 | phetsims/aqua | https://api.github.com/repos/phetsims/aqua | opened | documentation issues in continuous-server.js | dev:code-review | I was trying to make changes to continuous-server.js for #84. I ran into problems with some missing documentation, so I did a general review. Overall the doc is good, but there are some things missing. @jonathanolson please address these documentation issues:
- [ ] Missing `@param` doc for `testLintEverything`:
```js
// Kicks off linting of everything
function testLintEverything( snapshot, callback ) {
```
- [ ] No doc for `copyReposToSnapshot`:
```js
function copyReposToSnapshot( repos, snapshotName, callback, errorCallback ) {
```
- [ ] No doc for the object literals that are pushed onto `testQueue` in `createSnapshot`. Is it `{Test[]}`?
```js
testQueue: [], // Filled in later, and can be appended to
```
- [ ] Missing `@param` doc for `createServer`:
```js
// Main server creation
http.createServer( function( req, res ) {
```
- [ ] Incorrect param doc for `randomBrowserTest`:
```js
/**
* Sends a random browser test (from those with the lowest count) to the ServerResponse.
* @private
*
* @param {ServerResponse} res
* @param {boolean} - Whether
*/
function randomBrowserTest( res, isOld ) {
```
- [ ] Suggested to put a comment above the 8 calls to `buildLoop()` at the end of the file, something like:
`// Call buildLoop once for each build "thread".`
Something like this might even be nicer:
```js
// Start build "threads"
const numberOfBuildThreads = 8;
for ( let i = 0; i < numberOfBuildThreads; i++ ) {
buildLoop();
}
``` | 1.0 | documentation issues in continuous-server.js - I was trying to make changes to continuous-server.js for #84. I ran into problems with some missing documentation, so I did a general review. Overall the doc is good, but there are some things missing. @jonathanolson please address these documentation issues:
- [ ] Missing `@param` doc for `testLintEverything`:
```js
// Kicks off linting of everything
function testLintEverything( snapshot, callback ) {
```
- [ ] No doc for `copyReposToSnapshot`:
```js
function copyReposToSnapshot( repos, snapshotName, callback, errorCallback ) {
```
- [ ] No doc for the object literals that are pushed onto `testQueue` in `createSnapshot`. Is it `{Test[]}`?
```js
testQueue: [], // Filled in later, and can be appended to
```
- [ ] Missing `@param` doc for `createServer`:
```js
// Main server creation
http.createServer( function( req, res ) {
```
- [ ] Incorrect param doc for `randomBrowserTest`:
```js
/**
* Sends a random browser test (from those with the lowest count) to the ServerResponse.
* @private
*
* @param {ServerResponse} res
* @param {boolean} - Whether
*/
function randomBrowserTest( res, isOld ) {
```
- [ ] Suggested to put a comment above the 8 calls to `buildLoop()` at the end of the file, something like:
`// Call buildLoop once for each build "thread".`
Something like this might even be nicer:
```js
// Start build "threads"
const numberOfBuildThreads = 8;
for ( let i = 0; i < numberOfBuildThreads; i++ ) {
buildLoop();
}
``` | non_process | documentation issues in continuous server js i was trying to make changes to continuous server js for i ran into problems with some missing documentation so i did a general review overall the doc is good but there are some things missing jonathanolson please address these documentation issues missing param doc for testlinteverything js kicks off linting of everything function testlinteverything snapshot callback no doc for copyrepostosnapshot js function copyrepostosnapshot repos snapshotname callback errorcallback no doc for the object literals that are pushed onto testqueue in createsnapshot is it test js testqueue filled in later and can be appended to missing param doc for createserver js main server creation http createserver function req res incorrect param doc for randombrowsertest js sends a random browser test from those with the lowest count to the serverresponse private param serverresponse res param boolean whether function randombrowsertest res isold suggested to put a comment above the calls to buildloop at the end of the file something like call buildloop once for each build thread something like this might even be nicer js start build threads const numberofbuildthreads for let i i numberofbuildthreads i buildloop | 0
5,922 | 8,743,015,824 | IssuesEvent | 2018-12-12 17:55:27 | kerubistan/kerub | https://api.github.com/repos/kerubistan/kerub | closed | create a problem if the CPU temperature is too high | component:data processing | In the planner too high CPU temperature should be registered as a problem. Lowering any kind of load (like migrating a cpu-intensive VM elsewhere, or shutting it down) should offer to drop that temperature. | 1.0 | create a problem if the CPU temperature is too high - In the planner too high CPU temperature should be registered as a problem. Lowering any kind of load (like migrating a cpu-intensive VM elsewhere, or shutting it down) should offer to drop that temperature. | process | create a problem if the cpu temperature is too high in the planner too high cpu temperature should be registered as a problem lowering any kind of load like migrating a cpu intensive vm elsewhere or shutting it down should offer to drop that temperature | 1 |
1,107 | 3,587,693,487 | IssuesEvent | 2016-01-30 14:03:53 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | process.env.<something> = null unexpected results | process | The following produce unexpected results:
```js
process.env.THING = null
process.env.THING // => 'null'
process.env.THING = undefined
process.env.THING // => 'undefined'
```
Should there be better docs on setting process.env to null/undefined? | 1.0 | process.env.<something> = null unexpected results - The following produce unexpected results:
```js
process.env.THING = null
process.env.THING // => 'null'
process.env.THING = undefined
process.env.THING // => 'undefined'
```
Should there be better docs on setting process.env to null/undefined? | process | process env null unexpected results the following produce unexpected results js process env thing null process env thing null process env thing undefined process env thing undefined should there be better docs on setting process env to null undefined | 1 |
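For context on the row above: Node coerces any value assigned to `process.env` to a string, which is why `null` round-trips as the string `'null'`. A minimal sketch (standard Node `process.env` behavior only, no other assumptions) showing the coercion and `delete` as the way to actually unset a variable:

```javascript
// Assigning to process.env coerces the value to a string.
process.env.THING = null;
console.log(typeof process.env.THING); // "string"
console.log(process.env.THING); // "null"

// delete is how an environment variable is actually unset.
delete process.env.THING;
console.log(process.env.THING); // undefined
```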
218,316 | 16,759,114,475 | IssuesEvent | 2021-06-13 12:20:28 | bounswe/2021SpringGroup7 | https://api.github.com/repos/bounswe/2021SpringGroup7 | opened | Write Individual Report for Onur Can Avci | Priority: Medium Status: To Do Type: Documentation | I will write my individual report which includes my contributions to practice-app | 1.0 | Write Individual Report for Onur Can Avci - I will write my individual report which includes my contributions to practice-app | non_process | write individual report for onur can avci i will write my individual report which includes my contributions to practice app | 0 |
365,732 | 25,549,699,932 | IssuesEvent | 2022-11-29 22:13:50 | todogroup/osposurvey | https://api.github.com/repos/todogroup/osposurvey | closed | OSPO Survey 2022 pdf page 24 missing legend | documentation community feedback | It seems that on the OSPO Survey 2022 pdf version (https://github.com/todogroup/todogroup.org/files/9557802/OSPOSurveyResults_2022.pdf) page 24 "Consensus about top OSPO responsibilities is greater among North American organizations" the graph is missing the legend. Right side there is a graph that has 13 categories and for each there are 4 pillars (%) and these pillars do not have any legend.
These pillars are likely regions like they are on page 30:
- Total
- USA & Canada
- Europe
- Asia-Pacific (including Oceania)
But the pillars could be something else, e.g., organization size.
| 1.0 | OSPO Survey 2022 pdf page 24 missing legend - It seems that on the OSPO Survey 2022 pdf version (https://github.com/todogroup/todogroup.org/files/9557802/OSPOSurveyResults_2022.pdf) page 24 "Consensus about top OSPO responsibilities is greater among North American organizations" the graph is missing the legend. Right side there is a graph that has 13 categories and for each there are 4 pillars (%) and these pillars do not have any legend.
These pillars are likely regions like they are on page 30:
- Total
- USA & Canada
- Europe
- Asia-Pacific (including Oceania)
But the pillars could be something else, e.g., organization size.
| non_process | ospo survey pdf page missing legend it seems that on the ospo survey pdf version page consensus about top ospo responsibilities is greater among north american organizations the graph is missing the legend right side there is a graph that has categories and for each there are pillars and these pillars do not have any legend these pillars are likely regions like they are on page total usa canada europe asia pacific including oceania but the pillars could be something else e g organization size | 0 |
794,075 | 28,021,451,095 | IssuesEvent | 2023-03-28 05:53:43 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB][Packed Columns] Corruption (yb/master/sys_catalog_writer.cc:219): Unable to initialize catalog manager: Failed to initialize sys tables async: Failed log replay. Reason: System catalog snapshot is corrupted or built using different build type: Fou | kind/bug area/docdb priority/medium area/ycql QA long_running_universe | Jira Link: [DB-3792](https://yugabyte.atlassian.net/browse/DB-3792)
### Description
During an upgrade of my [puppy-food-arm-1 universe](https://portal.dev.yugabyte.com/universes/25bb1fc6-d74c-41ec-afa0-b0b331bdecc8/) from 2.15.4.0-b54 to 2.15.4.0-b72 the master server fails to come up after upgrade:
```
[yugabyte@ip logs]$ cat yb-master.FATAL.details.2022-10-07T07_13_16.pid3668968.txt
F20221007 07:13:16 ../../src/yb/master/master_main.cc:136] Corruption (yb/master/sys_catalog_writer.cc:219): Unable to initialize catalog manager: Failed to initialize sys tables async: Failed log replay. Reason: System catalog snapshot is corrupted or built using different build type: Found wrong metadata type: 0 vs 10
@ 0xffff7ca59d7c (unknown)
@ 0xffff7ca54874 (unknown)
@ 0xffff7ca551ac (unknown)
@ 0xffff7ca57938 (unknown)
@ 0x21429c main
@ 0xffff7b6e0de4 __libc_start_main
@ 0x213784 (unknown)
```
It keeps failing like this every minute when trying to start master again.
The upgrade failure is also about this master server:
```
Failed to execute task {"sleepAfterMasterRestartMillis":180000,"sleepAfterTServerRestartMillis":180000,"nodeExporterUser":"prometheus","universeUUID":"25bb1fc6-d74c-41ec-afa0-b0b331bdecc8","enableYbc":false,"installYbc":false,"ybcInstalled":false,"encryptionAtRestConfig":{"encryptionAtRestEnabled":false,"opType":"UNDEFINED","type":"DATA_KEY"},"communicationPorts":{"masterHttpPort":7000,"masterRpcPort":7100,"tserverHttpPort":9000,"tserverRpcPort":9100,"ybControllerHttpPort":14000,"ybControllerrRpcPort":18018,"redisS..., hit error:
WaitForServer(25bb1fc6-d74c-41ec-afa0-b0b331bdecc8, yb-15-puppy-food-arm-1-n1, type=MASTER) did not respond in the set time..
```
This is with packed columns enabled on YSQL and YCQL, tserver and master.
I will leave the universe in the current state for further analysis; tell me when I can destroy and recreate it.
### Description
During an upgrade of my [puppy-food-arm-1 universe](https://portal.dev.yugabyte.com/universes/25bb1fc6-d74c-41ec-afa0-b0b331bdecc8/) from 2.15.4.0-b54 to 2.15.4.0-b72 the master server fails to come up after upgrade:
```
[yugabyte@ip logs]$ cat yb-master.FATAL.details.2022-10-07T07_13_16.pid3668968.txt
F20221007 07:13:16 ../../src/yb/master/master_main.cc:136] Corruption (yb/master/sys_catalog_writer.cc:219): Unable to initialize catalog manager: Failed to initialize sys tables async: Failed log replay. Reason: System catalog snapshot is corrupted or built using different build type: Found wrong metadata type: 0 vs 10
@ 0xffff7ca59d7c (unknown)
@ 0xffff7ca54874 (unknown)
@ 0xffff7ca551ac (unknown)
@ 0xffff7ca57938 (unknown)
@ 0x21429c main
@ 0xffff7b6e0de4 __libc_start_main
@ 0x213784 (unknown)
```
It keeps failing like this every minute when trying to start master again.
The upgrade failure is also about this master server:
```
Failed to execute task {"sleepAfterMasterRestartMillis":180000,"sleepAfterTServerRestartMillis":180000,"nodeExporterUser":"prometheus","universeUUID":"25bb1fc6-d74c-41ec-afa0-b0b331bdecc8","enableYbc":false,"installYbc":false,"ybcInstalled":false,"encryptionAtRestConfig":{"encryptionAtRestEnabled":false,"opType":"UNDEFINED","type":"DATA_KEY"},"communicationPorts":{"masterHttpPort":7000,"masterRpcPort":7100,"tserverHttpPort":9000,"tserverRpcPort":9100,"ybControllerHttpPort":14000,"ybControllerrRpcPort":18018,"redisS..., hit error:
WaitForServer(25bb1fc6-d74c-41ec-afa0-b0b331bdecc8, yb-15-puppy-food-arm-1-n1, type=MASTER) did not respond in the set time..
```
This is with packed columns enabled on YSQL and YCQL, tserver and master.
I will leave the universe in the current state for further analysis; tell me when I can destroy and recreate it. | non_process | corruption yb master sys catalog writer cc unable to initialize catalog manager failed to initialize sys tables async failed log replay reason system catalog snapshot is corrupted or built using different build type fou jira link description during an upgrade of my from to the master server fails to come up after upgrade cat yb master fatal details txt src yb master master main cc corruption yb master sys catalog writer cc unable to initialize catalog manager failed to initialize sys tables async failed log replay reason system catalog snapshot is corrupted or built using different build type found wrong metadata type vs unknown unknown unknown unknown main libc start main unknown it keeps failing like this every minute when trying to start master again the upgrade failure is also about this master server failed to execute task sleepaftermasterrestartmillis sleepaftertserverrestartmillis nodeexporteruser prometheus universeuuid enableybc false installybc false ybcinstalled false encryptionatrestconfig encryptionatrestenabled false optype undefined type data key communicationports masterhttpport masterrpcport tserverhttpport tserverrpcport ybcontrollerhttpport ybcontrollerrrpcport rediss hit error waitforserver yb puppy food arm type master did not respond in the set time this is with packed columns enabled on ysql and ycql tserver and master i will leave the universe in the current state for further analysis tell me when i can destroy and recreate it | 0
39,453 | 6,743,673,948 | IssuesEvent | 2017-10-20 13:05:20 | hassio-addons/addon-terminal | https://api.github.com/repos/hassio-addons/addon-terminal | closed | ERR: lws_context_init_server_ssl: SSL_CTX_load_verify_locations unhappy | Accepted Beginner Friendly Documentation Hacktoberfest Medium Priority RFC | ## Problem/Motivation
The following error is displayed in the logs of the add-on:
> ERR: lws_context_init_server_ssl: SSL_CTX_load_verify_locations unhappy
The add-on, however, functions perfectly.
## Proposed changes
Find out if this error can be fixed or suppressed.
At least add it to the documentation as a known error.
| 1.0 | ERR: lws_context_init_server_ssl: SSL_CTX_load_verify_locations unhappy - ## Problem/Motivation
The following error is displayed in the logs of the add-on:
> ERR: lws_context_init_server_ssl: SSL_CTX_load_verify_locations unhappy
The add-on, however, functions perfectly.
## Proposed changes
Find out if this error can be fixed or suppressed.
At least add it to the documentation as a known error.
| non_process | err lws context init server ssl ssl ctx load verify locations unhappy problem motivation the following error is displayed in the logs of the add on err lws context init server ssl ssl ctx load verify locations unhappy the add on however functions perfectly proposed changes find out of this error can be fixed or suppressed at least add it to the documentation as a known error | 0 |
16,158 | 20,594,280,467 | IssuesEvent | 2022-03-05 08:21:27 | tushushu/ulist | https://api.github.com/repos/tushushu/ulist | closed | Implement IndexList | enhancement data processing | The IndexList can slice the ulist array by its indexes. For example:
```Python
import ulist as ul
idx = IndexList([0, 2, 4])
arr = ul.from_seq([1, 2, 3, 5, 7, 11])
arr[idx]
[1, 3, 7]
``` | 1.0 | Implement IndexList - The IndexList can slice the ulist array by its indexes. For example:
```Python
import ulist as ul
idx = IndexList([0, 2, 4])
arr = ul.from_seq([1, 2, 3, 5, 7, 11])
arr[idx]
[1, 3, 7]
``` | process | implement indexlist the indexlist can slice the ulist array by its indexes for example python import ulist as ul idx indexlist arr ul from seq arr | 1 |
73,967 | 14,149,888,482 | IssuesEvent | 2020-11-11 02:00:27 | atomist/antlr | https://api.github.com/repos/atomist/antlr | closed | Code Inspection: Tslint on no-ac | code-inspection enhancement stale | ### array-type
- [`lib/tree/ast/antlr/kotlin/antlr-gen/KotlinLexer.ts:273`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/kotlin/antlr-gen/KotlinLexer.ts#L273): _(warn)_ Array type using 'T[]' is forbidden for non-simple types. Use 'Array<T>' instead.
- [`lib/tree/ast/antlr/kotlin/antlr-gen/KotlinLexer.ts:295`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/kotlin/antlr-gen/KotlinLexer.ts#L295): _(warn)_ Array type using 'T[]' is forbidden for non-simple types. Use 'Array<T>' instead.
- [`lib/tree/ast/antlr/kotlin/antlr-gen/KotlinParser.ts:6416`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/kotlin/antlr-gen/KotlinParser.ts#L6416): _(warn)_ Array type using 'T[]' is forbidden for non-simple types. Use 'Array<T>' instead.
- [`lib/tree/ast/antlr/kotlin/antlr-gen/KotlinParser.ts:6438`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/kotlin/antlr-gen/KotlinParser.ts#L6438): _(warn)_ Array type using 'T[]' is forbidden for non-simple types. Use 'Array<T>' instead.
### class-name
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:15962`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L15962): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:16007`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L16007): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:16085`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L16085): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:16118`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L16118): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:18279`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L18279): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:18324`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L18324): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:18393`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L18393): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:18426`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L18426): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:18996`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L18996): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:22476`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L22476): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:22812`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L22812): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:22842`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L22842): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:22893`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L22893): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:22938`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L22938): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:22968`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L22968): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23010`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23010): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23067`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23067): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23097`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23097): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23253`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23253): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23307`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23307): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23442`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23442): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23475`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23475): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23565`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23565): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23616`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23616): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23721`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23721): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23760`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23760): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23901`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23901): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23937`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23937): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:25116`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L25116): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:25179`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L25179): _(warn)_ Class name must be in pascal case
### comment-format
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Lexer.ts:250`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Lexer.ts#L250): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Lexer.ts:253`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Lexer.ts#L253): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Lexer.ts:261`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Lexer.ts#L261): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Lexer.ts:264`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Lexer.ts#L264): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:17`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L17): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:526`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L526): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:562`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L562): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:629`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L629): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:674`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L674): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:710`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L710): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:746`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L746): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:791`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L791): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:859`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L859): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:950`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L950): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1002`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1002): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1052`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1052): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1077`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1077): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1102`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1102): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1127`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1127): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1167`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1167): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1218`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1218): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1293`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1293): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1343`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1343): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1368`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1368): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1425`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1425): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1452`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1452): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1481`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1481): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1523`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1523): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1561`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1561): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1611`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1611): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1658`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1658): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1726`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1726): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1791`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1791): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1836`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1836): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1901`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1901): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1943`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1943): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1971`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1971): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2036`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2036): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2074`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2074): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2138`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2138): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2178`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2178): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2222`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2222): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2247`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2247): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2299`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2299): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2328`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2328): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2361`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2361): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2396`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2396): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2431`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2431): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2476`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2476): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2546`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2546): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2718`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2718): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2754`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2754): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2792`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2792): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2866`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2866): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2948`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2948): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2977`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2977): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3019`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3019): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3046`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3046): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3073`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3073): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3115`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3115): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3157`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3157): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3209`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3209): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3268`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3268): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3312`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3312): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3394`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3394): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3436`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3436): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3474`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3474): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3510`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3510): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3585`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3585): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3623`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3623): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3669`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3669): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3714`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3714): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3782`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3782): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3859`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3859): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3911`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3911): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3947`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3947): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3972`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3972): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3997`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3997): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4022`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4022): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4047`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4047): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4098`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4098): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4140`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4140): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4236`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4236): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4335`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4335): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4392`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4392): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4442`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4442): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4491`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4491): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4566`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4566): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4608`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4608): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4648`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4648): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4719`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4719): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4773`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4773): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4800`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4800): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4842`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4842): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4880`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4880): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4920`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4920): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4945`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4945): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4972`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4972): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5024`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5024): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5078`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5078): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5128`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5128): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5153`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5153): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5201`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5201): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5366`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5366): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5420`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5420): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5478`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5478): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5522`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5522): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5594`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5594): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5619`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5619): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5659`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5659): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5697`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5697): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5761`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5761): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5836`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5836): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5863`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5863): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5905`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5905): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5964`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5964): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6008`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6008): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6062`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6062): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6104`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6104): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6179`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6179): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6227`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6227): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6269`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6269): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6328`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6328): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6396`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6396): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6443`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6443): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6470`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6470): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6515`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6515): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6557`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6557): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6599`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6599): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6628`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6628): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6673`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6673): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6721`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6721): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6765`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6765): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6792`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6792): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6825`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6825): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6873`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6873): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6917`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6917): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6955`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6955): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6993`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6993): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7038`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7038): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7065`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7065): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7107`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7107): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7173`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7173): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7232`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7232): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7373`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7373): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7398`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7398): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7427`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7427): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7456`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7456): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7483`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7483): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7556`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7556): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7589`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7589): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7626`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7626): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7663`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7663): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7713`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7713): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7746`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7746): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7805`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7805): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7832`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7832): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7870`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7870): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7925`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7925): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7950`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7950): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7983`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7983): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8016`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8016): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8053`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8053): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8091`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8091): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8129`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8129): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8195`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8195): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8261`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8261): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8299`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8299): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8324`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8324): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8366`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8366): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8420`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8420): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8474`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8474): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8512`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8512): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8550`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8550): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8588`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8588): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8617`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8617): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8650`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8650): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8714`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8714): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8752`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8752): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8785`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8785): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8827`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8827): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8869`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8869): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8896`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8896): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8946`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8946): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8986`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8986): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9030`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9030): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9089`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9089): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9127`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9127): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9183`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9183): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9285`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9285): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9309`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9309): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9436`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9436): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9495`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9495): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9519`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9519): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9571`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9571): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9732`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9732): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9756`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9756): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9910`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9910): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10029`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10029): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10293`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10293): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10381`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10381): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10585`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10585): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10625`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10625): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10686`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10686): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10713`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10713): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10774`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10774): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10846`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10846): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10902`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10902): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10976`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10976): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:11201`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L11201): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:11253`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L11253): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:11443`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L11443): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:11485`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L11485): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:11651`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L11651): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:11689`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L11689): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:11834`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L11834): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:11926`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L11926): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:11970`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L11970): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12014`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12014): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12039`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12039): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12077`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12077): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12106`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12106): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12168`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12168): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12210`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12210): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12285`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12285): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12323`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12323): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12352`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12352): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12397`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12397): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12433`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12433): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12496`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12496): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12564`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12564): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12632`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12632): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12700`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12700): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12768`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12768): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12836`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12836): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12924`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12924): _(warn)_ comment must start with a space
_Body truncated…_
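The `comment-format` warnings above all report the same pattern: tslint's `check-space` option flags line comments with no space after the slashes, which ANTLR-generated code emits in bulk. A minimal sketch of what the rule accepts and rejects (the identifiers are illustrative, not from the generated parser):

```typescript
//flagged: no space after the slashes, so comment-format reports it
// passes: a single space follows the slashes
const ruleIndex = 42; // trailing comments need the space as well
console.log(ruleIndex);
```

Since the files are generated, the usual fix is to exclude `antlr-gen` from linting rather than patch each comment.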
[atomist:code-inspection:no-ac=@atomist/atomist-sdm] | 1.0 | Code Inspection: Tslint on no-ac

### array-type
- [`lib/tree/ast/antlr/kotlin/antlr-gen/KotlinLexer.ts:273`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/kotlin/antlr-gen/KotlinLexer.ts#L273): _(warn)_ Array type using 'T[]' is forbidden for non-simple types. Use 'Array<T>' instead.
- [`lib/tree/ast/antlr/kotlin/antlr-gen/KotlinLexer.ts:295`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/kotlin/antlr-gen/KotlinLexer.ts#L295): _(warn)_ Array type using 'T[]' is forbidden for non-simple types. Use 'Array<T>' instead.
- [`lib/tree/ast/antlr/kotlin/antlr-gen/KotlinParser.ts:6416`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/kotlin/antlr-gen/KotlinParser.ts#L6416): _(warn)_ Array type using 'T[]' is forbidden for non-simple types. Use 'Array<T>' instead.
- [`lib/tree/ast/antlr/kotlin/antlr-gen/KotlinParser.ts:6438`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/kotlin/antlr-gen/KotlinParser.ts#L6438): _(warn)_ Array type using 'T[]' is forbidden for non-simple types. Use 'Array<T>' instead.
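The `array-type` warnings above come from tslint's `array-simple` option: `T[]` is allowed only when `T` is a simple type, otherwise `Array<T>` is required. A minimal sketch (the types here are illustrative, not taken from the generated lexer):

```typescript
// Allowed: the element type is simple, so T[] syntax is fine.
const tokenIds: number[] = [1, 2, 3];

// Flagged form would be `{ text: string }[]`; with array-simple the
// rule asks for Array<T> because the element type is not simple.
const tokens: Array<{ text: string }> = [{ text: "class" }];

console.log(tokenIds.length, tokens[0].text);
```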
### class-name
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:15962`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L15962): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:16007`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L16007): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:16085`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L16085): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:16118`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L16118): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:18279`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L18279): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:18324`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L18324): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:18393`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L18393): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:18426`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L18426): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:18996`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L18996): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:22476`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L22476): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:22812`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L22812): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:22842`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L22842): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:22893`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L22893): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:22938`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L22938): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:22968`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L22968): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23010`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23010): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23067`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23067): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23097`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23097): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23253`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23253): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23307`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23307): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23442`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23442): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23475`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23475): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23565`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23565): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23616`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23616): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23721`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23721): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23760`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23760): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23901`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23901): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:23937`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L23937): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:25116`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L25116): _(warn)_ Class name must be in pascal case
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:25179`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L25179): _(warn)_ Class name must be in pascal case
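The `class-name` warnings above fire because the rule requires PascalCase class names, while the ANTLR-generated parser contexts contain underscores (e.g. names of the shape `Foo_lf_barContext`). A minimal sketch of what trips the rule, using hypothetical class names:

```typescript
// Flagged: the underscore means the name is not PascalCase.
class ClassType_lfContext {}

// Passes: strict PascalCase.
class ClassTypeLfContext {}

const ok = new ClassTypeLfContext() instanceof ClassTypeLfContext;
console.log(ok);
```

As with the other rules, these names are emitted by the ANTLR code generator, so excluding `antlr-gen` from linting is likely the practical remedy.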
### comment-format
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Lexer.ts:250`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Lexer.ts#L250): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Lexer.ts:253`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Lexer.ts#L253): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Lexer.ts:261`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Lexer.ts#L261): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Lexer.ts:264`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Lexer.ts#L264): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:17`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L17): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:526`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L526): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:562`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L562): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:629`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L629): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:674`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L674): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:710`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L710): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:746`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L746): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:791`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L791): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:859`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L859): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:950`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L950): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1002`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1002): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1052`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1052): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1077`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1077): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1102`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1102): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1127`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1127): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1167`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1167): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1218`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1218): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1293`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1293): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1343`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1343): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1368`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1368): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1425`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1425): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1452`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1452): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1481`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1481): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1523`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1523): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1561`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1561): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1611`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1611): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1658`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1658): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1726`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1726): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1791`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1791): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1836`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1836): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1901`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1901): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1943`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1943): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:1971`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L1971): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2036`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2036): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2074`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2074): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2138`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2138): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2178`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2178): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2222`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2222): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2247`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2247): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2299`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2299): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2328`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2328): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2361`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2361): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2396`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2396): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2431`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2431): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2476`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2476): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2546`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2546): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2718`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2718): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2754`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2754): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2792`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2792): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2866`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2866): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2948`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2948): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:2977`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L2977): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3019`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3019): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3046`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3046): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3073`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3073): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3115`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3115): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3157`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3157): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3209`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3209): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3268`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3268): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3312`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3312): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3394`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3394): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3436`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3436): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3474`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3474): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3510`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3510): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3585`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3585): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3623`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3623): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3669`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3669): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3714`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3714): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3782`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3782): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3859`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3859): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3911`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3911): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3947`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3947): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3972`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3972): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:3997`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L3997): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4022`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4022): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4047`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4047): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4098`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4098): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4140`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4140): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4236`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4236): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4335`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4335): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4392`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4392): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4442`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4442): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4491`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4491): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4566`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4566): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4608`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4608): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4648`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4648): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4719`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4719): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4773`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4773): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4800`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4800): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4842`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4842): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4880`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4880): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4920`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4920): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4945`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4945): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:4972`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L4972): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5024`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5024): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5078`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5078): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5128`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5128): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5153`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5153): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5201`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5201): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5366`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5366): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5420`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5420): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5478`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5478): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5522`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5522): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5594`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5594): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5619`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5619): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5659`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5659): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5697`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5697): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5761`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5761): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5836`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5836): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5863`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5863): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5905`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5905): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:5964`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L5964): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6008`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6008): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6062`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6062): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6104`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6104): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6179`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6179): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6227`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6227): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6269`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6269): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6328`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6328): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6396`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6396): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6443`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6443): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6470`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6470): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6515`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6515): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6557`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6557): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6599`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6599): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6628`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6628): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6673`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6673): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6721`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6721): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6765`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6765): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6792`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6792): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6825`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6825): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6873`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6873): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6917`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6917): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6955`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6955): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:6993`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L6993): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7038`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7038): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7065`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7065): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7107`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7107): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7173`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7173): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7232`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7232): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7373`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7373): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7398`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7398): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7427`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7427): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7456`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7456): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7483`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7483): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7556`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7556): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7589`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7589): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7626`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7626): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7663`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7663): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7713`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7713): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7746`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7746): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7805`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7805): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7832`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7832): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7870`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7870): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7925`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7925): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7950`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7950): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:7983`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L7983): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8016`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8016): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8053`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8053): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8091`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8091): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8129`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8129): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8195`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8195): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8261`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8261): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8299`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8299): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8324`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8324): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8366`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8366): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8420`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8420): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8474`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8474): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8512`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8512): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8550`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8550): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8588`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8588): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8617`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8617): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8650`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8650): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8714`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8714): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8752`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8752): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8785`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8785): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8827`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8827): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8869`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8869): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8896`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8896): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8946`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8946): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:8986`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L8986): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9030`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9030): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9089`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9089): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9127`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9127): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9183`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9183): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9285`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9285): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9309`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9309): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9436`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9436): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9495`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9495): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9519`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9519): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9571`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9571): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9732`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9732): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9756`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9756): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:9910`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L9910): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10029`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10029): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10293`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10293): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10381`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10381): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10585`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10585): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10625`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10625): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10686`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10686): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10713`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10713): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10774`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10774): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10846`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10846): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10902`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10902): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:10976`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L10976): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:11201`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L11201): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:11253`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L11253): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:11443`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L11443): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:11485`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L11485): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:11651`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L11651): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:11689`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L11689): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:11834`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L11834): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:11926`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L11926): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:11970`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L11970): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12014`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12014): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12039`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12039): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12077`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12077): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12106`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12106): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12168`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12168): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12210`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12210): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12285`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12285): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12323`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12323): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12352`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12352): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12397`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12397): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12433`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12433): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12496`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12496): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12564`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12564): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12632`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12632): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12700`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12700): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12768`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12768): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12836`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12836): _(warn)_ comment must start with a space
- [`lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts:12924`](https://github.com/atomist/antlr/blob/f2d124898a768f615768fde848a5e1987a8bca90/lib/tree/ast/antlr/java/antlr-gen/Java9Parser.ts#L12924): _(warn)_ comment must start with a space
_Body truncated…_
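All of the warnings listed above carry the message "comment must start with a space", which is what TSLint's `comment-format` rule emits under its `check-space` option; here it is firing on ANTLR-generated parser files. Assuming TSLint 5.8 or later (which added `linterOptions`), and with the glob pattern below being an assumption based on the paths in the report, one sketch of a remedy is to exclude the generated tree from linting rather than relax the rule:

```json
{
  "linterOptions": {
    "exclude": ["lib/tree/ast/antlr/**/antlr-gen/**"]
  },
  "rules": {
    "comment-format": [true, "check-space"]
  }
}
```

This keeps the rule active for hand-written sources while silencing machine-generated code that will be overwritten on the next grammar regeneration.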
[atomist:code-inspection:no-ac=@atomist/atomist-sdm] | non_process | code inspection tslint on no ac array type warn array type using t is forbidden for non simple types use array instead warn array type using t is forbidden for non simple types use array instead warn array type using t is forbidden for non simple types use array instead warn array type using t is forbidden for non simple types use array instead class name warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case warn class name must be in pascal case comment format warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must 
start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must 
start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must 
start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must 
start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must 
start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space warn comment must start with a space body truncated… | 0 |
18,360 | 24,492,140,995 | IssuesEvent | 2022-10-10 04:02:48 | phamtanduongtk29/html-css-training | https://api.github.com/repos/phamtanduongtk29/html-css-training | opened | Create and resposive feature section | not yet processing | - Estimates: 5 hours
- Create heading
- Create feature item
| 1.0 | Create and resposive feature section - - Estimates: 5 hours
- Create heading
- Create feature item
| process | create and resposive feature section estimates hours create heading create feature item | 1 |
4,820 | 7,715,072,529 | IssuesEvent | 2018-05-23 06:04:46 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | Difference between cytokinesis and morphogenesis checkpoint terms | cell cycle and DNA processes | hi,
This may be a question for @ValWood and @mah11 :
While fixing terms sharing the same exact synonym, I came across these 2 pairs of terms:
GO:0072398 signal transduction involved in cytokinesis checkpoint
GO:1903822 signal transduction involved in morphogenesis checkpoint
GO:1903821 detection of stimulus involved in morphogenesis checkpoint
GO:0072397 detection of stimulus involved in cytokinesis checkpoint
That all mention 'septin checkpoint' in the synonym. There are no annotations to any of these terms; Can I get help WRT which synonyms are correct, and perhaps improvements to the definitions?
Thanks, Pascale
| 1.0 | Difference between cytokinesis and morphogenesis checkpoint terms - hi,
This may be a question for @ValWood and @mah11 :
While fixing terms sharing the same exact synonym, I came across these 2 pairs of terms:
GO:0072398 signal transduction involved in cytokinesis checkpoint
GO:1903822 signal transduction involved in morphogenesis checkpoint
GO:1903821 detection of stimulus involved in morphogenesis checkpoint
GO:0072397 detection of stimulus involved in cytokinesis checkpoint
That all mention 'septin checkpoint' in the synonym. There are no annotations to any of these terms; Can I get help WRT which synonyms are correct, and perhaps improvements to the definitions?
Thanks, Pascale
| process | difference between cytokinesis and morphogenesis checkpoint terms hi this may be a question for valwood and while fixing terms sharing the same exact synonym i came across these pairs of terms go signal transduction involved in cytokinesis checkpoint go signal transduction involved in morphogenesis checkpoint go detection of stimulus involved in morphogenesis checkpoint go detection of stimulus involved in cytokinesis checkpoint that all mention septin checkpoint in the synonym there are no annotations to any of these terms can i get help wrt which synonyms are correct and perhaps improvements to the definitions thanks pascale | 1 |
341,200 | 10,289,088,782 | IssuesEvent | 2019-08-27 12:29:17 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Add Avia page builder(Enfold) support in pagebuilder section | NEED FAST REVIEW [Priority: HIGH] enhancement | Add Avia page builder support in a page builder section
Notice to the user we are supporting AVIA(Enfold page builder) support | 1.0 | Add Avia page builder(Enfold) support in pagebuilder section - Add Avia page builder support in a page builder section
Notice to the user we are supporting AVIA(Enfold page builder) support | non_process | add avia page builder enfold support in pagebuilder section add avia page builder support in a page builder section notice to the user we are supporting avia enfold page builder support | 0 |
154,304 | 19,711,877,115 | IssuesEvent | 2022-01-13 06:44:22 | Shai-Demo-Org/JS-Demo | https://api.github.com/repos/Shai-Demo-Org/JS-Demo | opened | CVE-2018-3721 (Medium) detected in lodash-4.13.1.tgz | security vulnerability | ## CVE-2018-3721 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.13.1.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.13.1.tgz">https://registry.npmjs.org/lodash/-/lodash-4.13.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/nyc/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-if-0.2.0.tgz (Root Library)
- grunt-contrib-nodeunit-1.0.0.tgz
- nodeunit-0.9.5.tgz
- tap-7.1.2.tgz
- nyc-7.1.0.tgz
- istanbul-lib-instrument-1.1.0-alpha.4.tgz
- babel-types-6.11.1.tgz
- :x: **lodash-4.13.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Shai-Demo-Org/JS-Demo/commit/5aaf68c266a0270ee6d4a0ac684351efb0b24dbf">5aaf68c266a0270ee6d4a0ac684351efb0b24dbf</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of "Object" via __proto__, causing the addition or modification of an existing property that will exist on all objects.
<p>Publish Date: 2018-06-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3721>CVE-2018-3721</a></p>
</p>
</details>
<p></p>
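The Modification of Assumed-Immutable Data mechanism described above can be made concrete with a short sketch. The snippet below is a hypothetical, hand-rolled deep merge — not lodash's actual code — written only to show the class of bug: when the merge walks an attacker-supplied `__proto__` key, it writes through to `Object.prototype`, so the injected property appears on every plain object.

```javascript
// Naive recursive merge illustrating prototype pollution (CVE-2018-3721
// class of bug). NOT lodash's implementation -- a minimal sketch only.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value !== null && typeof value === "object") {
      // For key === "__proto__", target[key] resolves to Object.prototype,
      // so the recursion below writes into the shared prototype.
      if (typeof target[key] !== "object" || target[key] === null) {
        target[key] = {};
      }
      naiveMerge(target[key], value);
    } else {
      target[key] = value;
    }
  }
  return target;
}

// JSON.parse keeps "__proto__" as an *own* key on the parsed object,
// which is how a malicious request body would smuggle it in.
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
naiveMerge({}, payload);

// Every plain object now appears to carry the injected property.
console.log({}.polluted); // true
```

Patched lodash releases (4.17.5 and later) refuse to merge such keys, which is why the remediation for this advisory is a version bump rather than a usage change.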
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-3721">https://nvd.nist.gov/vuln/detail/CVE-2018-3721</a></p>
<p>Release Date: 2018-06-07</p>
<p>Fix Resolution: 4.17.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.13.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-if:0.2.0;grunt-contrib-nodeunit:1.0.0;nodeunit:0.9.5;tap:7.1.2;nyc:7.1.0;istanbul-lib-instrument:1.1.0-alpha.4;babel-types:6.11.1;lodash:4.13.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.17.5","isBinary":false}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2018-3721","vulnerabilityDetails":"lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of \"Object\" via __proto__, causing the addition or modification of an existing property that will exist on all objects.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3721","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2018-3721 (Medium) detected in lodash-4.13.1.tgz - ## CVE-2018-3721 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.13.1.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.13.1.tgz">https://registry.npmjs.org/lodash/-/lodash-4.13.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/nyc/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- grunt-if-0.2.0.tgz (Root Library)
- grunt-contrib-nodeunit-1.0.0.tgz
- nodeunit-0.9.5.tgz
- tap-7.1.2.tgz
- nyc-7.1.0.tgz
- istanbul-lib-instrument-1.1.0-alpha.4.tgz
- babel-types-6.11.1.tgz
- :x: **lodash-4.13.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Shai-Demo-Org/JS-Demo/commit/5aaf68c266a0270ee6d4a0ac684351efb0b24dbf">5aaf68c266a0270ee6d4a0ac684351efb0b24dbf</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of "Object" via __proto__, causing the addition or modification of an existing property that will exist on all objects.
<p>Publish Date: 2018-06-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3721>CVE-2018-3721</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-3721">https://nvd.nist.gov/vuln/detail/CVE-2018-3721</a></p>
<p>Release Date: 2018-06-07</p>
<p>Fix Resolution: 4.17.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.13.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-if:0.2.0;grunt-contrib-nodeunit:1.0.0;nodeunit:0.9.5;tap:7.1.2;nyc:7.1.0;istanbul-lib-instrument:1.1.0-alpha.4;babel-types:6.11.1;lodash:4.13.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.17.5","isBinary":false}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2018-3721","vulnerabilityDetails":"lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of \"Object\" via __proto__, causing the addition or modification of an existing property that will exist on all objects.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3721","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_process | cve medium detected in lodash tgz cve medium severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file package json path to vulnerable library node modules nyc node modules lodash package json dependency hierarchy grunt if tgz root library grunt contrib nodeunit tgz nodeunit tgz tap tgz nyc tgz istanbul lib instrument alpha tgz babel types tgz x lodash tgz vulnerable library found in head commit a href vulnerability details lodash node module before suffers from a modification of assumed immutable data maid vulnerability via defaultsdeep merge and mergewith functions which allows a malicious user to modify the prototype of object via proto causing the addition or modification of an existing property 
that will exist on all objects publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree grunt if grunt contrib nodeunit nodeunit tap nyc istanbul lib instrument alpha babel types lodash isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails lodash node module before suffers from a modification of assumed immutable data maid vulnerability via defaultsdeep merge and mergewith functions which allows a malicious user to modify the prototype of object via proto causing the addition or modification of an existing property that will exist on all objects vulnerabilityurl | 0 |
5,784 | 8,632,706,544 | IssuesEvent | 2018-11-22 11:36:49 | arxiv-vanity/engrafo | https://api.github.com/repos/arxiv-vanity/engrafo | closed | Size images correctly | area/postprocessor needs test document priority/medium type/bug | Sometimes images are too big or small. I wonder if we can do something clever to figure out how big an image is. Maybe how big it was in the source document?
- [x] https://www.arxiv-vanity.com/papers/1602.07188v2/ a good example of multiple \includegraphics in a figure that need to be sized sensibly across multiple rows
- [x] http://localhost:8010/html/1707.08901v1/ enormous images (currently failing to parse for some reason)
| 1.0 | Size images correctly - Sometimes images are too big or small. I wonder if we can do something clever to figure out how big an image is. Maybe how big it was in the source document?
- [x] https://www.arxiv-vanity.com/papers/1602.07188v2/ a good example of multiple \includegraphics in a figure that need to be sized sensibly across multiple rows
- [x] http://localhost:8010/html/1707.08901v1/ enormous images (currently failing to parse for some reason)
| process | size images correctly sometimes images are too big or small i wonder if we can do something clever to figure out how big an image is maybe how big it was in the source document a good example of multiple includegraphics in a figure that need to be sized sensibly across multiple rows enormous images currently failing to parse for some reason | 1 |
45,353 | 5,923,092,010 | IssuesEvent | 2017-05-23 06:51:11 | geetsisbac/HYUDMDQI7KH46FHYFHPJLMSI | https://api.github.com/repos/geetsisbac/HYUDMDQI7KH46FHYFHPJLMSI | reopened | vXi0GRRVrwy1u0ZMYRJK2qanThO0XdM4x64nCWokwZ11f9kWbhes2RWknBcHMMf+rDydtbzll0EkB7kkciW7qYVDfieVrFHyH8aAR2kI/cJuYXPkNjm4H/WleLC3VVxF0M7eJXqTxTVxYYMKm1cud54nPWxOjt4UFWVHEOArHb8= | design | rKerKDkm2x0mrEuKSUk8eruUcP7LVhTKZPL3rlNKiD0EfGs1Ghs8q31upH+m1Eu8KErBkeun7IUovl5uNHFU740Obf4MUbEIAWa1DaektJ9WX21Vk9Y03GHLWNDZA/TP9vHhIRMs8LzeK0Yg1YxBKte1zbJjP5DxOsAtO3tgI+NfrZs8woBI7JSuovS6a8FN2MgIlcZ+0TL3hnNxwAuYB8PYru7J8gTJFnRy6L/S5OtyHHLlZcFNPla0706ub49SuW26gDSP8PnfFxdgEcTfPK97mVqTvboe2rd7USe/KjFA7W5zHSRGWzJQv5gpVfBVH3Dk8vAdZeDpHO17dnibfw2o0DeURqnSa2l+5YNSWfJPSns4uW7ddLzOYL+ZnDatNdVW1EvDZA9xtyoRJuo3/omibQaPyHPLdenOhToF+TSIYjXwypnrXg1Bh0fhrWuovs1uzZUGBP7AchY3GCCV/fbPFUFbJmk8ixTs71UiO3KXEUBqfQhEVQY2b4mcmXYXUqpwiAHQHIQpivyQpVlEm2iyu33sd0+aGesM5xUdp/0bdNwzp9cLpT7dH9YBCGAaW+5te5jhzpZNaPQq68JeeI0pkt/ddmSzIPkH6+u/uZrJHlaPOlIZhJnciaYb75lgWG/jdU5x4Q6Vn1RTFHG0yum+LqbZn0h4seSR6/TXmhwzh5yeYpBlyekNQrRrxoBHBIUSlgjBRc8KLhiMqRwRw4+iU6BwwvM17ZnGK3SC34uM2UtReF200N6gPoC8hDZBo50zh2E3s0Hq92x+QlAINpVFPTzLdq3WLbkbs0ADH5FIgg1C2roFhB9oZIohwKPsoDE+7NVD0otKl1EuQqNyj6CjI+oVfSmcuaIJk7qjHk4EjLihOKxlL7EroCrhFHqy5o3E4LG40okAP6n3SFsx4WtCpo7Z+Y5Rig2xIkpMPcNHdCo7MdaUqJk4vR+U8gASL9yJZn+xgwSqKhytK+0Q8JQtg6l37FFOsWNhjN603pAkhxvJeplM4ayGiaH1ceAIPNB9xm5n1/v5TtV6O5XyXwzmRsB8nVInvLvMSPp5gkA= | 1.0 | vXi0GRRVrwy1u0ZMYRJK2qanThO0XdM4x64nCWokwZ11f9kWbhes2RWknBcHMMf+rDydtbzll0EkB7kkciW7qYVDfieVrFHyH8aAR2kI/cJuYXPkNjm4H/WleLC3VVxF0M7eJXqTxTVxYYMKm1cud54nPWxOjt4UFWVHEOArHb8= - 
rKerKDkm2x0mrEuKSUk8eruUcP7LVhTKZPL3rlNKiD0EfGs1Ghs8q31upH+m1Eu8KErBkeun7IUovl5uNHFU740Obf4MUbEIAWa1DaektJ9WX21Vk9Y03GHLWNDZA/TP9vHhIRMs8LzeK0Yg1YxBKte1zbJjP5DxOsAtO3tgI+NfrZs8woBI7JSuovS6a8FN2MgIlcZ+0TL3hnNxwAuYB8PYru7J8gTJFnRy6L/S5OtyHHLlZcFNPla0706ub49SuW26gDSP8PnfFxdgEcTfPK97mVqTvboe2rd7USe/KjFA7W5zHSRGWzJQv5gpVfBVH3Dk8vAdZeDpHO17dnibfw2o0DeURqnSa2l+5YNSWfJPSns4uW7ddLzOYL+ZnDatNdVW1EvDZA9xtyoRJuo3/omibQaPyHPLdenOhToF+TSIYjXwypnrXg1Bh0fhrWuovs1uzZUGBP7AchY3GCCV/fbPFUFbJmk8ixTs71UiO3KXEUBqfQhEVQY2b4mcmXYXUqpwiAHQHIQpivyQpVlEm2iyu33sd0+aGesM5xUdp/0bdNwzp9cLpT7dH9YBCGAaW+5te5jhzpZNaPQq68JeeI0pkt/ddmSzIPkH6+u/uZrJHlaPOlIZhJnciaYb75lgWG/jdU5x4Q6Vn1RTFHG0yum+LqbZn0h4seSR6/TXmhwzh5yeYpBlyekNQrRrxoBHBIUSlgjBRc8KLhiMqRwRw4+iU6BwwvM17ZnGK3SC34uM2UtReF200N6gPoC8hDZBo50zh2E3s0Hq92x+QlAINpVFPTzLdq3WLbkbs0ADH5FIgg1C2roFhB9oZIohwKPsoDE+7NVD0otKl1EuQqNyj6CjI+oVfSmcuaIJk7qjHk4EjLihOKxlL7EroCrhFHqy5o3E4LG40okAP6n3SFsx4WtCpo7Z+Y5Rig2xIkpMPcNHdCo7MdaUqJk4vR+U8gASL9yJZn+xgwSqKhytK+0Q8JQtg6l37FFOsWNhjN603pAkhxvJeplM4ayGiaH1ceAIPNB9xm5n1/v5TtV6O5XyXwzmRsB8nVInvLvMSPp5gkA= | non_process | omibqapyhpldenohtof u xgwsqkhytk | 0 |
246,441 | 20,865,169,952 | IssuesEvent | 2022-03-22 06:03:38 | kubeedge/edgemesh | https://api.github.com/repos/kubeedge/edgemesh | opened | Is HTTP'S and gRPC supported by edgemesh??? | kind/failing-test | Hi i am using KubeEdge v1. 8.1.
On top of that i am using Edgemesh. Please tell whether this setup supports HTTPS and gRPC?? | 1.0 | Is HTTP'S and gRPC supported by edgemesh??? - Hi i am using KubeEdge v1. 8.1.
On top of that i am using Edgemesh. Please tell whether this setup supports HTTPS and gRPC?? | non_process | is http s and grpc supported by edgemesh hi i am using kubeedge on top of that i am using edgemesh please tell whether this setup supports https and grpc | 0 |
1,318 | 3,869,901,015 | IssuesEvent | 2016-04-10 21:28:37 | moxie-leean/generators | https://api.github.com/repos/moxie-leean/generators | closed | Update base name from Leean to Lean. | Pending deploy process | There are some instances like `licence` where this is still present. | 1.0 | Update base name from Leean to Lean. - There are some instances like `licence` where this is still present. | process | update base name from leean to lean there are some instances like licence where this is still present | 1 |
128,097 | 12,359,817,909 | IssuesEvent | 2020-05-17 12:42:41 | adamjestem/PZ_2020_NST_1 | https://api.github.com/repos/adamjestem/PZ_2020_NST_1 | closed | Fix sprint numbers on sprint raports | bug documentation | Raport z datą 30.04.2020 powinien znajdować się w sprincie 4 - coś wcześniej macie namieszane. | 1.0 | Fix sprint numbers on sprint raports - Raport z datą 30.04.2020 powinien znajdować się w sprincie 4 - coś wcześniej macie namieszane. | non_process | fix sprint numbers on sprint raports raport z datą powinien znajdować się w sprincie coś wcześniej macie namieszane | 0 |
65,561 | 16,413,482,199 | IssuesEvent | 2021-05-19 01:12:58 | home-climate-control/xbee-api | https://api.github.com/repos/home-climate-control/xbee-api | closed | Add ErrorProne support to build | build | As an Application Maintainer, I want local and CI/CD builds to produce ErrorProne reports so that they can be available on the spot without jumping through the hoops with IDE plugins.
Acceptance Criteria: Gradle build composes ErrorProne report. | 1.0 | Add ErrorProne support to build - As an Application Maintainer, I want local and CI/CD builds to produce ErrorProne reports so that they can be available on the spot without jumping through the hoops with IDE plugins.
Acceptance Criteria: Gradle build composes ErrorProne report. | non_process | add errorprone support to build as an application maintainer i want local and ci cd builds to produce errorprone reports so that they can be available on the spot without jumping through the hoops with ide plugins acceptance criteria gradle build composes errorprone report | 0 |
174,966 | 21,300,608,982 | IssuesEvent | 2022-04-15 02:14:48 | Guillerbr/api-laravel-auth | https://api.github.com/repos/Guillerbr/api-laravel-auth | opened | CVE-2021-43138 (High) detected in async-1.5.2.tgz, async-2.6.2.tgz | security vulnerability | ## CVE-2021-43138 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>async-1.5.2.tgz</b>, <b>async-2.6.2.tgz</b></p></summary>
<p>
<details><summary><b>async-1.5.2.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-1.5.2.tgz">https://registry.npmjs.org/async/-/async-1.5.2.tgz</a></p>
<p>Path to dependency file: /api-laravel-auth/package.json</p>
<p>Path to vulnerable library: /node_modules/portfinder/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- laravel-mix-4.0.16.tgz (Root Library)
- webpack-dev-server-3.7.2.tgz
- portfinder-1.0.20.tgz
- :x: **async-1.5.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-2.6.2.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.6.2.tgz">https://registry.npmjs.org/async/-/async-2.6.2.tgz</a></p>
<p>Path to dependency file: /api-laravel-auth/package.json</p>
<p>Path to vulnerable library: /node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- laravel-mix-4.0.16.tgz (Root Library)
- extract-text-webpack-plugin-4.0.0-beta.0.tgz
- :x: **async-2.6.2.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2) , which could let a malicious user obtain privileges via the mapValues() method.
<p>Publish Date: 2022-04-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43138>CVE-2021-43138</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p>
<p>Release Date: 2022-04-06</p>
<p>Fix Resolution (async): 3.2.2</p>
<p>Direct dependency fix Resolution (laravel-mix): 6.0.40</p><p>Fix Resolution (async): 3.2.2</p>
<p>Direct dependency fix Resolution (laravel-mix): 6.0.40</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-43138 (High) detected in async-1.5.2.tgz, async-2.6.2.tgz - ## CVE-2021-43138 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>async-1.5.2.tgz</b>, <b>async-2.6.2.tgz</b></p></summary>
<p>
<details><summary><b>async-1.5.2.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-1.5.2.tgz">https://registry.npmjs.org/async/-/async-1.5.2.tgz</a></p>
<p>Path to dependency file: /api-laravel-auth/package.json</p>
<p>Path to vulnerable library: /node_modules/portfinder/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- laravel-mix-4.0.16.tgz (Root Library)
- webpack-dev-server-3.7.2.tgz
- portfinder-1.0.20.tgz
- :x: **async-1.5.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-2.6.2.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.6.2.tgz">https://registry.npmjs.org/async/-/async-2.6.2.tgz</a></p>
<p>Path to dependency file: /api-laravel-auth/package.json</p>
<p>Path to vulnerable library: /node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- laravel-mix-4.0.16.tgz (Root Library)
- extract-text-webpack-plugin-4.0.0-beta.0.tgz
- :x: **async-2.6.2.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2) , which could let a malicious user obtain privileges via the mapValues() method.
<p>Publish Date: 2022-04-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43138>CVE-2021-43138</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p>
<p>Release Date: 2022-04-06</p>
<p>Fix Resolution (async): 3.2.2</p>
<p>Direct dependency fix Resolution (laravel-mix): 6.0.40</p><p>Fix Resolution (async): 3.2.2</p>
<p>Direct dependency fix Resolution (laravel-mix): 6.0.40</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in async tgz async tgz cve high severity vulnerability vulnerable libraries async tgz async tgz async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file api laravel auth package json path to vulnerable library node modules portfinder node modules async package json dependency hierarchy laravel mix tgz root library webpack dev server tgz portfinder tgz x async tgz vulnerable library async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file api laravel auth package json path to vulnerable library node modules async package json dependency hierarchy laravel mix tgz root library extract text webpack plugin beta tgz x async tgz vulnerable library vulnerability details a vulnerability exists in async through fixed in which could let a malicious user obtain privileges via the mapvalues method publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution async direct dependency fix resolution laravel mix fix resolution async direct dependency fix resolution laravel mix step up your open source security game with whitesource | 0 |
22,343 | 6,245,674,377 | IssuesEvent | 2017-07-13 00:32:56 | xceedsoftware/wpftoolkit | https://api.github.com/repos/xceedsoftware/wpftoolkit | closed | Feature request: PropertyGrid show/hide title | CodePlex | <b>Cpt_Kork[CodePlex]</b> <br />Sometimes it is desirable to hide the header (title) of the PropertyGrid, since the user doesn't need or want to know the concrete data type of the underlying object bound to the PropertyGrid. In some other cases there might be just too little space to
guarantee a convenient usage of the PropertyGrid. Thus leaving out the title might be a good solution in both cases.
nbsp
At the moment there is no such property allowing to hide the title though, so I just added a DependencyProperty called ShowHeader to the PropertyGrid class. Unfortunately, adding the property is not enough. For being able to trigger any actions small changes
to the PropertyGrid's ControlTemplate have to be applied first:
nbsp
The visibility of the StackPanel hosting SelectedObjectTypeName and SelectedObjectName has to be bound to the ShowHeader property. Surprisingly, when trying to hide the header this way and also hiding sort options and the search box you will notice a small
quotborderquot of ca. 4-6 pixels above the first property of the PropertyGrid.
nbsp
This annoying border results from the margins of the Grid hosting the sort options and the search box. In order to get rid of the border the visibility of this hosting Grid have to be adjusted as well. This task can be accomplished by a MultiTrigger verifying
the visibility of both the search box and the sort options and in cases where both are invisible hiding the whole Grid.
nbsp
You can find the particular changes within the attached patch.
nbsp
Edited on April 2: Edited the provided patch to match the new Extended WPF Toolkit version 1.6.0
| 1.0 | Feature request: PropertyGrid show/hide title - <b>Cpt_Kork[CodePlex]</b> <br />Sometimes it is desirable to hide the header (title) of the PropertyGrid, since the user doesn't need or want to know the concrete data type of the underlying object bound to the PropertyGrid. In some other cases there might be just too little space to
guarantee a convenient usage of the PropertyGrid. Thus leaving out the title might be a good solution in both cases.
nbsp
At the moment there is no such property allowing to hide the title though, so I just added a DependencyProperty called ShowHeader to the PropertyGrid class. Unfortunately, adding the property is not enough. For being able to trigger any actions small changes
to the PropertyGrid's ControlTemplate have to be applied first:
nbsp
The visibility of the StackPanel hosting SelectedObjectTypeName and SelectedObjectName has to be bound to the ShowHeader property. Surprisingly, when trying to hide the header this way and also hiding sort options and the search box you will notice a small
quotborderquot of ca. 4-6 pixels above the first property of the PropertyGrid.
nbsp
This annoying border results from the margins of the Grid hosting the sort options and the search box. In order to get rid of the border the visibility of this hosting Grid have to be adjusted as well. This task can be accomplished by a MultiTrigger verifying
the visibility of both the search box and the sort options and in cases where both are invisible hiding the whole Grid.
nbsp
You can find the particular changes within the attached patch.
nbsp
Edited on April 2: Edited the provided patch to match the new Extended WPF Toolkit version 1.6.0
| non_process | feature request propertygrid show hide title cpt kork sometimes it is desirable to hide the header title of the propertygrid since the user doesn t need or want to know the concrete data type of the underlying object bound to the propertygrid in some other cases there might be just too little space to guarantee a convenient usage of the propertygrid thus leaving out the title might be a good solution in both cases nbsp at the moment there is no such property allowing to hide the title though so i just added a dependencyproperty called showheader to the propertygrid class unfortunately adding the property is not enough for being able to trigger any actions small changes to the propertygrid s controltemplate have to be applied first nbsp the visibility of the stackpanel hosting selectedobjecttypename and selectedobjectname has to be bound to the showheader property surprisingly when trying to hide the header this way and also hiding sort options and the search box you will notice a small quotborderquot of ca pixels above the first property of the propertygrid nbsp this annoying border results from the margins of the grid hosting the sort options and the search box in order to get rid of the border the visibility of this hosting grid have to be adjusted as well this task can be accomplished by a multitrigger verifying the visibility of both the search box and the sort options and in cases where both are invisible hiding the whole grid nbsp you can find the particular changes within the attached patch nbsp edited on april edited the provided patch to match the new extended wpf toolkit version | 0 |
6,908 | 10,059,471,032 | IssuesEvent | 2019-07-22 16:30:53 | cypress-io/cypress | https://api.github.com/repos/cypress-io/cypress | closed | `preinstall` script is broken on Windows | OS: windows process: contributing stage: work in progress | <!-- Is this a question? Don't open an issue. Ask in our chat https://on.cypress.io/chat -->
### Current behavior:
`preinstall` in the main package doesn't work on Windows because of `rm` and `true`:

AppVeyor appears to be passing because it has busybox or something similar installed
| 1.0 | `preinstall` script is broken on Windows - <!-- Is this a question? Don't open an issue. Ask in our chat https://on.cypress.io/chat -->
### Current behavior:
`preinstall` in the main package doesn't work on Windows because of `rm` and `true`:

AppVeyor appears to be passing because it has busybox or something similar installed
| process | preinstall script is broken on windows current behavior preinstall in the main package doesn t work on windows because of rm and true appveyor appears to be passing because it has busybox or something similar installed | 1 |
57,943 | 6,561,618,482 | IssuesEvent | 2017-09-07 13:55:50 | NetsBlox/NetsBlox | https://api.github.com/repos/NetsBlox/NetsBlox | closed | tests are failing on master (active-room) | testing | ```
3 failing
1) active-room changing roles should send update message on changing roles:
AssertionError: { uuid: '_netsblox1504640713802', username: 'first' } == 'first'
at Context.<anonymous> (test/server/rooms/active-room.spec.js:168:20)
2) active-room add should send updated room:
AssertionError: false == true
+ expected - actual
-false
+true
at Context.<anonymous> (test/server/rooms/active-room.spec.js:220:13)
3) active-room join role should send correct update message:
AssertionError: false == true
+ expected - actual
-false
+true
at Context.<anonymous> (test/server/rooms/active-room.spec.js:252:13)
``` | 1.0 | tests are failing on master (active-room) - ```
3 failing
1) active-room changing roles should send update message on changing roles:
AssertionError: { uuid: '_netsblox1504640713802', username: 'first' } == 'first'
at Context.<anonymous> (test/server/rooms/active-room.spec.js:168:20)
2) active-room add should send updated room:
AssertionError: false == true
+ expected - actual
-false
+true
at Context.<anonymous> (test/server/rooms/active-room.spec.js:220:13)
3) active-room join role should send correct update message:
AssertionError: false == true
+ expected - actual
-false
+true
at Context.<anonymous> (test/server/rooms/active-room.spec.js:252:13)
``` | non_process | tests are failing on master active room failing active room changing roles should send update message on changing roles assertionerror uuid username first first at context test server rooms active room spec js active room add should send updated room assertionerror false true expected actual false true at context test server rooms active room spec js active room join role should send correct update message assertionerror false true expected actual false true at context test server rooms active room spec js | 0 |
40,359 | 9,967,084,995 | IssuesEvent | 2019-07-08 12:51:25 | primefaces/primeng | https://api.github.com/repos/primefaces/primeng | closed | Touch UI does not delete the background overlay properly - Breaks entire app state | defect | Discovered this bug using touch UI on an android device but it can be replicated on web easily.
1) Open https://www.primefaces.org/primeng/#/calendar
2) Open touch UI.
3) While touch UI is open navigate to https://www.primefaces.org/primeng/#/chips
On android all you would do it press the back button when having touch UI open which is very easy for a user to trigger.
Your entire app is now broken and needs to be restarted since the touch UI background will block everything out.

| 1.0 | Touch UI does not delete the background overlay properly - Breaks entire app state - Discovered this bug using touch UI on an android device but it can be replicated on web easily.
1) Open https://www.primefaces.org/primeng/#/calendar
2) Open touch UI.
3) While touch UI is open navigate to https://www.primefaces.org/primeng/#/chips
On android all you would do it press the back button when having touch UI open which is very easy for a user to trigger.
Your entire app is now broken and needs to be restarted since the touch UI background will block everything out.

| non_process | touch ui does not delete the background overlay properly breaks entire app state discovered this bug using touch ui on an android device but it can be replicated on web easily open open touch ui while touch ui is open navigate to on android all you would do it press the back button when having touch ui open which is very easy for a user to trigger your entire app is now broken and needs to be restarted since the touch ui background will block everything out | 0 |
10,786 | 13,608,984,940 | IssuesEvent | 2020-09-23 03:56:22 | googleapis/java-vision | https://api.github.com/repos/googleapis/java-vision | closed | Dependency Dashboard | api: vision type: process | This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-storage-1.x -->deps: update dependency com.google.cloud:google-cloud-storage to v1.113.1
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| 1.0 | Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-storage-1.x -->deps: update dependency com.google.cloud:google-cloud-storage to v1.113.1
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| process | dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any build deps update dependency org apache maven plugins maven project info reports plugin to deps update dependency com google cloud google cloud storage to check this box to trigger a request for renovate to run again on this repository | 1 |
20,368 | 27,025,942,613 | IssuesEvent | 2023-02-11 15:58:42 | allinurl/goaccess | https://api.github.com/repos/allinurl/goaccess | closed | Parsing traefik JSON logs timestamps | add log/date/time format log-processing | It seems that the timestamps in traefik JSON logs are in nanoseconds format. Is there a way to convert/parse them in correct format?
I was not able to find a way to do it, just to ignore the fields.
Could a specifier for nanoseconds be added? | 1.0 | Parsing traefik JSON logs timestamps - It seems that the timestamps in traefik JSON logs are in nanoseconds format. Is there a way to convert/parse them in correct format?
I was not able to find a way to do it, just to ignore the fields.
Could a specifier for nanoseconds be added? | process | parsing traefik json logs timestamps it seems that the timestamps in traefik json logs are in nanoseconds format is there a way to convert parse them in correct format i was not able to find a way to do it just to ignore the fields could there be added a specifier for nanoseconds | 1 |
9,568 | 12,519,709,373 | IssuesEvent | 2020-06-03 14:49:29 | tikv/tikv | https://api.github.com/repos/tikv/tikv | closed | UCP: Migrate scalar function `Insert` from TiDB | challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor |
## Description
Port the scalar function `Insert` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
| 2.0 | UCP: Migrate scalar function `Insert` from TiDB -
## Description
Port the scalar function `Insert` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
| process | ucp migrate scalar function insert from tidb description port the scalar function insert from tidb to coprocessor score mentor s maplefu recommended skills rust programming learning materials already implemented expressions ported from tidb | 1 |
8,741 | 11,870,287,580 | IssuesEvent | 2020-03-26 12:32:09 | prisma/prisma2 | https://api.github.com/repos/prisma/prisma2 | closed | Float type unique field query | bug/2-confirmed kind/bug process/candidate topic: types | ## Bug description
Can't query on unique field when it has Float type.
## How to reproduce
1. Create a passenger with the phone number 48999888777
2. Try to query it using `crud.passenger()` or `passenger.findOne` based on phone
```
query {
passenger(
where:{
phone:48999888777
}
) {
id
phone
}
}
```
## Expected behavior
In our case the query payload is `null`. It should be the passenger data.
## Prisma information
```
model Passenger {
id String @id @default(cuid())
lastName String
firstName String
password String?
phone Float @unique
email String? @unique
isBlocked Boolean @default(false)
}
```
## Environment & setup
- OS: **Mac OS Catalina**
- Database: **PostgreSQL**
- Prisma version: **prisma2@2.0.0-preview024, binary version: 377df4fe30aa992f13f1ba152cf83d5770bdbc85**
- Node.js version: **12.8.1**
## Logs
```
express:router dispatching POST / +5s
express:router query : / +0ms
express:router expressInit : / +0ms
express:router corsMiddleware : / +0ms
body-parser:json content-type "application/json" +0ms
body-parser:json content-encoding "identity" +0ms
body-parser:json read body +0ms
body-parser:json parse body +1ms
body-parser:json parse json +0ms
prisma-client Prisma Client call: +9s
prisma-client prisma.passenger.findOne({
prisma-client where: {
prisma-client phone: 48999888777
prisma-client }
prisma-client }) +0ms
prisma-client Generated request: +1ms
prisma-client query {
prisma-client findOnePassenger(where: {
prisma-client phone: 48999888777
prisma-client }) {
prisma-client id
prisma-client lastName
prisma-client firstName
prisma-client password
prisma-client phone
prisma-client email
prisma-client isBlocked
prisma-client }
prisma-client }
prisma-client +0ms
engine {
engine PRISMA_DML_PATH: '.../node_modules/@prisma/client/schema.prisma',
engine PORT: '49773',
engine RUST_BACKTRACE: '1',
engine RUST_LOG: 'info',
engine LOG_QUERIES: 'true',
engine OVERWRITE_DATASOURCES: '[]',
engine CLICOLOR_FORCE: '1'
engine } +0ms
engine {
engine cwd: '.../prisma'
engine } +0ms
plusX Execution permissions of /.../node_modules/@prisma/client/runtime/query-engine-darwin are fine +0ms
engine { flags: [ '--enable-raw-queries' ] } +1ms
engine stderr Printing to stderr for debugging +14ms
engine stderr Listening on 127.0.0.1:49773 +0ms
engine stdout {
timestamp: 'Mar 19 19:05:31.841',
level: 'INFO',
target: 'quaint::pooled',
fields: { message: 'Starting a postgresql pool with 9 connections.' }
} +8ms
prisma:info Starting a postgresql pool with 9 connections.
engine stdout {
timestamp: 'Mar 19 19:05:31.850',
level: 'INFO',
target: 'prisma::server',
fields: {
message: 'Started http server on 127.0.0.1:49773',
'log.target': 'prisma::server',
'log.module_path': 'prisma::server',
'log.file': 'query-engine/prisma/src/server.rs',
'log.line': 95
}
} +9ms
prisma:info Started http server on 127.0.0.1:49773
express:router dispatching POST / +624ms
express:router query : / +0ms
express:router expressInit : / +0ms
express:router corsMiddleware : / +0ms
body-parser:json content-type "application/json" +0ms
body-parser:json content-encoding "identity" +0ms
body-parser:json read body +0ms
body-parser:json parse body +0ms
body-parser:json parse json +0ms
engine stdout {
timestamp: 'Mar 19 19:05:32.446',
level: 'INFO',
target: 'quaint::connector::metrics',
fields: {
query: 'SELECT "v1"."Passenger"."id", "v1"."Passenger"."lastName", "v1"."Passenger"."firstName", "v1"."Passenger"."password", "v1"."Passenger"."phone", "v1"."Passenger"."email", "v1"."Passenger"."isBlocked" FROM "v1"."Passenger" WHERE "v1"."Passenger"."phone" IN ($1) OFFSET $2',
item_type: 'query',
params: '[48999888777,0]',
duration_ms: 117
}
} +605ms
prisma:query SELECT "v1"."Passenger"."id", "v1"."Passenger"."lastName", "v1"."Passenger"."firstName", "v1"."Passenger"."password", "v1"."Passenger"."phone", "v1"."Passenger"."email", "v1"."Passenger"."isBlocked" FROM "v1"."Passenger" WHERE "v1"."Passenger"."phone" IN ($1) OFFSET $2
express:router dispatching POST / +1m
express:router query : / +1ms
express:router expressInit : / +0ms
express:router corsMiddleware : / +0ms
body-parser:json content-type "application/json" +0ms
body-parser:json content-encoding "identity" +0ms
body-parser:json read body +0ms
body-parser:json parse body +0ms
body-parser:json parse json +0ms
```
| 1.0 | Float type unique field query - ## Bug description
Can't query on unique field when it has Float type.
## How to reproduce
1. Create a passenger with the phone number 48999888777
2. Try to query it using `crud.passenger()` or `passenger.findOne` based on phone
```
query {
passenger(
where:{
phone:48999888777
}
) {
id
phone
}
}
```
## Expected behavior
In our case the query payload is `null`. It should be the passenger data.
## Prisma information
```
model Passenger {
id String @id @default(cuid())
lastName String
firstName String
password String?
phone Float @unique
email String? @unique
isBlocked Boolean @default(false)
}
```
## Environment & setup
- OS: **Mac OS Catalina**
- Database: **PostgreSQL**
- Prisma version: **prisma2@2.0.0-preview024, binary version: 377df4fe30aa992f13f1ba152cf83d5770bdbc85**
- Node.js version: **12.8.1**
## Logs
```
express:router dispatching POST / +5s
express:router query : / +0ms
express:router expressInit : / +0ms
express:router corsMiddleware : / +0ms
body-parser:json content-type "application/json" +0ms
body-parser:json content-encoding "identity" +0ms
body-parser:json read body +0ms
body-parser:json parse body +1ms
body-parser:json parse json +0ms
prisma-client Prisma Client call: +9s
prisma-client prisma.passenger.findOne({
prisma-client where: {
prisma-client phone: 48999888777
prisma-client }
prisma-client }) +0ms
prisma-client Generated request: +1ms
prisma-client query {
prisma-client findOnePassenger(where: {
prisma-client phone: 48999888777
prisma-client }) {
prisma-client id
prisma-client lastName
prisma-client firstName
prisma-client password
prisma-client phone
prisma-client email
prisma-client isBlocked
prisma-client }
prisma-client }
prisma-client +0ms
engine {
engine PRISMA_DML_PATH: '.../node_modules/@prisma/client/schema.prisma',
engine PORT: '49773',
engine RUST_BACKTRACE: '1',
engine RUST_LOG: 'info',
engine LOG_QUERIES: 'true',
engine OVERWRITE_DATASOURCES: '[]',
engine CLICOLOR_FORCE: '1'
engine } +0ms
engine {
engine cwd: '.../prisma'
engine } +0ms
plusX Execution permissions of /.../node_modules/@prisma/client/runtime/query-engine-darwin are fine +0ms
engine { flags: [ '--enable-raw-queries' ] } +1ms
engine stderr Printing to stderr for debugging +14ms
engine stderr Listening on 127.0.0.1:49773 +0ms
engine stdout {
timestamp: 'Mar 19 19:05:31.841',
level: 'INFO',
target: 'quaint::pooled',
fields: { message: 'Starting a postgresql pool with 9 connections.' }
} +8ms
prisma:info Starting a postgresql pool with 9 connections.
engine stdout {
timestamp: 'Mar 19 19:05:31.850',
level: 'INFO',
target: 'prisma::server',
fields: {
message: 'Started http server on 127.0.0.1:49773',
'log.target': 'prisma::server',
'log.module_path': 'prisma::server',
'log.file': 'query-engine/prisma/src/server.rs',
'log.line': 95
}
} +9ms
prisma:info Started http server on 127.0.0.1:49773
express:router dispatching POST / +624ms
express:router query : / +0ms
express:router expressInit : / +0ms
express:router corsMiddleware : / +0ms
body-parser:json content-type "application/json" +0ms
body-parser:json content-encoding "identity" +0ms
body-parser:json read body +0ms
body-parser:json parse body +0ms
body-parser:json parse json +0ms
engine stdout {
timestamp: 'Mar 19 19:05:32.446',
level: 'INFO',
target: 'quaint::connector::metrics',
fields: {
query: 'SELECT "v1"."Passenger"."id", "v1"."Passenger"."lastName", "v1"."Passenger"."firstName", "v1"."Passenger"."password", "v1"."Passenger"."phone", "v1"."Passenger"."email", "v1"."Passenger"."isBlocked" FROM "v1"."Passenger" WHERE "v1"."Passenger"."phone" IN ($1) OFFSET $2',
item_type: 'query',
params: '[48999888777,0]',
duration_ms: 117
}
} +605ms
prisma:query SELECT "v1"."Passenger"."id", "v1"."Passenger"."lastName", "v1"."Passenger"."firstName", "v1"."Passenger"."password", "v1"."Passenger"."phone", "v1"."Passenger"."email", "v1"."Passenger"."isBlocked" FROM "v1"."Passenger" WHERE "v1"."Passenger"."phone" IN ($1) OFFSET $2
express:router dispatching POST / +1m
express:router query : / +1ms
express:router expressInit : / +0ms
express:router corsMiddleware : / +0ms
body-parser:json content-type "application/json" +0ms
body-parser:json content-encoding "identity" +0ms
body-parser:json read body +0ms
body-parser:json parse body +0ms
body-parser:json parse json +0ms
```
| process | float type unique field query bug description can t query on unique field when it has float type how to reproduce create passenger with next phone number try to query it using crud passenger or passenger findone based on phone query passenger where phone id phone expected behavior in our case query payload is null need to be a passenger data prisma information model passenger id string id default cuid lastname string firstname string password string phone float unique email string unique isblocked boolean default false environment setup os mac os catalina database postgresql prisma version binary version node js version logs express router dispatching post express router query express router expressinit express router corsmiddleware body parser json content type application json body parser json content encoding identity body parser json read body body parser json parse body body parser json parse json prisma client prisma client call prisma client prisma passenger findone prisma client where prisma client phone prisma client prisma client prisma client generated request prisma client query prisma client findonepassenger where prisma client phone prisma client prisma client id prisma client lastname prisma client firstname prisma client password prisma client phone prisma client email prisma client isblocked prisma client prisma client prisma client engine engine prisma dml path node modules prisma client schema prisma engine port engine rust backtrace engine rust log info engine log queries true engine overwrite datasources engine clicolor force engine engine engine cwd prisma engine plusx execution permissions of node modules prisma client runtime query engine darwin are fine engine flags engine stderr printing to stderr for debugging engine stderr listening on engine stdout timestamp mar level info target quaint pooled fields message starting a postgresql pool with connections prisma info starting a postgresql pool with connections engine stdout 
timestamp mar level info target prisma server fields message started http server on log target prisma server log module path prisma server log file query engine prisma src server rs log line prisma info started http server on express router dispatching post express router query express router expressinit express router corsmiddleware body parser json content type application json body parser json content encoding identity body parser json read body body parser json parse body body parser json parse json engine stdout timestamp mar level info target quaint connector metrics fields query select passenger id passenger lastname passenger firstname passenger password passenger phone passenger email passenger isblocked from passenger where passenger phone in offset item type query params duration ms prisma query select passenger id passenger lastname passenger firstname passenger password passenger phone passenger email passenger isblocked from passenger where passenger phone in offset express router dispatching post express router query express router expressinit express router corsmiddleware body parser json content type application json body parser json content encoding identity body parser json read body body parser json parse body body parser json parse json | 1 |
228,993 | 17,496,585,314 | IssuesEvent | 2021-08-10 01:45:11 | pfnet-research/pfhedge | https://api.github.com/repos/pfnet-research/pfhedge | closed | DOC: Add equations to docstrings of brownian motion | documentation good first issue | Add equations `dS(t) = ...` to generate_brownian and generate_geometric_brownian. | 1.0 | DOC: Add equations to docstrings of brownian motion - Add equations `dS(t) = ...` to generate_brownian and generate_geometric_brownian. | non_process | doc add equations to docstrings of brownian motion add equations ds t to generate brownian and generate geometric brownian | 0 |
18,063 | 24,068,081,257 | IssuesEvent | 2022-09-17 19:41:40 | lynnandtonic/nestflix.fun | https://api.github.com/repos/lynnandtonic/nestflix.fun | closed | Add Lucid Dreaming from "The Callback Queen" (Screenshots and Title Card Added) | suggested title in process | Please add as much of the following info as you can:
Title: Lucid Dreaming
Type (film/tv show): Film - amateur romance
Film or show in which it appears: The Callback Queen
Is the parent film/show streaming anywhere? Yes: Amazon Prime
About when in the parent film/show does it appear? Near the end
Actual footage of the film/show can be seen (yes/no)? Yes
Timestamp: 1:16:35 - 1:17:27
Quote: I dream, I dream, I dream of you.
Starring: Kate Loughlin, Lucy Wilkinson
A Daithi Carroll Film
Studio: Deborah Whitton Film Academy







 | 1.0 | Add Lucid Dreaming from "The Callback Queen" (Screenshots and Title Card Added) - Please add as much of the following info as you can:
Title: Lucid Dreaming
Type (film/tv show): Film - amateur romance
Film or show in which it appears: The Callback Queen
Is the parent film/show streaming anywhere? Yes: Amazon Prime
About when in the parent film/show does it appear? Near the end
Actual footage of the film/show can be seen (yes/no)? Yes
Timestamp: 1:16:35 - 1:17:27
Quote: I dream, I dream, I dream of you.
Starring: Kate Loughlin, Lucy Wilkinson
A Daithi Carroll Film
Studio: Deborah Whitton Film Academy







 | process | add lucid dreaming from the callback queen screenshots and title card added please add as much of the following info as you can title lucid dreaming type film tv show film amateur romance film or show in which it appears the callback queen is the parent film show streaming anywhere yes amazon prime about when in the parent film show does it appear near the end actual footage of the film show can be seen yes no yes timestamp quote i dream i dream i dream of you starring kate loughlin lucy wilkinson a daithi carroll film studio deborah whitton film academy | 1 |
367 | 2,806,265,949 | IssuesEvent | 2015-05-15 00:32:26 | joyent/node | https://api.github.com/repos/joyent/node | closed | process.chdir is not thread-safe | process | Calling process.chdir() while libeio is processing fs operations that require resolving a relative pathname to an absolute one could be dangerous because chdir() may not be thread-safe, so a new CWD could be partially applied when a relative path is resolved in a thread.
Apart from that, it yields unpredictable results, because you can never know which cwd is used in an fs operation that is pending while chdir is called.
Consensus solution is to absolute-ize paths before they hit the thread pool. It could be done by calling path.resolve() in fs.js before invoking the binding.
Known affected: open, symlink, mkdir, maybe more
A test is here: https://gist.github.com/805263#file_test.js
This is assigned to piscisaureus. Remind him if he forgets.
| 1.0 | process.chdir is not thread-safe - Calling process.chdir() while libeio is processing fs operations that require resolving a relative pathname to an absolute one could be dangerous because chdir() may not be thread-safe, so a new CWD could be partially applied when a relative path is resolved in a thread.
Apart from that, it yields unpredictable results, because you can never know which cwd is used in an fs operation that is pending while chdir is called.
Consensus solution is to absolute-ize paths before they hit the thread pool. It could be done by calling path.resolve() in fs.js before invoking the binding.
Known affected: open, symlink, mkdir, maybe more
A test is here: https://gist.github.com/805263#file_test.js
This is assigned to piscisaureus. Remind him if he forgets.
| process | process chdir is not thread safe calling process chdir while libeio is processing fs operations that require resolving a relative pathname to an absolute one could be dangerous because chdir may not be thread safe so a new cwd could be partially applied when a relative path is resolved in a thread apart from that it yields unpredictable result because you can never know which cwd is used in a fs operation that is pending while chdir is called consensus solution is to absolute ize paths before they hit the thread pool it could be done by calling path resolve in fs js before invoking the binding known affected open symlink mkdir maybe more a test is here this is assigned to piscisaureus remind him if he forgets | 1 |
258,823 | 27,582,520,147 | IssuesEvent | 2023-03-08 17:09:38 | feemstr/integrations-core-6.0.0 | https://api.github.com/repos/feemstr/integrations-core-6.0.0 | opened | urllib3-1.22-py2.py3-none-any.whl: 6 vulnerabilities (highest severity is: 9.8) | Mend: dependency security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>urllib3-1.22-py2.py3-none-any.whl</b></p></summary>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/63/cb/6965947c13a94236f6d4b8223e21beb4d576dc72e8130bd7880f600839b8/urllib3-1.22-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/63/cb/6965947c13a94236f6d4b8223e21beb4d576dc72e8130bd7880f600839b8/urllib3-1.22-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /vsphere/requirements.txt</p>
<p>Path to vulnerable library: /vsphere/requirements.txt,/vsphere,/sqlserver</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/feemstr/integrations-core-6.0.0/commit/453e76e86cafcd5aed397f423a176cfb806a1bda">453e76e86cafcd5aed397f423a176cfb806a1bda</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (urllib3 version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2018-20060](https://www.mend.io/vulnerability-database/CVE-2018-20060) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | urllib3-1.22-py2.py3-none-any.whl | Direct | 1.23 | ✅ |
| [CVE-2019-11324](https://www.mend.io/vulnerability-database/CVE-2019-11324) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | urllib3-1.22-py2.py3-none-any.whl | Direct | 1.24.2 | ✅ |
| [CVE-2021-33503](https://www.mend.io/vulnerability-database/CVE-2021-33503) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | urllib3-1.22-py2.py3-none-any.whl | Direct | 1.26.5 | ✅ |
| [CVE-2020-26137](https://www.mend.io/vulnerability-database/CVE-2020-26137) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | urllib3-1.22-py2.py3-none-any.whl | Direct | 1.25.9 | ✅ |
| [CVE-2019-9740](https://www.mend.io/vulnerability-database/CVE-2019-9740) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | urllib3-1.22-py2.py3-none-any.whl | Direct | 1.24.3 | ✅ |
| [CVE-2019-11236](https://www.mend.io/vulnerability-database/CVE-2019-11236) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.1 | urllib3-1.22-py2.py3-none-any.whl | Direct | 1.24.3 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2018-20060</summary>
### Vulnerable Library - <b>urllib3-1.22-py2.py3-none-any.whl</b></p>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/63/cb/6965947c13a94236f6d4b8223e21beb4d576dc72e8130bd7880f600839b8/urllib3-1.22-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/63/cb/6965947c13a94236f6d4b8223e21beb4d576dc72e8130bd7880f600839b8/urllib3-1.22-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /vsphere/requirements.txt</p>
<p>Path to vulnerable library: /vsphere/requirements.txt,/vsphere,/sqlserver</p>
<p>
Dependency Hierarchy:
- :x: **urllib3-1.22-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/feemstr/integrations-core-6.0.0/commit/453e76e86cafcd5aed397f423a176cfb806a1bda">453e76e86cafcd5aed397f423a176cfb806a1bda</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
urllib3 before version 1.23 does not remove the Authorization HTTP header when following a cross-origin redirect (i.e., a redirect that differs in host, port, or scheme). This can allow for credentials in the Authorization header to be exposed to unintended hosts or transmitted in cleartext.
<p>Publish Date: 2018-12-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-20060>CVE-2018-20060</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20060">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20060</a></p>
<p>Release Date: 2018-12-11</p>
<p>Fix Resolution: 1.23</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-11324</summary>
### Vulnerable Library - <b>urllib3-1.22-py2.py3-none-any.whl</b></p>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/63/cb/6965947c13a94236f6d4b8223e21beb4d576dc72e8130bd7880f600839b8/urllib3-1.22-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/63/cb/6965947c13a94236f6d4b8223e21beb4d576dc72e8130bd7880f600839b8/urllib3-1.22-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /vsphere/requirements.txt</p>
<p>Path to vulnerable library: /vsphere/requirements.txt,/vsphere,/sqlserver</p>
<p>
Dependency Hierarchy:
- :x: **urllib3-1.22-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/feemstr/integrations-core-6.0.0/commit/453e76e86cafcd5aed397f423a176cfb806a1bda">453e76e86cafcd5aed397f423a176cfb806a1bda</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The urllib3 library before 1.24.2 for Python mishandles certain cases where the desired set of CA certificates is different from the OS store of CA certificates, which results in SSL connections succeeding in situations where a verification failure is the correct outcome. This is related to use of the ssl_context, ca_certs, or ca_certs_dir argument.
<p>Publish Date: 2019-04-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-11324>CVE-2019-11324</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11324">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11324</a></p>
<p>Release Date: 2019-04-18</p>
<p>Fix Resolution: 1.24.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-33503</summary>
### Vulnerable Library - <b>urllib3-1.22-py2.py3-none-any.whl</b></p>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/63/cb/6965947c13a94236f6d4b8223e21beb4d576dc72e8130bd7880f600839b8/urllib3-1.22-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/63/cb/6965947c13a94236f6d4b8223e21beb4d576dc72e8130bd7880f600839b8/urllib3-1.22-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /vsphere/requirements.txt</p>
<p>Path to vulnerable library: /vsphere/requirements.txt,/vsphere,/sqlserver</p>
<p>
Dependency Hierarchy:
- :x: **urllib3-1.22-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/feemstr/integrations-core-6.0.0/commit/453e76e86cafcd5aed397f423a176cfb806a1bda">453e76e86cafcd5aed397f423a176cfb806a1bda</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An issue was discovered in urllib3 before 1.26.5. When provided with a URL containing many @ characters in the authority component, the authority regular expression exhibits catastrophic backtracking, causing a denial of service if a URL were passed as a parameter or redirected to via an HTTP redirect.
<p>Publish Date: 2021-06-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-33503>CVE-2021-33503</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/urllib3/urllib3/security/advisories/GHSA-q2q7-5pp4-w6pg">https://github.com/urllib3/urllib3/security/advisories/GHSA-q2q7-5pp4-w6pg</a></p>
<p>Release Date: 2021-06-29</p>
<p>Fix Resolution: 1.26.5</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-26137</summary>
### Vulnerable Library - <b>urllib3-1.22-py2.py3-none-any.whl</b></p>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/63/cb/6965947c13a94236f6d4b8223e21beb4d576dc72e8130bd7880f600839b8/urllib3-1.22-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/63/cb/6965947c13a94236f6d4b8223e21beb4d576dc72e8130bd7880f600839b8/urllib3-1.22-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /vsphere/requirements.txt</p>
<p>Path to vulnerable library: /vsphere/requirements.txt,/vsphere,/sqlserver</p>
<p>
Dependency Hierarchy:
- :x: **urllib3-1.22-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/feemstr/integrations-core-6.0.0/commit/453e76e86cafcd5aed397f423a176cfb806a1bda">453e76e86cafcd5aed397f423a176cfb806a1bda</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
urllib3 before 1.25.9 allows CRLF injection if the attacker controls the HTTP request method, as demonstrated by inserting CR and LF control characters in the first argument of putrequest(). NOTE: this is similar to CVE-2020-26116.
<p>Publish Date: 2020-09-30
<p>URL: <a href="https://www.mend.io/vulnerability-database/CVE-2020-26137">CVE-2020-26137</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26137">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26137</a></p>
<p>Release Date: 2020-09-30</p>
<p>Fix Resolution: 1.25.9</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2019-9740</summary>
### Vulnerable Library - <b>urllib3-1.22-py2.py3-none-any.whl</b></p>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/63/cb/6965947c13a94236f6d4b8223e21beb4d576dc72e8130bd7880f600839b8/urllib3-1.22-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/63/cb/6965947c13a94236f6d4b8223e21beb4d576dc72e8130bd7880f600839b8/urllib3-1.22-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /vsphere/requirements.txt</p>
<p>Path to vulnerable library: /vsphere/requirements.txt,/vsphere,/sqlserver</p>
<p>
Dependency Hierarchy:
- :x: **urllib3-1.22-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/feemstr/integrations-core-6.0.0/commit/453e76e86cafcd5aed397f423a176cfb806a1bda">453e76e86cafcd5aed397f423a176cfb806a1bda</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An issue was discovered in urllib2 in Python 2.x through 2.7.16 and urllib in Python 3.x through 3.7.3. CRLF injection is possible if the attacker controls a url parameter, as demonstrated by the first argument to urllib.request.urlopen with \r\n (specifically in the query string after a ? character) followed by an HTTP header or a Redis command. This is fixed in: v2.7.17, v2.7.17rc1, v2.7.18, v2.7.18rc1; v3.5.10, v3.5.10rc1, v3.5.8, v3.5.8rc1, v3.5.8rc2, v3.5.9; v3.6.10, v3.6.10rc1, v3.6.11, v3.6.11rc1, v3.6.12, v3.6.9, v3.6.9rc1; v3.7.4, v3.7.4rc1, v3.7.4rc2, v3.7.5, v3.7.5rc1, v3.7.6, v3.7.6rc1, v3.7.7, v3.7.7rc1, v3.7.8, v3.7.8rc1, v3.7.9.
<p>Publish Date: 2019-03-13
<p>URL: <a href="https://www.mend.io/vulnerability-database/CVE-2019-9740">CVE-2019-9740</a></p>
</p>
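Until the interpreter or library is upgraded, the usual mitigation for this class of URL-based CRLF injection is to percent-encode any attacker-influenced URL component before building the request. A hedged sketch (the `safe_query_value` helper is hypothetical, not part of urllib or urllib3):

```python
# Mitigation sketch: percent-encode attacker-influenced query values so that
# CR/LF arrive as the literal bytes %0D%0A instead of terminating the header.
from urllib.parse import quote

def safe_query_value(value: str) -> str:
    return quote(value, safe="")

encoded = safe_query_value("q\r\nX-Injected: 1")
print(encoded)  # q%0D%0AX-Injected%3A%201
```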
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9740">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9740</a></p>
<p>Release Date: 2019-03-13</p>
<p>Fix Resolution: 1.24.3</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2019-11236</summary>
### Vulnerable Library - <b>urllib3-1.22-py2.py3-none-any.whl</b></p>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/63/cb/6965947c13a94236f6d4b8223e21beb4d576dc72e8130bd7880f600839b8/urllib3-1.22-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/63/cb/6965947c13a94236f6d4b8223e21beb4d576dc72e8130bd7880f600839b8/urllib3-1.22-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /vsphere/requirements.txt</p>
<p>Path to vulnerable library: /vsphere/requirements.txt,/vsphere,/sqlserver</p>
<p>
Dependency Hierarchy:
- :x: **urllib3-1.22-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/feemstr/integrations-core-6.0.0/commit/453e76e86cafcd5aed397f423a176cfb806a1bda">453e76e86cafcd5aed397f423a176cfb806a1bda</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In the urllib3 library through 1.24.1 for Python, CRLF injection is possible if the attacker controls the request parameter.
<p>Publish Date: 2019-04-15
<p>URL: <a href="https://www.mend.io/vulnerability-database/CVE-2019-11236">CVE-2019-11236</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-r64q-w8jr-g9qp">https://github.com/advisories/GHSA-r64q-w8jr-g9qp</a></p>
<p>Release Date: 2019-04-15</p>
<p>Fix Resolution: 1.24.3</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
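Since every "Suggested Fix" above is a version upgrade, raising the pin in the affected requirements.txt files to the highest listed fix resolution (1.26.5) remediates all of these CVEs at once. A minimal, hypothetical CI-style guard to confirm the bump took effect (`is_fixed` is illustrative, not part of any tool mentioned in this report):

```python
# Hedged sketch: compare the installed urllib3 version against 1.26.5, the
# highest "Fix Resolution" listed in this report, using simple tuple ordering.
def is_fixed(version: str, minimum: str = "1.26.5") -> bool:
    def parse(v: str):
        return tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(minimum)

print(is_fixed("1.22"))    # the vulnerable pin from this report -> False
print(is_fixed("1.26.5"))  # -> True
```

This single bump covers every CVE detailed above, since 1.26.5 is the maximum of the listed fix versions.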
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
34,795 | 6,377,515,978 | IssuesEvent | 2017-08-02 10:12:57 | apinf/platform | https://api.github.com/repos/apinf/platform | closed | Add OpenAPI designer iframe integration | Documentation Editor in progress | Wireframes available [here](https://github.com/apinf/platform/issues/1998).
Example OpenAPI designer in iframe with authtoken sending support available [here](https://dev.openapi.design/iframe.html). | 1.0 | Add OpenAPI designer iframe integration - Wireframes available [here](https://github.com/apinf/platform/issues/1998).
Example OpenAPI designer in iframe with authtoken sending support available [here](https://dev.openapi.design/iframe.html). | non_process | add openapi designer iframe integration wireframes available example openapi designer in iframe with authtoken sending support available | 0 |
18,971 | 24,947,533,869 | IssuesEvent | 2022-11-01 02:26:20 | solop-develop/frontend-core | https://api.github.com/repos/solop-develop/frontend-core | closed | [Bug Report] Abrir un proceso/reporte asociado sin campos genera error | bug (PRC) Processes (RPT) Reports (WIN) Windows | <!--
Note: In order to better solve your problem, please refer to the template to provide complete information, accurately describe the problem, and the incomplete information issue will be closed.
-->
## Bug report
#### Steps to reproduce
1. Iniciar sesión con el rol `System`.
2. Abrir la ventana `Ventana, Pestaña y Campo`.
3. Seleccionar la pestaña `Pestaña`.
4. Abrir el proceso asociado `Crear Campos`.
#### Screenshot or Gif(截图或动态图)
https://user-images.githubusercontent.com/20288327/199144794-1cce9345-6039-4ca1-999c-8eebd8b07f32.mp4
#### Additional context
Si el proceso/reporte asociado no tiene campos/parámetros, genera un error.
| 1.0 | [Bug Report] Abrir un proceso/reporte asociado sin campos genera error - <!--
Note: In order to better solve your problem, please refer to the template to provide complete information, accurately describe the problem, and the incomplete information issue will be closed.
-->
## Bug report
#### Steps to reproduce
1. Iniciar sesión con el rol `System`.
2. Abrir la ventana `Ventana, Pestaña y Campo`.
3. Seleccionar la pestaña `Pestaña`.
4. Abrir el proceso asociado `Crear Campos`.
#### Screenshot or Gif(截图或动态图)
https://user-images.githubusercontent.com/20288327/199144794-1cce9345-6039-4ca1-999c-8eebd8b07f32.mp4
#### Additional context
Si el proceso/reporte asociado no tiene campos/parámetros, genera un error.
| process | abrir un proceso reporte asociado sin campos genera error note in order to better solve your problem please refer to the template to provide complete information accurately describe the problem and the incomplete information issue will be closed bug report steps to reproduce iniciar sesión con el rol system abrir la ventana ventana pestaña y campo seleccionar la pestaña pestaña abrir el proceso asociado crear campos screenshot or gif(截图或动态图) additional context si el proceso reporte asociado no tiene campos parámetros genera un error | 1 |
21,063 | 28,012,213,867 | IssuesEvent | 2023-03-27 19:31:04 | MicrosoftDocs/azure-devops-docs | https://api.github.com/repos/MicrosoftDocs/azure-devops-docs | closed | cron format is incorrect - should use single quotes | doc-bug Pri1 azure-devops-pipelines/svc azure-devops-pipelines-process/subsvc | cron format is incorrect, there is a big in Ado that when double quotes are used for cron times pipeline does not run by cron or run incorrectly, user should use single quotes
```yaml
cron: '0 12 * * 0'
```
microsoft should either fix this bug or at least update documentation
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 2ea2c851-bd1e-cddc-b4d0-e9f4112b8565
* Version Independent ID: 07c23fdd-14b5-985b-1c63-3f26f3a216ad
* Content: [Configure schedules to run pipelines - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/scheduled-triggers?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/scheduled-triggers.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/scheduled-triggers.md)
* Service: **azure-devops-pipelines**
* Sub-service: **azure-devops-pipelines-process**
* GitHub Login: @steved0x
* Microsoft Alias: **sdanie** | 1.0 | cron format is incorrect - should use single quotes - cron format is incorrect, there is a big in Ado that when double quotes are used for cron times pipeline does not run by cron or run incorrectly, user should use single quotes
```yaml
cron: '0 12 * * 0'
```
microsoft should either fix this bug or at least update documentation
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 2ea2c851-bd1e-cddc-b4d0-e9f4112b8565
* Version Independent ID: 07c23fdd-14b5-985b-1c63-3f26f3a216ad
* Content: [Configure schedules to run pipelines - Azure Pipelines](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/scheduled-triggers?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/scheduled-triggers.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/scheduled-triggers.md)
* Service: **azure-devops-pipelines**
* Sub-service: **azure-devops-pipelines-process**
* GitHub Login: @steved0x
* Microsoft Alias: **sdanie** | process | cron format is incorrect should use single quotes cron format is incorrect there is a big in ado that when double quotes are used for cron times pipeline does not run by cron or run incorrectly user should use single quotes yaml cron microsoft should either fix this bug or at least update documentation document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id cddc version independent id content content source service azure devops pipelines sub service azure devops pipelines process github login microsoft alias sdanie | 1 |
3,581 | 6,620,667,103 | IssuesEvent | 2017-09-21 16:16:55 | HelpyTeam/HelpyDocuments | https://api.github.com/repos/HelpyTeam/HelpyDocuments | closed | [Report 3] Create Conceptual Diagram Report | In Process | # Overview
As the result of team meeting, create a Conceptual Diagram with Data Dictionary.
# Targets
- [ ] Draw a diagram with Crow Foot Notation Symbols.
- [ ] Diagram give out the main concept of system entyties.
- [ ] Give out description for each entity in Data Dictionary. | 1.0 | [Report 3] Create Conceptual Diagram Report - # Overview
As the result of team meeting, create a Conceptual Diagram with Data Dictionary.
# Targets
- [ ] Draw a diagram with Crow Foot Notation Symbols.
- [ ] Diagram give out the main concept of system entyties.
- [ ] Give out description for each entity in Data Dictionary. | process | create conceptual diagram report overview as the result of team meeting create a conceptual diagram with data dictionary targets draw a diagram with crow foot notation symbols diagram give out the main concept of system entyties give out description for each entity in data dictionary | 1 |
19,052 | 25,067,457,374 | IssuesEvent | 2022-11-07 09:29:08 | ESMValGroup/ESMValCore | https://api.github.com/repos/ESMValGroup/ESMValCore | closed | Preprocessor feature request: Add a way to specify Dataset start_month and end_month | enhancement preprocessor | Hi,
I'm attempting to look at the South Hemisphere Summer seasonal mean, (DJF), so I've used the `extract_season` preprocessor. If I run the analysis from January to December, then the first seasonal_year includes only January and February, and the final seasonal_year only includes December. These data are not comparable, and shows a huge bias at the first and last points:

. To remove the erroneous months from the analysis, I've added the `start_month` and `end_month` fields to the `dataset` dictionary.
I'm using the following dataset:
- {dataset: HadGEM2-CC, project: CMIP5, exp: historical, ensemble: r1i1p1, start_year: 1989, end_year: 2004, start_month: 3, end_month: 10}
However, the `start_month` and `end_month` fields are not taken into account by the preprocessor, and the preprocessed data runs from January 1889 to December 2004.
Is this a bug? Can someone reproduce it? what is the problem? Cheers! | 1.0 | Preprocessor feature request: Add a way to specify Dataset start_month and end_month - Hi,
I'm attempting to look at the South Hemisphere Summer seasonal mean, (DJF), so I've used the `extract_season` preprocessor. If I run the analysis from January to December, then the first seasonal_year includes only January and February, and the final seasonal_year only includes December. These data are not comparable, and shows a huge bias at the first and last points:

. To remove the erroneous months from the analysis, I've added the `start_month` and `end_month` fields to the `dataset` dictionary.
I'm using the following dataset:
- {dataset: HadGEM2-CC, project: CMIP5, exp: historical, ensemble: r1i1p1, start_year: 1989, end_year: 2004, start_month: 3, end_month: 10}
However, the `start_month` and `end_month` fields are not taken into account by the preprocessor, and the preprocessed data runs from January 1889 to December 2004.
Is this a bug? Can someone reproduce it? what is the problem? Cheers! | process | preprocessor feature request add a way to specify dataset start month and end month hi i m attempting to look at the south hemisphere summer seasonal mean djf so i ve used the extract season preprocessor if i run the analysis from january to december then the first seasonal year includes only january and february and the final seasonal year only includes december these data are not comparable and shows a huge bias at the first and last points to remove the erroneous months from the analysis i ve added the start month and end month fields to the dataset dictionary i m using the following dataset dataset cc project exp historical ensemble start year end year start month end month however the start month and end month fields are not taken into account by the preprocessor and the preprocessed data runs from january to december is this a bug can someone reproduce it what is the problem cheers | 1 |
20,994 | 27,858,118,373 | IssuesEvent | 2023-03-21 02:00:12 | lizhihao6/get-daily-arxiv-noti | https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti | opened | New submissions for Tue, 21 Mar 23 | event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB | ## Keyword: events
### PseudoBound: Limiting the anomaly reconstruction capability of one-class classifiers using pseudo anomalies
- **Authors:** Marcella Astrid, Muhammad Zaigham Zaheer, Seung-Ik Lee
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.10704
- **Pdf link:** https://arxiv.org/pdf/2303.10704
- **Abstract**
Due to the rarity of anomalous events, video anomaly detection is typically approached as a one-class classification (OCC) problem. Typically in OCC, an autoencoder (AE) is trained to reconstruct the normal-only training data with the expectation that, at test time, it will poorly reconstruct the anomalous data. However, previous studies have shown that, even trained with only normal data, AEs can often reconstruct anomalous data as well, resulting in decreased performance. To mitigate this problem, we propose to limit the anomaly reconstruction capability of AEs by incorporating pseudo anomalies during the training of an AE. Extensive experiments using five types of pseudo anomalies show the robustness of our training mechanism towards any kind of pseudo anomaly. Moreover, we demonstrate the effectiveness of our proposed pseudo-anomaly-based training approach against several existing state-of-the-art (SOTA) methods on three benchmark video anomaly datasets, outperforming all the other reconstruction-based approaches on two datasets and showing the second-best performance on the third.
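The reconstruction-error criterion at the heart of such OCC methods can be sketched generically. The `reconstruct` stand-in below mimics a trained autoencoder by collapsing inputs to the normal-data mean; the threshold and all names are illustrative, not taken from the paper:

```python
def anomaly_score(x, reconstruct):
    """Mean squared reconstruction error: high for inputs the model cannot rebuild."""
    x_hat = reconstruct(x)
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def is_anomalous(x, reconstruct, threshold):
    """Flag an input whose reconstruction error exceeds the chosen threshold."""
    return anomaly_score(x, reconstruct) > threshold

# Stand-in "autoencoder": maps every input to the normal-data mean,
# mimicking an AE trained only on normal samples clustered around that mean.
normal_mean = [1.0, 1.0, 1.0]
reconstruct = lambda x: normal_mean
```

An anomaly far from the normal cluster yields a large error and is flagged, while normal-like inputs pass; limiting reconstruction of anomalies (the paper's goal) widens that gap.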
### k-SALSA: k-anonymous synthetic averaging of retinal images via local style alignment
- **Authors:** Minkyu Jeon, Hyeonjin Park, Hyunwoo J. Kim, Michael Morley, Hyunghoon Cho
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Cryptography and Security (cs.CR)
- **Arxiv link:** https://arxiv.org/abs/2303.10824
- **Pdf link:** https://arxiv.org/pdf/2303.10824
- **Abstract**
The application of modern machine learning to retinal image analyses offers valuable insights into a broad range of human health conditions beyond ophthalmic diseases. Additionally, data sharing is key to fully realizing the potential of machine learning models by providing a rich and diverse collection of training data. However, the personally-identifying nature of retinal images, encompassing the unique vascular structure of each individual, often prevents this data from being shared openly. While prior works have explored image de-identification strategies based on synthetic averaging of images in other domains (e.g. facial images), existing techniques face difficulty in preserving both privacy and clinical utility in retinal images, as we demonstrate in our work. We therefore introduce k-SALSA, a generative adversarial network (GAN)-based framework for synthesizing retinal fundus images that summarize a given private dataset while satisfying the privacy notion of k-anonymity. k-SALSA brings together state-of-the-art techniques for training and inverting GANs to achieve practical performance on retinal images. Furthermore, k-SALSA leverages a new technique, called local style alignment, to generate a synthetic average that maximizes the retention of fine-grain visual patterns in the source images, thus improving the clinical utility of the generated images. On two benchmark datasets of diabetic retinopathy (EyePACS and APTOS), we demonstrate our improvement upon existing methods with respect to image fidelity, classification performance, and mitigation of membership inference attacks. Our work represents a step toward broader sharing of retinal images for scientific collaboration. Code is available at https://github.com/hcholab/k-salsa.
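The k-anonymity-by-averaging idea can be illustrated with a minimal sketch that has no GAN or style alignment: samples are assumed pre-grouped by similarity, and each record is replaced by its group average so that no output traces back to fewer than k sources. All names and the grouping strategy are illustrative:

```python
def k_anonymous_averages(samples, k):
    """Replace each sample (a list of features) with the average of its group
    of >= k samples, so no output reflects fewer than k source records."""
    n = len(samples)
    groups = [list(range(i, i + k)) for i in range(0, n - n % k, k)]
    if n % k:  # fold the remainder into the last group so every group keeps >= k members
        groups[-1].extend(range(n - n % k, n))
    out = [None] * n
    for g in groups:
        avg = [sum(samples[i][d] for i in g) / len(g) for d in range(len(samples[0]))]
        for i in g:
            out[i] = avg
    return out
```

k-SALSA's contribution is doing this averaging in a GAN latent space with local style alignment so the synthetic averages stay clinically useful; the sketch only shows the anonymity mechanics.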
### Leapfrog Diffusion Model for Stochastic Trajectory Prediction
- **Authors:** Weibo Mao, Chenxin Xu, Qi Zhu, Siheng Chen, Yanfeng Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.10895
- **Pdf link:** https://arxiv.org/pdf/2303.10895
- **Abstract**
To model the indeterminacy of human behaviors, stochastic trajectory prediction requires a sophisticated multi-modal distribution of future trajectories. Emerging diffusion models have revealed their tremendous representation capacities in numerous generation tasks, showing potential for stochastic trajectory prediction. However, expensive time consumption prevents diffusion models from real-time prediction, since a large number of denoising steps are required to assure sufficient representation ability. To resolve the dilemma, we present LEapfrog Diffusion model (LED), a novel diffusion-based trajectory prediction model, which provides real-time, precise, and diverse predictions. The core of the proposed LED is to leverage a trainable leapfrog initializer to directly learn an expressive multi-modal distribution of future trajectories, which skips a large number of denoising steps, significantly accelerating inference speed. Moreover, the leapfrog initializer is trained to appropriately allocate correlated samples to provide a diversity of predicted future trajectories, significantly improving prediction performances. Extensive experiments on four real-world datasets, including NBA/NFL/SDD/ETH-UCY, show that LED consistently improves performance and achieves 23.7%/21.9% ADE/FDE improvement on NFL. The proposed LED also speeds up the inference 19.3/30.8/24.3/25.1 times compared to the standard diffusion model on NBA/NFL/SDD/ETH-UCY, satisfying real-time inference needs. Code is available at https://github.com/MediaBrain-SJTU/LED.
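The speed argument can be made concrete with a toy sketch: a standard reverse-diffusion loop calls the denoiser once per step, while an initializer that jumps straight to a near-final estimate leaves only a few refinement calls. Everything below (the call counter, the step counts, the placeholder denoiser) is illustrative and not the paper's implementation:

```python
calls = 0

def denoise_step(x, t):
    """Toy denoiser: one call per diffusion step; the body is a placeholder."""
    global calls
    calls += 1
    return x * 0.99  # stand-in refinement

def reverse_diffusion(x, n_steps):
    """Standard reverse chain: one denoiser call per step, from t = n_steps-1 down to 0."""
    for t in reversed(range(n_steps)):
        x = denoise_step(x, t)
    return x

# Full chain: 100 denoiser calls.
calls = 0
reverse_diffusion(1.0, 100)
full_calls = calls

# Leapfrog-style: a learned initializer replaces most of the chain,
# leaving only a handful of refinement steps.
calls = 0
x0_estimate = 0.5  # stand-in for the trainable initializer's output
reverse_diffusion(x0_estimate, 5)
leapfrog_calls = calls
```

The 20x reduction in denoiser calls here mirrors the order-of-magnitude inference speedups the paper reports, though the real gain depends on the learned initializer's quality.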
### Learning Optical Flow from Event Camera with Rendered Dataset
- **Authors:** Xinglong Luo, Kunming Luo, Ao Luo, Zhengning Wang, Ping Tan, Shuaicheng Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11011
- **Pdf link:** https://arxiv.org/pdf/2303.11011
- **Abstract**
We study the problem of estimating optical flow from event cameras. One important issue is how to build a high-quality event-flow dataset with accurate event values and flow labels. Previous datasets are created by either capturing real scenes with event cameras or synthesizing from images with pasted foreground objects. The former case can produce real event values but with calculated flow labels, which are sparse and inaccurate. The latter case can generate dense flow labels, but the interpolated events are prone to errors. In this work, we propose to render a physically correct event-flow dataset using computer graphics models. In particular, we first create indoor and outdoor 3D scenes in Blender with rich scene content variations. Second, diverse camera motions are included for the virtual capturing, producing images and accurate flow labels. Third, we render high-framerate videos between images for accurate events. The rendered dataset can adjust the density of events, based on which we further introduce an adaptive density module (ADM). Experiments show that our proposed dataset can facilitate event-flow learning, whereas previous approaches, when trained on our dataset, improve their performance consistently by a relatively large margin. In addition, event-flow pipelines equipped with our ADM can further improve performance.
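Generating events from rendered high-framerate frames typically follows the standard event-camera model: a pixel fires whenever its log-intensity drifts past a contrast threshold C from the reference level set by its last event. A minimal per-pixel sketch (the threshold, data layout, and names are illustrative; this is the textbook model, not the paper's renderer):

```python
import math

def frames_to_events(frames, timestamps, C=0.2):
    """frames: list of equally-sized lists of positive pixel intensities.
    Returns (timestamp, pixel_index, polarity) tuples under the
    log-intensity contrast-threshold event model."""
    ref = [math.log(v) for v in frames[0]]  # per-pixel log-intensity reference
    events = []
    for t, frame in zip(timestamps[1:], frames[1:]):
        for i, v in enumerate(frame):
            logv = math.log(v)
            while logv - ref[i] >= C:   # brightness rose past the threshold
                ref[i] += C
                events.append((t, i, +1))
            while ref[i] - logv >= C:   # brightness fell past the threshold
                ref[i] -= C
                events.append((t, i, -1))
    return events
```

Raising the frame rate of the rendered video makes the per-frame intensity steps smaller, so event timestamps become more accurate; the paper's adjustable event density corresponds roughly to varying C.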
### Localizing Object-level Shape Variations with Text-to-Image Diffusion Models
- **Authors:** Or Patashnik, Daniel Garibi, Idan Azuri, Hadar Averbuch-Elor, Daniel Cohen-Or
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2303.11306
- **Pdf link:** https://arxiv.org/pdf/2303.11306
- **Abstract**
Text-to-image models give rise to workflows which often begin with an exploration step, where users sift through a large collection of generated images. The global nature of the text-to-image generation process prevents users from narrowing their exploration to a particular object in the image. In this paper, we present a technique to generate a collection of images that depicts variations in the shape of a specific object, enabling an object-level shape exploration process. Creating plausible variations is challenging as it requires control over the shape of the generated object while respecting its semantics. A particular challenge when generating object variations is accurately localizing the manipulation applied over the object's shape. We introduce a prompt-mixing technique that switches between prompts along the denoising process to attain a variety of shape choices. To localize the image-space operation, we present two techniques that use the self-attention layers in conjunction with the cross-attention layers. Moreover, we show that these localization techniques are general and effective beyond the scope of generating object variations. Extensive results and comparisons demonstrate the effectiveness of our method in generating object variations, and the competence of our localization techniques.
## Keyword: event camera
### ANMS: Asynchronous Non-Maximum Suppression in Event Stream
- **Authors:** Qianang Zhou, JunLin Xiong, Youfu Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.10575
- **Pdf link:** https://arxiv.org/pdf/2303.10575
- **Abstract**
Non-maximum suppression (NMS) is widely used in frame-based tasks as an essential post-processing algorithm. However, event-based NMS either has high computational complexity or leads to frequent discontinuities. As a result, the performance of event-based corner detectors is limited. This paper proposes a general-purpose asynchronous non-maximum suppression pipeline (ANMS) and applies it to corner event detection. The proposed pipeline extracts a fine feature stream from the output of the original detectors and adapts to the speed of motion. The ANMS runs directly on the asynchronous event stream with extremely low latency, which hardly affects the speed of the original detectors. Additionally, we evaluate the DAVIS-based ground-truth labeling method to fill the gap between frames and events. Evaluation on a public dataset indicates that the proposed ANMS pipeline significantly improves the performance of three classical asynchronous detectors with negligible latency. More importantly, the proposed ANMS framework is a natural extension of NMS, which is applicable to other asynchronous scoring tasks for event cameras.
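For contrast with the asynchronous variant, classical frame-based NMS is a greedy loop over score-ranked detections that discards any candidate overlapping an already-kept one. A minimal box-IoU version (the threshold and data are illustrative):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping rivals, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep
```

The batch sort over all detections is exactly what an asynchronous, per-event pipeline like ANMS has to avoid, since events arrive one at a time with microsecond latency budgets.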
### Learning Optical Flow from Event Camera with Rendered Dataset
- **Authors:** Xinglong Luo, Kunming Luo, Ao Luo, Zhengning Wang, Ping Tan, Shuaicheng Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11011
- **Pdf link:** https://arxiv.org/pdf/2303.11011
- **Abstract**
We study the problem of estimating optical flow from event cameras. One important issue is how to build a high-quality event-flow dataset with accurate event values and flow labels. Previous datasets are created by either capturing real scenes with event cameras or synthesizing from images with pasted foreground objects. The former case can produce real event values but with calculated flow labels, which are sparse and inaccurate. The latter case can generate dense flow labels, but the interpolated events are prone to errors. In this work, we propose to render a physically correct event-flow dataset using computer graphics models. In particular, we first create indoor and outdoor 3D scenes in Blender with rich scene content variations. Second, diverse camera motions are included for the virtual capturing, producing images and accurate flow labels. Third, we render high-framerate videos between images for accurate events. The rendered dataset can adjust the density of events, based on which we further introduce an adaptive density module (ADM). Experiments show that our proposed dataset can facilitate event-flow learning, whereas previous approaches, when trained on our dataset, improve their performance consistently by a relatively large margin. In addition, event-flow pipelines equipped with our ADM can further improve performance.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### HGIB: Prognosis for Alzheimer's Disease via Hypergraph Information Bottleneck
- **Authors:** Shujun Wang, Angelica I Aviles-Rivero, Zoe Kourtzi, Carola-Bibiane Schönlieb
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.10390
- **Pdf link:** https://arxiv.org/pdf/2303.10390
- **Abstract**
Alzheimer's disease prognosis is critical for patients with early Mild Cognitive Impairment (MCI), enabling timely treatment that improves the patient's quality of life. Whilst existing prognosis techniques demonstrate potential results, they are highly limited by their reliance on a single modality. Most importantly, they fail to consider a key element for prognosis: not all features extracted at the current moment may contribute to the prognosis prediction several years later. To address the current drawbacks of the literature, we propose a novel hypergraph framework based on an information bottleneck strategy (HGIB). Firstly, our framework seeks to discriminate irrelevant information and, therefore, solely focus on harmonising relevant information for future MCI conversion prediction (e.g., two years later). Secondly, our model simultaneously accounts for multi-modal data based on imaging and non-imaging modalities. HGIB uses a hypergraph structure to represent the multi-modality data and accounts for various data modality types. Thirdly, the key of our model is a new optimisation scheme, based on modelling the principle of the information bottleneck into loss functions that can be integrated into our hypergraph neural network. We demonstrate, through extensive experiments on ADNI, that our proposed HGIB framework outperforms existing state-of-the-art hypergraph neural networks for Alzheimer's disease prognosis. We showcase our model even under fewer labels. Finally, we further support the robustness and generalisation capabilities of our framework under both topological and feature perturbations.
### Multi-modal reward for visual relationships-based image captioning
- **Authors:** Ali Abedi, Hossein Karshenas, Peyman Adibi
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2303.10766
- **Pdf link:** https://arxiv.org/pdf/2303.10766
- **Abstract**
Deep neural networks have achieved promising results in automatic image captioning due to their effective representation learning and context-based content generation capabilities. As a prominent type of deep features used in many recent image captioning methods, the well-known bottom-up features provide a detailed representation of the different objects of the image in comparison with the feature maps directly extracted from the raw image. However, the lack of high-level semantic information about the relationships between these objects is an important drawback of bottom-up features, despite their expensive and resource-demanding extraction procedure. To take advantage of visual relationships in caption generation, this paper proposes a deep neural network architecture for image captioning based on fusing the visual relationship information extracted from an image's scene graph with the spatial feature maps of the image. A multi-modal reward function is then introduced for deep reinforcement learning of the proposed network, using a combination of language and vision similarities in a common embedding space. The results of extensive experimentation on the MSCOCO dataset show the effectiveness of using visual relationships in the proposed captioning method. Moreover, the results clearly indicate that the proposed multi-modal reward in deep reinforcement learning leads to better model optimization, outperforming several state-of-the-art image captioning algorithms, while using light and easy-to-extract image features. A detailed experimental study of the components constituting the proposed method is also presented.
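A reward that blends vision and language similarities in a shared embedding space reduces, at its core, to weighted cosine similarity. The embeddings, weighting, and names below are illustrative stand-ins, not the paper's trained encoders:

```python
import math

def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def multimodal_reward(caption_emb, image_emb, ref_caption_emb, w=0.5):
    """Blend vision similarity (generated caption vs. image) with language
    similarity (generated caption vs. reference caption), both measured in
    a common embedding space."""
    return w * cosine(caption_emb, image_emb) + (1 - w) * cosine(caption_emb, ref_caption_emb)
```

In a reinforcement-learning captioner this scalar would score each sampled caption; the blend weight w trades off grounding in the image against fluency relative to reference captions.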
### Attribute-preserving Face Dataset Anonymization via Latent Code Optimization
- **Authors:** Simone Barattin, Christos Tzelepis, Ioannis Patras, Nicu Sebe
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11296
- **Pdf link:** https://arxiv.org/pdf/2303.11296
- **Abstract**
This work addresses the problem of anonymizing the identity of faces in a dataset of images, such that the privacy of those depicted is not violated, while at the same time the dataset is useful for downstream task such as for training machine learning models. To the best of our knowledge, we are the first to explicitly address this issue and deal with two major drawbacks of the existing state-of-the-art approaches, namely that they (i) require the costly training of additional, purpose-trained neural networks, and/or (ii) fail to retain the facial attributes of the original images in the anonymized counterparts, the preservation of which is of paramount importance for their use in downstream tasks. We accordingly present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN. By optimizing the latent codes directly, we ensure both that the identity is of a desired distance away from the original (with an identity obfuscation loss), whilst preserving the facial attributes (using a novel feature-matching loss in FaRL's deep feature space). We demonstrate through a series of both qualitative and quantitative experiments that our method is capable of anonymizing the identity of the images whilst -- crucially -- better-preserving the facial attributes. We make the code and the pre-trained models publicly available at: https://github.com/chi0tzp/FALCO.
## Keyword: ISP
### Towards Diverse Binary Segmentation via A Simple yet General Gated Network
- **Authors:** Xiaoqi Zhao, Youwei Pang, Lihe Zhang, Huchuan Lu, Lei Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.10396
- **Pdf link:** https://arxiv.org/pdf/2303.10396
- **Abstract**
In many binary segmentation tasks, most CNN-based methods use a U-shape encoder-decoder network as their basic structure. They ignore two key problems when the encoder exchanges information with the decoder: one is the lack of an interference-control mechanism between them; the other is the failure to account for the disparity in contributions from different encoder levels. In this work, we propose a simple yet general gated network (GateNet) to tackle both at once. With the help of multi-level gate units, the valuable context information from the encoder can be selectively transmitted to the decoder. In addition, we design a gated dual-branch structure to build cooperation among the features of different levels and improve the discrimination ability of the network. Furthermore, we introduce a "Fold" operation to improve the atrous convolution and form a novel folded atrous convolution, which can be flexibly embedded in ASPP or DenseASPP to accurately localize foreground objects of various scales. GateNet can be easily generalized to many binary segmentation tasks, including general and specific object segmentation and multi-modal segmentation. Without bells and whistles, our network consistently performs favorably against the state-of-the-art methods under 10 metrics on 33 datasets across 10 binary segmentation tasks.
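The gating idea (letting the decoder receive only a learned fraction of each encoder level's features) reduces to an elementwise sigmoid gate. A minimal sketch with fixed rather than learned gate logits, all names illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gated_fusion(encoder_feats, gate_logits):
    """Scale each encoder level's feature vector by a sigmoid gate in (0, 1)
    before it reaches the decoder, so unhelpful levels are attenuated and
    helpful ones pass through nearly untouched."""
    return [
        [sigmoid(g) * v for v in feats]
        for feats, g in zip(encoder_feats, gate_logits)
    ]
```

In a real network the logits would come from small learned convolutions per level; here a logit of 0 halves a level's contribution while a large positive logit passes it almost unchanged, which is the interference-control behaviour the abstract describes.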
### Benchmarking Robustness of 3D Object Detection to Common Corruptions in Autonomous Driving
- **Authors:** Yinpeng Dong, Caixin Kang, Jinlai Zhang, Zijian Zhu, Yikai Wang, Xiao Yang, Hang Su, Xingxing Wei, Jun Zhu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)
- **Arxiv link:** https://arxiv.org/abs/2303.11040
- **Pdf link:** https://arxiv.org/pdf/2303.11040
- **Abstract**
3D object detection is an important task in autonomous driving to perceive the surroundings. Despite their excellent performance, the existing 3D detectors lack robustness to real-world corruptions caused by adverse weather, sensor noise, etc., provoking concerns about the safety and reliability of autonomous driving systems. To comprehensively and rigorously benchmark the corruption robustness of 3D detectors, in this paper we design 27 types of common corruptions for both LiDAR and camera inputs, considering real-world driving scenarios. By synthesizing these corruptions on public datasets, we establish three corruption robustness benchmarks -- KITTI-C, nuScenes-C, and Waymo-C. Then, we conduct large-scale experiments on 24 diverse 3D object detection models to evaluate their corruption robustness. Based on the evaluation results, we draw several important findings, including: 1) motion-level corruptions are the most threatening ones, leading to a significant performance drop across all models; 2) LiDAR-camera fusion models demonstrate better robustness; 3) camera-only models are extremely vulnerable to image corruptions, showing the indispensability of LiDAR point clouds. We release the benchmarks and codes at https://github.com/kkkcx/3D_Corruptions_AD. We hope that our benchmarks and findings can provide insights for future research on developing robust 3D object detection models.
### HiFace: High-Fidelity 3D Face Reconstruction by Learning Static and Dynamic Details
- **Authors:** Zenghao Chai, Tianke Zhang, Tianyu He, Xu Tan, Tadas Baltrusaitis, HsiangTao Wu, Runnan Li, Sheng Zhao, Chun Yuan, Jiang Bian
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2303.11225
- **Pdf link:** https://arxiv.org/pdf/2303.11225
- **Abstract**
3D Morphable Models (3DMMs) demonstrate great potential for reconstructing faithful and animatable 3D facial surfaces from a single image. The facial surface is influenced by the coarse shape, as well as the static detail (e.g., person-specific appearance) and dynamic detail (e.g., expression-driven wrinkles). Previous work struggles to decouple the static and dynamic details through image-level supervision, leading to reconstructions that are not realistic. In this paper, we aim at high-fidelity 3D face reconstruction and propose HiFace to explicitly model the static and dynamic details. Specifically, the static detail is modeled as the linear combination of a displacement basis, while the dynamic detail is modeled as the linear interpolation of two displacement maps with polarized expressions. We exploit several loss functions to jointly learn the coarse shape and fine details with both synthetic and real-world datasets, which enable HiFace to reconstruct high-fidelity 3D shapes with animatable details. Extensive quantitative and qualitative experiments demonstrate that HiFace presents state-of-the-art reconstruction quality and faithfully recovers both the static and dynamic details. Our project page can be found at https://project-hiface.github.io
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### VIMI: Vehicle-Infrastructure Multi-view Intermediate Fusion for Camera-based 3D Object Detection
- **Authors:** Zhe Wang, Siqi Fan, Xiaoliang Huo, Tongda Xu, Yan Wang, Jingjing Liu, Yilun Chen, Ya-Qin Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.10975
- **Pdf link:** https://arxiv.org/pdf/2303.10975
- **Abstract**
In autonomous driving, Vehicle-Infrastructure Cooperative 3D Object Detection (VIC3D) makes use of multi-view cameras from both vehicles and traffic infrastructure, providing a global vantage point with rich semantic context of road conditions beyond a single vehicle viewpoint. Two major challenges prevail in VIC3D: 1) inherent calibration noise when fusing multi-view images, caused by time asynchrony across cameras; 2) information loss when projecting 2D features into 3D space. To address these issues, we propose a novel 3D object detection framework, Vehicles-Infrastructure Multi-view Intermediate fusion (VIMI). First, to fully exploit the holistic perspectives from both vehicles and infrastructure, we propose a Multi-scale Cross Attention (MCA) module that fuses infrastructure and vehicle features on selective multi-scales to correct the calibration noise introduced by camera asynchrony. Then, we design a Camera-aware Channel Masking (CCM) module that uses camera parameters as priors to augment the fused features. We further introduce a Feature Compression (FC) module with channel and spatial compression blocks to reduce the size of transmitted features for enhanced efficiency. Experiments show that VIMI achieves 15.61% overall AP_3D and 21.44% AP_BEV on the new VIC3D dataset, DAIR-V2X-C, significantly outperforming state-of-the-art early fusion and late fusion methods with comparable transmission cost.
## Keyword: RAW
### HGIB: Prognosis for Alzheimer's Disease via Hypergraph Information Bottleneck
- **Authors:** Shujun Wang, Angelica I Aviles-Rivero, Zoe Kourtzi, Carola-Bibiane Schönlieb
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.10390
- **Pdf link:** https://arxiv.org/pdf/2303.10390
- **Abstract**
Alzheimer's disease prognosis is critical for early Mild Cognitive Impairment patients for timely treatment to improve the patient's quality of life. Whilst existing prognosis techniques demonstrate potential results, they are highly limited in terms of using a single modality. Most importantly, they fail in considering a key element for prognosis: not all features extracted at the current moment may contribute to the prognosis prediction several years later. To address the current drawbacks of the literature, we propose a novel hypergraph framework based on an information bottleneck strategy (HGIB). Firstly, our framework seeks to discriminate irrelevant information, and therefore, solely focus on harmonising relevant information for future MCI conversion prediction (e.g., two years later). Secondly, our model simultaneously accounts for multi-modal data based on imaging and non-imaging modalities. HGIB uses a hypergraph structure to represent the multi-modality data and accounts for various data modality types. Thirdly, the key of our model is based on a new optimisation scheme. It is based on modelling the principle of information bottleneck into loss functions that can be integrated into our hypergraph neural network. We demonstrate, through extensive experiments on ADNI, that our proposed HGIB framework outperforms existing state-of-the-art hypergraph neural networks for Alzheimer's disease prognosis. We showcase our model even under fewer labels. Finally, we further support the robustness and generalisation capabilities of our framework under both topological and feature perturbations.
### Exploring Expression-related Self-supervised Learning for Affective Behaviour Analysis
- **Authors:** Fanglei Xue, Yifan Sun, Yi Yang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.10511
- **Pdf link:** https://arxiv.org/pdf/2303.10511
- **Abstract**
This paper explores an expression-related self-supervised learning (SSL) method (ContraWarping) to perform expression classification in the 5th Affective Behavior Analysis in-the-wild (ABAW) competition. Affective datasets are expensive to annotate, and SSL methods could learn from large-scale unlabeled data, which is more suitable for this task. By evaluating on the Aff-Wild2 dataset, we demonstrate that ContraWarping outperforms most existing supervised methods and shows great application potential in the affective analysis area. Codes will be released on: https://github.com/youqingxiaozhua/ABAW5.
### Vehicle-Infrastructure Cooperative 3D Object Detection via Feature Flow Prediction
- **Authors:** Haibao Yu, Yingjuan Tang, Enze Xie, Jilei Mao, Jirui Yuan, Ping Luo, Zaiqing Nie
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.10552
- **Pdf link:** https://arxiv.org/pdf/2303.10552
- **Abstract**
Cooperatively utilizing both ego-vehicle and infrastructure sensor data can significantly enhance autonomous driving perception abilities. However, temporal asynchrony and limited wireless communication in traffic environments can lead to fusion misalignment and impact detection performance. This paper proposes Feature Flow Net (FFNet), a novel cooperative detection framework that uses a feature flow prediction module to address these issues in vehicle-infrastructure cooperative 3D object detection. Rather than transmitting feature maps extracted from still images, FFNet transmits feature flow, which leverages the temporal coherence of sequential infrastructure frames to predict future features and compensate for asynchrony. Additionally, we introduce a self-supervised approach to enable FFNet to generate feature flow with feature prediction ability. Experimental results demonstrate that our proposed method outperforms existing cooperative detection methods while requiring no more than 1/10 of the transmission cost of raw data on the DAIR-V2X dataset when temporal asynchrony exceeds 200 ms. The code is available at https://github.com/haibao-yu/FFNet-VIC3D.
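The core compensation idea can be sketched with a linear stand-in for the learned flow predictor (an illustrative assumption; FFNet learns this prediction self-supervised rather than extrapolating linearly):

```python
import numpy as np

def extrapolate_features(feat_prev, feat_curr, t_prev, t_curr, t_target):
    """Linearly extrapolate a feature map to a future timestamp.

    A toy stand-in for learned feature-flow prediction: estimate the
    per-element rate of change between two past infrastructure frames
    and advance it to the (asynchronous) target time.
    """
    rate = (feat_curr - feat_prev) / (t_curr - t_prev)  # change per ms
    return feat_curr + rate * (t_target - t_curr)

f0 = np.zeros((4, 4))   # infrastructure feature map at t = 0 ms
f1 = np.ones((4, 4))    # infrastructure feature map at t = 100 ms
f_pred = extrapolate_features(f0, f1, 0.0, 100.0, 300.0)  # bridge 200 ms lag
```

Transmitting the pair (current features, estimated rate) instead of raw frames is what keeps the communication cost low in this scheme.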
### SECAD-Net: Self-Supervised CAD Reconstruction by Learning Sketch-Extrude Operations
- **Authors:** Pu Li, Jianwei Guo, Xiaopeng Zhang, Dong-ming Yan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2303.10613
- **Pdf link:** https://arxiv.org/pdf/2303.10613
- **Abstract**
Reverse engineering CAD models from raw geometry is a classic but strenuous research problem. Previous learning-based methods rely heavily on labels due to the supervised design patterns or reconstruct CAD shapes that are not easily editable. In this work, we introduce SECAD-Net, an end-to-end neural network aimed at reconstructing compact and easy-to-edit CAD models in a self-supervised manner. Drawing inspiration from the modeling language that is most commonly used in modern CAD software, we propose to learn 2D sketches and 3D extrusion parameters from raw shapes, from which a set of extrusion cylinders can be generated by extruding each sketch from a 2D plane into a 3D body. By incorporating the Boolean operation (i.e., union), these cylinders can be combined to closely approximate the target geometry. We advocate the use of implicit fields for sketch representation, which allows for creating CAD variations by interpolating latent codes in the sketch latent space. Extensive experiments on both ABC and Fusion 360 datasets demonstrate the effectiveness of our method, and show superiority over state-of-the-art alternatives including the closely related method for supervised CAD reconstruction. We further apply our approach to CAD editing and single-view CAD reconstruction. The code is released at https://github.com/BunnySoCrazy/SECAD-Net.
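The geometric backbone the abstract describes, representing each extrusion cylinder as an implicit field and combining cylinders with a Boolean union, can be sketched with classic signed-distance functions (a hand-written stand-in; SECAD-Net learns the 2D sketch fields instead of using an analytic disk):

```python
import numpy as np

def extrusion_sdf(p, radius, half_height):
    """Signed distance to the solid obtained by extruding a 2D disk of
    the given radius along z -- a toy 'sketch-extrude' primitive."""
    d_xy = np.hypot(p[..., 0], p[..., 1]) - radius   # 2D sketch SDF (disk)
    d_z = np.abs(p[..., 2]) - half_height            # extrusion bound along z
    outside = np.hypot(np.maximum(d_xy, 0.0), np.maximum(d_z, 0.0))
    inside = np.minimum(np.maximum(d_xy, d_z), 0.0)
    return outside + inside

def union_sdf(*fields):
    """Boolean union of implicit solids = pointwise minimum of their SDFs."""
    return np.minimum.reduce(fields)

p = np.array([[0.0, 0.0, 0.0],    # a point inside both cylinders
              [3.0, 0.0, 0.0]])   # a point outside both
d = union_sdf(extrusion_sdf(p, 1.0, 1.0), extrusion_sdf(p, 0.5, 2.0))
# negative distance = inside the combined shape, positive = outside
```

Because the sketch is an implicit field, interpolating its parameters (here, the radius) already yields smooth shape variations, which is the editing property the paper exploits.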
### Multi-modal reward for visual relationships-based image captioning
- **Authors:** Ali Abedi, Hossein Karshenas, Peyman Adibi
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2303.10766
- **Pdf link:** https://arxiv.org/pdf/2303.10766
- **Abstract**
Deep neural networks have achieved promising results in automatic image captioning due to their effective representation learning and context-based content generation capabilities. As a prominent type of deep features used in many of the recent image captioning methods, the well-known bottom-up features provide a detailed representation of different objects of the image in comparison with the feature maps directly extracted from the raw image. However, the lack of high-level semantic information about the relationships between these objects is an important drawback of bottom-up features, despite their expensive and resource-demanding extraction procedure. To take advantage of visual relationships in caption generation, this paper proposes a deep neural network architecture for image captioning based on fusing the visual relationships information extracted from an image's scene graph with the spatial feature maps of the image. A multi-modal reward function is then introduced for deep reinforcement learning of the proposed network using a combination of language and vision similarities in a common embedding space. The results of extensive experimentation on the MSCOCO dataset show the effectiveness of using visual relationships in the proposed captioning method. Moreover, the results clearly indicate that the proposed multi-modal reward in deep reinforcement learning leads to better model optimization, outperforming several state-of-the-art image captioning algorithms, while using light and easy-to-extract image features. A detailed experimental study of the components constituting the proposed method is also presented.
### Self-Supervised Learning for Multimodal Non-Rigid 3D Shape Matching
- **Authors:** Dongliang Cao, Florian Bernard
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Computational Geometry (cs.CG)
- **Arxiv link:** https://arxiv.org/abs/2303.10971
- **Pdf link:** https://arxiv.org/pdf/2303.10971
- **Abstract**
The matching of 3D shapes has been extensively studied for shapes represented as surface meshes, as well as for shapes represented as point clouds. While point clouds are a common representation of raw real-world 3D data (e.g. from laser scanners), meshes encode rich and expressive topological information, but their creation typically requires some form of (often manual) curation. In turn, methods that purely rely on point clouds are unable to meet the matching quality of mesh-based methods that utilise the additional topological structure. In this work we close this gap by introducing a self-supervised multimodal learning strategy that combines mesh-based functional map regularisation with a contrastive loss that couples mesh and point cloud data. Our shape matching approach allows us to obtain intramodal correspondences for triangle meshes, complete point clouds, and partially observed point clouds, as well as correspondences across these data modalities. We demonstrate that our method achieves state-of-the-art results on several challenging benchmark datasets even in comparison to recent supervised methods, and that our method reaches previously unseen cross-dataset generalisation ability.
### Benchmarking Robustness of 3D Object Detection to Common Corruptions in Autonomous Driving
- **Authors:** Yinpeng Dong, Caixin Kang, Jinlai Zhang, Zijian Zhu, Yikai Wang, Xiao Yang, Hang Su, Xingxing Wei, Jun Zhu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)
- **Arxiv link:** https://arxiv.org/abs/2303.11040
- **Pdf link:** https://arxiv.org/pdf/2303.11040
- **Abstract**
3D object detection is an important task in autonomous driving to perceive the surroundings. Despite the excellent performance, the existing 3D detectors lack the robustness to real-world corruptions caused by adverse weather, sensor noise, etc., provoking concerns about the safety and reliability of autonomous driving systems. To comprehensively and rigorously benchmark the corruption robustness of 3D detectors, in this paper we design 27 types of common corruptions for both LiDAR and camera inputs considering real-world driving scenarios. By synthesizing these corruptions on public datasets, we establish three corruption robustness benchmarks -- KITTI-C, nuScenes-C, and Waymo-C. Then, we conduct large-scale experiments on 24 diverse 3D object detection models to evaluate their corruption robustness. Based on the evaluation results, we draw several important findings, including: 1) motion-level corruptions are the most threatening ones that lead to a significant performance drop of all models; 2) LiDAR-camera fusion models demonstrate better robustness; 3) camera-only models are extremely vulnerable to image corruptions, showing the indispensability of LiDAR point clouds. We release the benchmarks and codes at https://github.com/kkkcx/3D_Corruptions_AD. We hope that our benchmarks and findings can provide insights for future research on developing robust 3D object detection models.
### From Sparse to Precise: A Practical Editing Approach for Intracardiac Echocardiography Segmentation
- **Authors:** Ahmed H. Shahin, Yan Zhuang, Noha El-Zehiry
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11041
- **Pdf link:** https://arxiv.org/pdf/2303.11041
- **Abstract**
Accurate and safe catheter ablation procedures for patients with atrial fibrillation require precise segmentation of cardiac structures in Intracardiac Echocardiography (ICE) imaging. Prior studies have suggested methods that employ 3D geometry information from the ICE transducer to create a sparse ICE volume by placing 2D frames in a 3D grid, enabling training of 3D segmentation models. However, the resulting 3D masks from these models can be inaccurate and may lead to serious clinical complications due to the sparse sampling in ICE data, frames misalignment, and cardiac motion. To address this issue, we propose an interactive editing framework that allows users to edit segmentation output by drawing scribbles on a 2D frame. The user interaction is mapped to the 3D grid and utilized to execute an editing step that modifies the segmentation in the vicinity of the interaction while preserving the previous segmentation away from the interaction. Furthermore, our framework accommodates multiple edits to the segmentation output in a sequential manner without compromising previous edits. This paper presents a novel loss function and a novel evaluation metric specifically designed for editing. Results from cross-validation and testing indicate that our proposed loss function outperforms standard losses and training strategies in terms of segmentation quality and following user input. Additionally, we show quantitatively and qualitatively that subsequent edits do not compromise previous edits when using our method, as opposed to standard segmentation losses. Overall, our approach enhances the accuracy of the segmentation while avoiding undesired changes away from user interactions and without compromising the quality of previously edited regions, leading to better patient outcomes.
### A Multi-Task Deep Learning Approach for Sensor-based Human Activity Recognition and Segmentation
- **Authors:** Furong Duan, Tao Zhu, Jinqiang Wang, Liming Chen, Huansheng Ning, Yaping Wan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11100
- **Pdf link:** https://arxiv.org/pdf/2303.11100
- **Abstract**
Sensor-based human activity segmentation and recognition are two important and challenging problems in many real-world applications and they have drawn increasing attention from the deep learning community in recent years. Most of the existing deep learning works were designed based on pre-segmented sensor streams and they have treated activity segmentation and recognition as two separate tasks. In practice, performing data stream segmentation is very challenging. We believe that both activity segmentation and recognition may convey unique information which can complement each other to improve the performance of the two tasks. In this paper, we first propose a new multi-task deep neural network to solve the two tasks simultaneously. The proposed neural network adopts selective convolution and features multiscale windows to segment activities of long or short time durations. First, multiple windows of different scales are generated to center on each unit of the feature sequence. Then, the model is trained to predict, for each window, the activity class and the offset to the true activity boundaries. Finally, overlapping windows are filtered out by non-maximum suppression, and adjacent windows of the same activity are concatenated to complete the segmentation task. Extensive experiments were conducted on eight popular benchmarking datasets, and the results show that our proposed method outperforms the state-of-the-art methods both for activity recognition and segmentation.
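The final filtering step the abstract names, non-maximum suppression over candidate windows, is a standard procedure; a generic 1D sketch (not the paper's exact code) looks like:

```python
def nms_1d(windows, iou_thresh=0.5):
    """Non-maximum suppression over 1D activity windows.

    windows: list of (start, end, score). Repeatedly keep the
    highest-scoring remaining window and drop any window whose
    temporal IoU with it exceeds the threshold.
    """
    def iou(a, b):
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        union = (a[1] - a[0]) + (b[1] - b[0]) - inter
        return inter / union if union > 0 else 0.0

    remaining = sorted(windows, key=lambda w: w[2], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        remaining = [w for w in remaining if iou(best, w) <= iou_thresh]
    return kept

kept = nms_1d([(0, 10, 0.9), (1, 11, 0.8), (20, 30, 0.7)])
# the two non-overlapping windows survive; (1, 11) is suppressed
```

In the multi-task setup described above, the surviving windows of the same activity class would then be concatenated to form the final segmentation.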
### I2Edit: Towards Multi-turn Interactive Image Editing via Dialogue
- **Authors:** Xing Cui, Zekun Li, Peipei Li, Yibo Hu, Hailin Shi, Zhaofeng He
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11108
- **Pdf link:** https://arxiv.org/pdf/2303.11108
- **Abstract**
Although there have been considerable research efforts on controllable facial image editing, the desirable interactive setting where the users can interact with the system to adjust their requirements dynamically hasn't been well explored. This paper focuses on facial image editing via dialogue and introduces a new benchmark dataset, Multi-turn Interactive Image Editing (I2Edit), for evaluating image editing quality and interaction ability in real-world interactive facial editing scenarios. The dataset is constructed upon the CelebA-HQ dataset with images annotated with a multi-turn dialogue that corresponds to the user editing requirements. I2Edit is challenging, as it needs to 1) track the dynamically updated user requirements and edit the images accordingly, as well as 2) generate the appropriate natural language response to communicate with the user. To address these challenges, we propose a framework consisting of a dialogue module and an image editing module. The former is for user edit requirements tracking and generating the corresponding indicative responses, while the latter edits the images conditioned on the tracked user edit requirements. In contrast to previous works that simply treat multi-turn interaction as a sequence of single-turn interactions, we extract the user edit requirements from the whole dialogue history instead of the current single turn. The extracted global user edit requirements enable us to directly edit the input raw image to avoid error accumulation and attribute forgetting issues. Extensive quantitative and qualitative experiments on the I2Edit dataset demonstrate the advantage of our proposed framework over the previous single-turn methods. We believe our new dataset could serve as a valuable resource to push forward the exploration of real-world, complex interactive image editing. Code and data will be made public.
### SeiT: Storage-Efficient Vision Training with Tokens Using 1% of Pixel Storage
- **Authors:** Song Park, Sanghyuk Chun, Byeongho Heo, Wonjae Kim, Sangdoo Yun
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11114
- **Pdf link:** https://arxiv.org/pdf/2303.11114
- **Abstract**
We need billion-scale images to achieve more generalizable and ground-breaking vision models, as well as massive dataset storage to ship the images (e.g., the LAION-4B dataset needs 240TB storage space). However, it has become challenging to deal with unlimited dataset storage with limited storage infrastructure. A number of storage-efficient training methods have been proposed to tackle the problem, but they are rarely scalable or suffer from severe damage to performance. In this paper, we propose a storage-efficient training strategy for vision classifiers for large-scale datasets (e.g., ImageNet) that only uses 1024 tokens per instance without using the raw level pixels; our token storage only needs <1% of the original JPEG-compressed raw pixels. We also propose token augmentations and a Stem-adaptor module to make our approach able to use the same architecture as pixel-based approaches with only minimal modifications on the stem layer and the carefully tuned optimization settings. Our experimental results on ImageNet-1k show that our method significantly outperforms other storage-efficient training methods with a large gap. We further show the effectiveness of our method in other practical scenarios, storage-efficient pre-training, and continual learning. Code is available at https://github.com/naver-ai/seit
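The "<1%" storage claim is easy to sanity-check with back-of-envelope numbers (the per-image JPEG size and codebook width below are illustrative assumptions, not values taken from the paper; only the 1024-tokens-per-instance figure comes from the abstract):

```python
# Back-of-envelope storage comparison for token-based vs. pixel-based training.
num_images = 1_281_167           # ImageNet-1k training set size
jpeg_bytes = 110 * 1024          # assumed average JPEG size (~110 KB/image)
tokens_per_image = 1024          # token count stated in the abstract
bits_per_token = 8               # assumed 256-entry token codebook

pixel_storage = num_images * jpeg_bytes
token_storage = num_images * tokens_per_image * bits_per_token // 8
ratio = token_storage / pixel_storage   # fraction of original storage needed
```

Under these assumptions one token stream costs 1 KB per image against ~110 KB of JPEG, so the ratio lands below the 1% mark the abstract cites.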
### AnimeDiffusion: Anime Face Line Drawing Colorization via Diffusion Models
- **Authors:** Yu Cao, Xiangqiao Meng, P.Y. Mok, Xueting Liu, Tong-Yee Lee, Ping Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11137
- **Pdf link:** https://arxiv.org/pdf/2303.11137
- **Abstract**
Manually colorizing anime line drawing images is time-consuming and tedious work, yet an essential stage in the cartoon animation creation pipeline. Reference-based line drawing colorization is a challenging task that relies on precise cross-domain long-range dependency modelling between the line drawing and reference image. Existing learning methods still utilize generative adversarial networks (GANs) as one key module of their model architecture. In this paper, we propose a novel method called AnimeDiffusion using diffusion models that performs anime face line drawing colorization automatically. To the best of our knowledge, this is the first diffusion model tailored for anime content creation. In order to solve the huge training consumption problem of diffusion models, we design a hybrid training strategy, first pre-training a diffusion model with classifier-free guidance and then fine-tuning it with image reconstruction guidance. We find that with a few iterations of fine-tuning, the model shows wonderful colorization performance, as illustrated in Fig. 1. For training AnimeDiffusion, we construct an anime face line drawing colorization benchmark dataset, which contains 31,696 training images and 579 test images. We hope this dataset can fill the gap left by the absence of a high-resolution anime face dataset for colorization method evaluation. Through multiple quantitative metrics evaluated on our dataset and a user study, we demonstrate AnimeDiffusion outperforms state-of-the-art GANs-based models for anime face line drawing colorization. We also collaborate with professional artists to test and apply our AnimeDiffusion for their creation work. We release our code on https://github.com/xq-meng/AnimeDiffusion.
### Attribute-preserving Face Dataset Anonymization via Latent Code Optimization
- **Authors:** Simone Barattin, Christos Tzelepis, Ioannis Patras, Nicu Sebe
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11296
- **Pdf link:** https://arxiv.org/pdf/2303.11296
- **Abstract**
This work addresses the problem of anonymizing the identity of faces in a dataset of images, such that the privacy of those depicted is not violated, while at the same time the dataset remains useful for downstream tasks, such as training machine learning models. To the best of our knowledge, we are the first to explicitly address this issue and deal with two major drawbacks of the existing state-of-the-art approaches, namely that they (i) require the costly training of additional, purpose-trained neural networks, and/or (ii) fail to retain the facial attributes of the original images in the anonymized counterparts, the preservation of which is of paramount importance for their use in downstream tasks. We accordingly present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN. By optimizing the latent codes directly, we ensure that the identity is a desired distance away from the original (with an identity obfuscation loss) whilst preserving the facial attributes (using a novel feature-matching loss in FaRL's deep feature space). We demonstrate through a series of both qualitative and quantitative experiments that our method is capable of anonymizing the identity of the images whilst -- crucially -- better preserving the facial attributes. We make the code and the pre-trained models publicly available at: https://github.com/chi0tzp/FALCO.
## Keyword: raw image
### Multi-modal reward for visual relationships-based image captioning
- **Authors:** Ali Abedi, Hossein Karshenas, Peyman Adibi
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2303.10766
- **Pdf link:** https://arxiv.org/pdf/2303.10766
- **Abstract**
Deep neural networks have achieved promising results in automatic image captioning due to their effective representation learning and context-based content generation capabilities. As a prominent type of deep features used in many of the recent image captioning methods, the well-known bottom-up features provide a detailed representation of different objects of the image in comparison with the feature maps directly extracted from the raw image. However, the lack of high-level semantic information about the relationships between these objects is an important drawback of bottom-up features, despite their expensive and resource-demanding extraction procedure. To take advantage of visual relationships in caption generation, this paper proposes a deep neural network architecture for image captioning based on fusing the visual relationships information extracted from an image's scene graph with the spatial feature maps of the image. A multi-modal reward function is then introduced for deep reinforcement learning of the proposed network using a combination of language and vision similarities in a common embedding space. The results of extensive experimentation on the MSCOCO dataset show the effectiveness of using visual relationships in the proposed captioning method. Moreover, the results clearly indicate that the proposed multi-modal reward in deep reinforcement learning leads to better model optimization, outperforming several state-of-the-art image captioning algorithms, while using light and easy-to-extract image features. A detailed experimental study of the components constituting the proposed method is also presented.
### I2Edit: Towards Multi-turn Interactive Image Editing via Dialogue
- **Authors:** Xing Cui, Zekun Li, Peipei Li, Yibo Hu, Hailin Shi, Zhaofeng He
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11108
- **Pdf link:** https://arxiv.org/pdf/2303.11108
- **Abstract**
Although there have been considerable research efforts on controllable facial image editing, the desirable interactive setting where the users can interact with the system to adjust their requirements dynamically hasn't been well explored. This paper focuses on facial image editing via dialogue and introduces a new benchmark dataset, Multi-turn Interactive Image Editing (I2Edit), for evaluating image editing quality and interaction ability in real-world interactive facial editing scenarios. The dataset is constructed upon the CelebA-HQ dataset with images annotated with a multi-turn dialogue that corresponds to the user editing requirements. I2Edit is challenging, as it needs to 1) track the dynamically updated user requirements and edit the images accordingly, as well as 2) generate the appropriate natural language response to communicate with the user. To address these challenges, we propose a framework consisting of a dialogue module and an image editing module. The former is for user edit requirements tracking and generating the corresponding indicative responses, while the latter edits the images conditioned on the tracked user edit requirements. In contrast to previous works that simply treat multi-turn interaction as a sequence of single-turn interactions, we extract the user edit requirements from the whole dialogue history instead of the current single turn. The extracted global user edit requirements enable us to directly edit the input raw image to avoid error accumulation and attribute forgetting issues. Extensive quantitative and qualitative experiments on the I2Edit dataset demonstrate the advantage of our proposed framework over the previous single-turn methods. We believe our new dataset could serve as a valuable resource to push forward the exploration of real-world, complex interactive image editing. Code and data will be made public.
# New submissions for Tue, 21 Mar 23
## Keyword: events
### PseudoBound: Limiting the anomaly reconstruction capability of one-class classifiers using pseudo anomalies
- **Authors:** Marcella Astrid, Muhammad Zaigham Zaheer, Seung-Ik Lee
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.10704
- **Pdf link:** https://arxiv.org/pdf/2303.10704
- **Abstract**
Due to the rarity of anomalous events, video anomaly detection is typically approached as a one-class classification (OCC) problem. Typically in OCC, an autoencoder (AE) is trained to reconstruct the normal-only training data with the expectation that, at test time, it can poorly reconstruct the anomalous data. However, previous studies have shown that, even trained with only normal data, AEs can often reconstruct anomalous data as well, resulting in decreased performance. To mitigate this problem, we propose to limit the anomaly reconstruction capability of AEs by incorporating pseudo anomalies during the training of an AE. Extensive experiments using five types of pseudo anomalies show the robustness of our training mechanism towards any kind of pseudo anomaly. Moreover, we demonstrate the effectiveness of our proposed pseudo anomaly based training approach against several existing state-of-the-art (SOTA) methods on three benchmark video anomaly datasets, outperforming all the other reconstruction-based approaches in two datasets and showing the second best performance in the other dataset.
### k-SALSA: k-anonymous synthetic averaging of retinal images via local style alignment
- **Authors:** Minkyu Jeon, Hyeonjin Park, Hyunwoo J. Kim, Michael Morley, Hyunghoon Cho
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Cryptography and Security (cs.CR)
- **Arxiv link:** https://arxiv.org/abs/2303.10824
- **Pdf link:** https://arxiv.org/pdf/2303.10824
- **Abstract**
The application of modern machine learning to retinal image analyses offers valuable insights into a broad range of human health conditions beyond ophthalmic diseases. Additionally, data sharing is key to fully realizing the potential of machine learning models by providing a rich and diverse collection of training data. However, the personally-identifying nature of retinal images, encompassing the unique vascular structure of each individual, often prevents this data from being shared openly. While prior works have explored image de-identification strategies based on synthetic averaging of images in other domains (e.g. facial images), existing techniques face difficulty in preserving both privacy and clinical utility in retinal images, as we demonstrate in our work. We therefore introduce k-SALSA, a generative adversarial network (GAN)-based framework for synthesizing retinal fundus images that summarize a given private dataset while satisfying the privacy notion of k-anonymity. k-SALSA brings together state-of-the-art techniques for training and inverting GANs to achieve practical performance on retinal images. Furthermore, k-SALSA leverages a new technique, called local style alignment, to generate a synthetic average that maximizes the retention of fine-grain visual patterns in the source images, thus improving the clinical utility of the generated images. On two benchmark datasets of diabetic retinopathy (EyePACS and APTOS), we demonstrate our improvement upon existing methods with respect to image fidelity, classification performance, and mitigation of membership inference attacks. Our work represents a step toward broader sharing of retinal images for scientific collaboration. Code is available at https://github.com/hcholab/k-salsa.
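The k-anonymity notion that k-SALSA targets can be illustrated with a dependency-light toy sketch: every released vector is an average over at least k private vectors, so no output traces back to fewer than k individuals. The sort-and-chunk grouping and the plain mean below are illustrative assumptions only; the actual method clusters in a GAN latent space and uses local style alignment, which this sketch does not model.

```python
import numpy as np

def k_anonymous_average(features: np.ndarray, k: int) -> np.ndarray:
    """Replace each sample with the mean of its group of >= k samples.

    Toy illustration of k-anonymous synthetic averaging: every released
    row is an average over at least k private rows. Grouping here is a
    naive sort-and-chunk by the first coordinate; a real system would
    group by similarity in a learned latent space.
    """
    n = len(features)
    if n < k:
        raise ValueError("need at least k samples")
    order = np.argsort(features[:, 0])          # naive similarity grouping
    out = np.empty_like(features, dtype=float)
    # Chunk into groups of size k; fold the undersized remainder into the
    # last full group so every group keeps >= k members.
    starts = list(range(0, n - n % k, k))
    for i, s in enumerate(starts):
        e = n if i == len(starts) - 1 else s + k
        idx = order[s:e]
        out[idx] = features[idx].mean(axis=0)
    return out
```

With 5 inputs and k=2 this releases only 2 distinct rows, each shared by at least 2 inputs, which is the privacy guarantee in miniature.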
### Leapfrog Diffusion Model for Stochastic Trajectory Prediction
- **Authors:** Weibo Mao, Chenxin Xu, Qi Zhu, Siheng Chen, Yanfeng Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.10895
- **Pdf link:** https://arxiv.org/pdf/2303.10895
- **Abstract**
To model the indeterminacy of human behaviors, stochastic trajectory prediction requires a sophisticated multi-modal distribution of future trajectories. Emerging diffusion models have revealed their tremendous representation capacities in numerous generation tasks, showing potential for stochastic trajectory prediction. However, expensive time consumption prevents diffusion models from real-time prediction, since a large number of denoising steps are required to assure sufficient representation ability. To resolve the dilemma, we present LEapfrog Diffusion model (LED), a novel diffusion-based trajectory prediction model, which provides real-time, precise, and diverse predictions. The core of the proposed LED is to leverage a trainable leapfrog initializer to directly learn an expressive multi-modal distribution of future trajectories, which skips a large number of denoising steps, significantly accelerating inference speed. Moreover, the leapfrog initializer is trained to appropriately allocate correlated samples to provide a diversity of predicted future trajectories, significantly improving prediction performances. Extensive experiments on four real-world datasets, including NBA/NFL/SDD/ETH-UCY, show that LED consistently improves performance and achieves 23.7%/21.9% ADE/FDE improvement on NFL. The proposed LED also speeds up the inference 19.3/30.8/24.3/25.1 times compared to the standard diffusion model on NBA/NFL/SDD/ETH-UCY, satisfying real-time inference needs. Code is available at https://github.com/MediaBrain-SJTU/LED.
### Learning Optical Flow from Event Camera with Rendered Dataset
- **Authors:** Xinglong Luo, Kunming Luo, Ao Luo, Zhengning Wang, Ping Tan, Shuaicheng Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11011
- **Pdf link:** https://arxiv.org/pdf/2303.11011
- **Abstract**
We study the problem of estimating optical flow from event cameras. One important issue is how to build a high-quality event-flow dataset with accurate event values and flow labels. Previous datasets are created either by capturing real scenes with event cameras or by synthesizing from images with pasted foreground objects. The former case can produce real event values but only calculated flow labels, which are sparse and inaccurate. The latter case can generate dense flow labels, but the interpolated events are prone to errors. In this work, we propose to render a physically correct event-flow dataset using computer graphics models. In particular, we first create indoor and outdoor 3D scenes in Blender with rich scene content variations. Second, diverse camera motions are included for the virtual capturing, producing images and accurate flow labels. Third, we render high-framerate videos between images for accurate events. The rendered dataset can adjust the density of events, based on which we further introduce an adaptive density module (ADM). Experiments show that our proposed dataset can facilitate event-flow learning, and that previous approaches, when trained on our dataset, consistently improve their performance by a relatively large margin. In addition, event-flow pipelines equipped with our ADM can further improve performance.
### Localizing Object-level Shape Variations with Text-to-Image Diffusion Models
- **Authors:** Or Patashnik, Daniel Garibi, Idan Azuri, Hadar Averbuch-Elor, Daniel Cohen-Or
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2303.11306
- **Pdf link:** https://arxiv.org/pdf/2303.11306
- **Abstract**
Text-to-image models give rise to workflows which often begin with an exploration step, where users sift through a large collection of generated images. The global nature of the text-to-image generation process prevents users from narrowing their exploration to a particular object in the image. In this paper, we present a technique to generate a collection of images that depicts variations in the shape of a specific object, enabling an object-level shape exploration process. Creating plausible variations is challenging as it requires control over the shape of the generated object while respecting its semantics. A particular challenge when generating object variations is accurately localizing the manipulation applied over the object's shape. We introduce a prompt-mixing technique that switches between prompts along the denoising process to attain a variety of shape choices. To localize the image-space operation, we present two techniques that use the self-attention layers in conjunction with the cross-attention layers. Moreover, we show that these localization techniques are general and effective beyond the scope of generating object variations. Extensive results and comparisons demonstrate the effectiveness of our method in generating object variations, and the competence of our localization techniques.
## Keyword: event camera
### ANMS: Asynchronous Non-Maximum Suppression in Event Stream
- **Authors:** Qianang Zhou, JunLin Xiong, Youfu Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.10575
- **Pdf link:** https://arxiv.org/pdf/2303.10575
- **Abstract**
The non-maximum suppression (NMS) is widely used in frame-based tasks as an essential post-processing algorithm. However, event-based NMS either has high computational complexity or leads to frequent discontinuities. As a result, the performance of event-based corner detectors is limited. This paper proposes a general-purpose asynchronous non-maximum suppression pipeline (ANMS), and applies it to corner event detection. The proposed pipeline extract fine feature stream from the output of original detectors and adapts to the speed of motion. The ANMS runs directly on the asynchronous event stream with extremely low latency, which hardly affects the speed of original detectors. Additionally, we evaluate the DAVIS-based ground-truth labeling method to fill the gap between frame and event. Evaluation on public dataset indicates that the proposed ANMS pipeline significantly improves the performance of three classical asynchronous detectors with negligible latency. More importantly, the proposed ANMS framework is a natural extension of NMS, which is applicable to other asynchronous scoring tasks for event cameras.
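For reference, the classical frame-based NMS that ANMS extends can be sketched in a few lines. This dense, batch-sorted version is exactly what an asynchronous event stream cannot afford, which is the gap the paper addresses; ANMS itself is not shown here.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy frame-based NMS: keep the highest-scoring box, drop every
    box overlapping it beyond iou_thresh, then repeat on the remainder.
    Returns the indices of the kept boxes in descending score order."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep
```

Note the global sort over all candidates: it requires the full frame's detections up front, whereas an event camera delivers detections one at a time.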
### Learning Optical Flow from Event Camera with Rendered Dataset
- **Authors:** Xinglong Luo, Kunming Luo, Ao Luo, Zhengning Wang, Ping Tan, Shuaicheng Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11011
- **Pdf link:** https://arxiv.org/pdf/2303.11011
- **Abstract**
We study the problem of estimating optical flow from event cameras. One important issue is how to build a high-quality event-flow dataset with accurate event values and flow labels. Previous datasets are created either by capturing real scenes with event cameras or by synthesizing from images with pasted foreground objects. The former case can produce real event values but only calculated flow labels, which are sparse and inaccurate. The latter case can generate dense flow labels, but the interpolated events are prone to errors. In this work, we propose to render a physically correct event-flow dataset using computer graphics models. In particular, we first create indoor and outdoor 3D scenes in Blender with rich scene content variations. Second, diverse camera motions are included for the virtual capturing, producing images and accurate flow labels. Third, we render high-framerate videos between images for accurate events. The rendered dataset can adjust the density of events, based on which we further introduce an adaptive density module (ADM). Experiments show that our proposed dataset can facilitate event-flow learning, and that previous approaches, when trained on our dataset, consistently improve their performance by a relatively large margin. In addition, event-flow pipelines equipped with our ADM can further improve performance.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### HGIB: Prognosis for Alzheimer's Disease via Hypergraph Information Bottleneck
- **Authors:** Shujun Wang, Angelica I Aviles-Rivero, Zoe Kourtzi, Carola-Bibiane Schönlieb
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.10390
- **Pdf link:** https://arxiv.org/pdf/2303.10390
- **Abstract**
Alzheimer's disease prognosis is critical for early Mild Cognitive Impairment patients, enabling timely treatment to improve the patient's quality of life. Whilst existing prognosis techniques demonstrate promising results, they are highly limited by their reliance on a single modality. Most importantly, they fail to consider a key element for prognosis: not all features extracted at the current moment may contribute to the prognosis prediction several years later. To address the current drawbacks of the literature, we propose a novel hypergraph framework based on an information bottleneck strategy (HGIB). Firstly, our framework seeks to discriminate irrelevant information and therefore to focus solely on harmonising relevant information for future MCI conversion prediction (e.g., two years later). Secondly, our model simultaneously accounts for multi-modal data based on imaging and non-imaging modalities. HGIB uses a hypergraph structure to represent the multi-modality data and accounts for various data modality types. Thirdly, the key of our model is a new optimisation scheme, which models the principle of the information bottleneck in loss functions that can be integrated into our hypergraph neural network. We demonstrate, through extensive experiments on ADNI, that our proposed HGIB framework outperforms existing state-of-the-art hypergraph neural networks for Alzheimer's disease prognosis. We showcase our model even under fewer labels. Finally, we further support the robustness and generalisation capabilities of our framework under both topological and feature perturbations.
### Multi-modal reward for visual relationships-based image captioning
- **Authors:** Ali Abedi, Hossein Karshenas, Peyman Adibi
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2303.10766
- **Pdf link:** https://arxiv.org/pdf/2303.10766
- **Abstract**
Deep neural networks have achieved promising results in automatic image captioning due to their effective representation learning and context-based content generation capabilities. As a prominent type of deep feature used in many recent image captioning methods, the well-known bottom-up features provide a detailed representation of the different objects of the image in comparison with the feature maps directly extracted from the raw image. However, the lack of high-level semantic information about the relationships between these objects is an important drawback of bottom-up features, despite their expensive and resource-demanding extraction procedure. To take advantage of visual relationships in caption generation, this paper proposes a deep neural network architecture for image captioning based on fusing the visual relationship information extracted from an image's scene graph with the spatial feature maps of the image. A multi-modal reward function is then introduced for deep reinforcement learning of the proposed network, using a combination of language and vision similarities in a common embedding space. The results of extensive experimentation on the MSCOCO dataset show the effectiveness of using visual relationships in the proposed captioning method. Moreover, the results clearly indicate that the proposed multi-modal reward in deep reinforcement learning leads to better model optimization, outperforming several state-of-the-art image captioning algorithms while using lightweight, easy-to-extract image features. A detailed experimental study of the components constituting the proposed method is also presented.
### Attribute-preserving Face Dataset Anonymization via Latent Code Optimization
- **Authors:** Simone Barattin, Christos Tzelepis, Ioannis Patras, Nicu Sebe
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11296
- **Pdf link:** https://arxiv.org/pdf/2303.11296
- **Abstract**
This work addresses the problem of anonymizing the identities of faces in a dataset of images, such that the privacy of those depicted is not violated, while at the same time the dataset remains useful for downstream tasks, such as training machine learning models. To the best of our knowledge, we are the first to explicitly address this issue and deal with two major drawbacks of the existing state-of-the-art approaches, namely that they (i) require the costly training of additional, purpose-trained neural networks, and/or (ii) fail to retain the facial attributes of the original images in the anonymized counterparts, the preservation of which is of paramount importance for their use in downstream tasks. We accordingly present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN. By optimizing the latent codes directly, we ensure both that the identity is a desired distance away from the original (with an identity obfuscation loss) and that the facial attributes are preserved (using a novel feature-matching loss in FaRL's deep feature space). We demonstrate through a series of both qualitative and quantitative experiments that our method is capable of anonymizing the identities of the images whilst -- crucially -- better preserving the facial attributes. We make the code and the pre-trained models publicly available at: https://github.com/chi0tzp/FALCO.
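The two-loss latent optimization the abstract describes can be mimicked with a dependency-free toy: split a latent code into an "identity" half and an "attribute" half, push the identity part a target distance away from the original while pinning the attribute part in place. The half-split, the loss weighting, the finite-difference gradients, and the step sizes are all illustrative assumptions; the paper itself operates in a pre-trained GAN's latent space with an identity network and FaRL features.

```python
import numpy as np

def anonymize_latent(z0, target_dist=2.0, steps=300, lr=0.05):
    """Toy latent-code anonymization by direct gradient descent.

    Loss = (||id(z) - id(z0)|| - target_dist)^2 + ||attr(z) - attr(z0)||^2,
    where id() and attr() are stand-ins (first/second half of the code)
    for an identity network and a deep attribute feature extractor.
    """
    z0 = np.asarray(z0, dtype=float)
    h = len(z0) // 2
    # small random perturbation so the identity gradient has a direction
    z = z0 + 0.01 * np.random.default_rng(0).standard_normal(len(z0))

    def loss(z):
        id_dist = np.linalg.norm(z[:h] - z0[:h])      # obfuscation term
        attr_dist = np.linalg.norm(z[h:] - z0[h:])    # preservation term
        return (id_dist - target_dist) ** 2 + attr_dist ** 2

    eps = 1e-5
    for _ in range(steps):
        # central finite differences keep the sketch dependency-free
        g = np.zeros_like(z)
        for i in range(len(z)):
            zp, zm = z.copy(), z.copy()
            zp[i] += eps
            zm[i] -= eps
            g[i] = (loss(zp) - loss(zm)) / (2 * eps)
        z -= lr * g
    return z
```

After optimization the identity half sits at the requested distance from the original while the attribute half stays essentially unchanged, mirroring the obfuscate-identity / preserve-attributes trade-off.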
## Keyword: ISP
### Towards Diverse Binary Segmentation via A Simple yet General Gated Network
- **Authors:** Xiaoqi Zhao, Youwei Pang, Lihe Zhang, Huchuan Lu, Lei Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.10396
- **Pdf link:** https://arxiv.org/pdf/2303.10396
- **Abstract**
In many binary segmentation tasks, most CNN-based methods use a U-shaped encoder-decoder network as their basic structure. They ignore two key problems when the encoder exchanges information with the decoder: one is the lack of an interference control mechanism between them, and the other is that they do not consider the disparity of the contributions from different encoder levels. In this work, we propose a simple yet general gated network (GateNet) to tackle both problems at once. With the help of multi-level gate units, valuable context information from the encoder can be selectively transmitted to the decoder. In addition, we design a gated dual-branch structure to build cooperation among the features of different levels and improve the discrimination ability of the network. Furthermore, we introduce a ``Fold'' operation to improve the atrous convolution and form a novel folded atrous convolution, which can be flexibly embedded in ASPP or DenseASPP to accurately localize foreground objects of various scales. GateNet can be easily generalized to many binary segmentation tasks, including general and specific object segmentation and multi-modal segmentation. Without bells and whistles, our network consistently performs favorably against the state-of-the-art methods under 10 metrics on 33 datasets of 10 binary segmentation tasks.
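The gate-unit idea can be sketched with a minimal element-wise version: the encoder skip feature is modulated by a sigmoid gate before being fused with the decoder feature, so interfering encoder activations can be suppressed. The scalar affine parameterization below is an illustrative stand-in for the convolutional gate units a real network would learn.

```python
import numpy as np

def gated_skip(enc_feat: np.ndarray, dec_feat: np.ndarray,
               w: float, b: float):
    """Fuse an encoder skip feature into the decoder through a gate.

    The gate is a sigmoid in (0, 1): near 1 the encoder feature passes
    through, near 0 it is suppressed, giving the network per-element
    control over interference from the encoder side.
    """
    gate = 1.0 / (1.0 + np.exp(-(w * enc_feat + b)))  # element-wise in (0, 1)
    return dec_feat + gate * enc_feat, gate
```

Learning one (w, b) per encoder level is what lets a gated network weight the contributions of different levels differently, the second problem the abstract names.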
### Benchmarking Robustness of 3D Object Detection to Common Corruptions in Autonomous Driving
- **Authors:** Yinpeng Dong, Caixin Kang, Jinlai Zhang, Zijian Zhu, Yikai Wang, Xiao Yang, Hang Su, Xingxing Wei, Jun Zhu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)
- **Arxiv link:** https://arxiv.org/abs/2303.11040
- **Pdf link:** https://arxiv.org/pdf/2303.11040
- **Abstract**
3D object detection is an important task in autonomous driving for perceiving the surroundings. Despite their excellent performance, existing 3D detectors lack robustness to real-world corruptions caused by adverse weather, sensor noise, etc., provoking concerns about the safety and reliability of autonomous driving systems. To comprehensively and rigorously benchmark the corruption robustness of 3D detectors, in this paper we design 27 types of common corruptions for both LiDAR and camera inputs, considering real-world driving scenarios. By synthesizing these corruptions on public datasets, we establish three corruption robustness benchmarks -- KITTI-C, nuScenes-C, and Waymo-C. Then, we conduct large-scale experiments on 24 diverse 3D object detection models to evaluate their corruption robustness. Based on the evaluation results, we draw several important findings, including: 1) motion-level corruptions are the most threatening ones, leading to significant performance drops across all models; 2) LiDAR-camera fusion models demonstrate better robustness; 3) camera-only models are extremely vulnerable to image corruptions, showing the indispensability of LiDAR point clouds. We release the benchmarks and code at https://github.com/kkkcx/3D_Corruptions_AD. We hope that our benchmarks and findings can provide insights for future research on developing robust 3D object detection models.
### HiFace: High-Fidelity 3D Face Reconstruction by Learning Static and Dynamic Details
- **Authors:** Zenghao Chai, Tianke Zhang, Tianyu He, Xu Tan, Tadas Baltrusaitis, HsiangTao Wu, Runnan Li, Sheng Zhao, Chun Yuan, Jiang Bian
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2303.11225
- **Pdf link:** https://arxiv.org/pdf/2303.11225
- **Abstract**
3D Morphable Models (3DMMs) demonstrate great potential for reconstructing faithful and animatable 3D facial surfaces from a single image. The facial surface is influenced by the coarse shape, as well as the static detail (e.g., person-specific appearance) and dynamic detail (e.g., expression-driven wrinkles). Previous work struggles to decouple the static and dynamic details through image-level supervision, leading to reconstructions that are not realistic. In this paper, we aim at high-fidelity 3D face reconstruction and propose HiFace to explicitly model the static and dynamic details. Specifically, the static detail is modeled as the linear combination of a displacement basis, while the dynamic detail is modeled as the linear interpolation of two displacement maps with polarized expressions. We exploit several loss functions to jointly learn the coarse shape and fine details with both synthetic and real-world datasets, which enables HiFace to reconstruct high-fidelity 3D shapes with animatable details. Extensive quantitative and qualitative experiments demonstrate that HiFace achieves state-of-the-art reconstruction quality and faithfully recovers both the static and dynamic details. Our project page can be found at https://project-hiface.github.io
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### VIMI: Vehicle-Infrastructure Multi-view Intermediate Fusion for Camera-based 3D Object Detection
- **Authors:** Zhe Wang, Siqi Fan, Xiaoliang Huo, Tongda Xu, Yan Wang, Jingjing Liu, Yilun Chen, Ya-Qin Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.10975
- **Pdf link:** https://arxiv.org/pdf/2303.10975
- **Abstract**
In autonomous driving, Vehicle-Infrastructure Cooperative 3D Object Detection (VIC3D) makes use of multi-view cameras from both vehicles and traffic infrastructure, providing a global vantage point with rich semantic context of road conditions beyond a single vehicle viewpoint. Two major challenges prevail in VIC3D: 1) inherent calibration noise when fusing multi-view images, caused by time asynchrony across cameras; 2) information loss when projecting 2D features into 3D space. To address these issues, we propose a novel 3D object detection framework, Vehicles-Infrastructure Multi-view Intermediate fusion (VIMI). First, to fully exploit the holistic perspectives from both vehicles and infrastructure, we propose a Multi-scale Cross Attention (MCA) module that fuses infrastructure and vehicle features on selective multi-scales to correct the calibration noise introduced by camera asynchrony. Then, we design a Camera-aware Channel Masking (CCM) module that uses camera parameters as priors to augment the fused features. We further introduce a Feature Compression (FC) module with channel and spatial compression blocks to reduce the size of transmitted features for enhanced efficiency. Experiments show that VIMI achieves 15.61% overall AP_3D and 21.44% AP_BEV on the new VIC3D dataset, DAIR-V2X-C, significantly outperforming state-of-the-art early fusion and late fusion methods with comparable transmission cost.
## Keyword: RAW
### HGIB: Prognosis for Alzheimer's Disease via Hypergraph Information Bottleneck
- **Authors:** Shujun Wang, Angelica I Aviles-Rivero, Zoe Kourtzi, Carola-Bibiane Schönlieb
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.10390
- **Pdf link:** https://arxiv.org/pdf/2303.10390
- **Abstract**
Alzheimer's disease prognosis is critical for early Mild Cognitive Impairment patients, enabling timely treatment to improve the patient's quality of life. Whilst existing prognosis techniques demonstrate promising results, they are highly limited by their reliance on a single modality. Most importantly, they fail to consider a key element for prognosis: not all features extracted at the current moment may contribute to the prognosis prediction several years later. To address the current drawbacks of the literature, we propose a novel hypergraph framework based on an information bottleneck strategy (HGIB). Firstly, our framework seeks to discriminate irrelevant information and therefore to focus solely on harmonising relevant information for future MCI conversion prediction (e.g., two years later). Secondly, our model simultaneously accounts for multi-modal data based on imaging and non-imaging modalities. HGIB uses a hypergraph structure to represent the multi-modality data and accounts for various data modality types. Thirdly, the key of our model is a new optimisation scheme, which models the principle of the information bottleneck in loss functions that can be integrated into our hypergraph neural network. We demonstrate, through extensive experiments on ADNI, that our proposed HGIB framework outperforms existing state-of-the-art hypergraph neural networks for Alzheimer's disease prognosis. We showcase our model even under fewer labels. Finally, we further support the robustness and generalisation capabilities of our framework under both topological and feature perturbations.
### Exploring Expression-related Self-supervised Learning for Affective Behaviour Analysis
- **Authors:** Fanglei Xue, Yifan Sun, Yi Yang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.10511
- **Pdf link:** https://arxiv.org/pdf/2303.10511
- **Abstract**
This paper explores an expression-related self-supervised learning (SSL) method (ContraWarping) to perform expression classification in the 5th Affective Behavior Analysis in-the-wild (ABAW) competition. Affective datasets are expensive to annotate, and SSL methods can learn from large-scale unlabeled data, which is more suitable for this task. By evaluating on the Aff-Wild2 dataset, we demonstrate that ContraWarping outperforms most existing supervised methods and shows great application potential in the affective analysis area. Code will be released at: https://github.com/youqingxiaozhua/ABAW5.
### Vehicle-Infrastructure Cooperative 3D Object Detection via Feature Flow Prediction
- **Authors:** Haibao Yu, Yingjuan Tang, Enze Xie, Jilei Mao, Jirui Yuan, Ping Luo, Zaiqing Nie
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.10552
- **Pdf link:** https://arxiv.org/pdf/2303.10552
- **Abstract**
Cooperatively utilizing both ego-vehicle and infrastructure sensor data can significantly enhance autonomous driving perception abilities. However, temporal asynchrony and limited wireless communication in traffic environments can lead to fusion misalignment and impact detection performance. This paper proposes Feature Flow Net (FFNet), a novel cooperative detection framework that uses a feature flow prediction module to address these issues in vehicle-infrastructure cooperative 3D object detection. Rather than transmitting feature maps extracted from still images, FFNet transmits feature flow, which leverages the temporal coherence of sequential infrastructure frames to predict future features and compensate for asynchrony. Additionally, we introduce a self-supervised approach to enable FFNet to generate feature flow with feature prediction ability. Experimental results demonstrate that our proposed method outperforms existing cooperative detection methods while requiring no more than 1/10 of the transmission cost of raw data on the DAIR-V2X dataset when temporal asynchrony exceeds 200 ms. The code is available at \href{https://github.com/haibao-yu/FFNet-VIC3D}{https://github.com/haibao-yu/FFNet-VIC3D}.
### SECAD-Net: Self-Supervised CAD Reconstruction by Learning Sketch-Extrude Operations
- **Authors:** Pu Li, Jianwei Guo, Xiaopeng Zhang, Dong-ming Yan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2303.10613
- **Pdf link:** https://arxiv.org/pdf/2303.10613
- **Abstract**
Reverse engineering CAD models from raw geometry is a classic but strenuous research problem. Previous learning-based methods rely heavily on labels due to the supervised design patterns or reconstruct CAD shapes that are not easily editable. In this work, we introduce SECAD-Net, an end-to-end neural network aimed at reconstructing compact and easy-to-edit CAD models in a self-supervised manner. Drawing inspiration from the modeling language that is most commonly used in modern CAD software, we propose to learn 2D sketches and 3D extrusion parameters from raw shapes, from which a set of extrusion cylinders can be generated by extruding each sketch from a 2D plane into a 3D body. By incorporating the Boolean operation (i.e., union), these cylinders can be combined to closely approximate the target geometry. We advocate the use of implicit fields for sketch representation, which allows for creating CAD variations by interpolating latent codes in the sketch latent space. Extensive experiments on both ABC and Fusion 360 datasets demonstrate the effectiveness of our method, and show superiority over state-of-the-art alternatives including the closely related method for supervised CAD reconstruction. We further apply our approach to CAD editing and single-view CAD reconstruction. The code is released at https://github.com/BunnySoCrazy/SECAD-Net.
### Multi-modal reward for visual relationships-based image captioning
- **Authors:** Ali Abedi, Hossein Karshenas, Peyman Adibi
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2303.10766
- **Pdf link:** https://arxiv.org/pdf/2303.10766
- **Abstract**
Deep neural networks have achieved promising results in automatic image captioning due to their effective representation learning and context-based content generation capabilities. As a prominent type of deep feature used in many recent image captioning methods, the well-known bottom-up features provide a detailed representation of the different objects of the image in comparison with the feature maps directly extracted from the raw image. However, the lack of high-level semantic information about the relationships between these objects is an important drawback of bottom-up features, despite their expensive and resource-demanding extraction procedure. To take advantage of visual relationships in caption generation, this paper proposes a deep neural network architecture for image captioning based on fusing the visual relationship information extracted from an image's scene graph with the spatial feature maps of the image. A multi-modal reward function is then introduced for deep reinforcement learning of the proposed network, using a combination of language and vision similarities in a common embedding space. The results of extensive experimentation on the MSCOCO dataset show the effectiveness of using visual relationships in the proposed captioning method. Moreover, the results clearly indicate that the proposed multi-modal reward in deep reinforcement learning leads to better model optimization, outperforming several state-of-the-art image captioning algorithms while using lightweight, easy-to-extract image features. A detailed experimental study of the components constituting the proposed method is also presented.
### Self-Supervised Learning for Multimodal Non-Rigid 3D Shape Matching
- **Authors:** Dongliang Cao, Florian Bernard
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Computational Geometry (cs.CG)
- **Arxiv link:** https://arxiv.org/abs/2303.10971
- **Pdf link:** https://arxiv.org/pdf/2303.10971
- **Abstract**
The matching of 3D shapes has been extensively studied for shapes represented as surface meshes, as well as for shapes represented as point clouds. While point clouds are a common representation of raw real-world 3D data (e.g. from laser scanners), meshes encode rich and expressive topological information, but their creation typically requires some form of (often manual) curation. In turn, methods that purely rely on point clouds are unable to meet the matching quality of mesh-based methods that utilise the additional topological structure. In this work we close this gap by introducing a self-supervised multimodal learning strategy that combines mesh-based functional map regularisation with a contrastive loss that couples mesh and point cloud data. Our shape matching approach allows us to obtain intramodal correspondences for triangle meshes, complete point clouds, and partially observed point clouds, as well as correspondences across these data modalities. We demonstrate that our method achieves state-of-the-art results on several challenging benchmark datasets even in comparison to recent supervised methods, and that our method reaches previously unseen cross-dataset generalisation ability.
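A contrastive loss that couples mesh and point-cloud features, as described above, can be sketched as a standard InfoNCE objective over paired per-shape features. The temperature value and the NumPy formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def info_nce(mesh_feats, pc_feats, temperature=0.07):
    """Contrastive loss coupling mesh and point-cloud features.

    mesh_feats, pc_feats: (N, D) L2-normalized features; row i of each
    matrix describes the same underlying shape (the positive pair).
    """
    logits = mesh_feats @ pc_feats.T / temperature        # (N, N) similarities
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal: mesh i should match point cloud i
    return float(-np.mean(np.diag(log_prob)))
```

Minimizing this loss pulls each mesh feature toward the point-cloud feature of the same shape while pushing it away from all other shapes in the batch.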
### Benchmarking Robustness of 3D Object Detection to Common Corruptions in Autonomous Driving
- **Authors:** Yinpeng Dong, Caixin Kang, Jinlai Zhang, Zijian Zhu, Yikai Wang, Xiao Yang, Hang Su, Xingxing Wei, Jun Zhu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)
- **Arxiv link:** https://arxiv.org/abs/2303.11040
- **Pdf link:** https://arxiv.org/pdf/2303.11040
- **Abstract**
3D object detection is an important task in autonomous driving to perceive the surroundings. Despite the excellent performance, the existing 3D detectors lack the robustness to real-world corruptions caused by adverse weather, sensor noise, etc., provoking concerns about the safety and reliability of autonomous driving systems. To comprehensively and rigorously benchmark the corruption robustness of 3D detectors, in this paper we design 27 types of common corruptions for both LiDAR and camera inputs considering real-world driving scenarios. By synthesizing these corruptions on public datasets, we establish three corruption robustness benchmarks -- KITTI-C, nuScenes-C, and Waymo-C. Then, we conduct large-scale experiments on 24 diverse 3D object detection models to evaluate their corruption robustness. Based on the evaluation results, we draw several important findings, including: 1) motion-level corruptions are the most threatening ones that lead to significant performance drop of all models; 2) LiDAR-camera fusion models demonstrate better robustness; 3) camera-only models are extremely vulnerable to image corruptions, showing the indispensability of LiDAR point clouds. We release the benchmarks and codes at https://github.com/kkkcx/3D_Corruptions_AD. We hope that our benchmarks and findings can provide insights for future research on developing robust 3D object detection models.
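Synthesizing corruptions of increasing severity for LiDAR and camera inputs can be sketched as below. The Gaussian point jitter and global brightness shift, and their severity scaling, are illustrative stand-ins; they are not among the paper's specific 27 corruption types.

```python
import numpy as np

def corrupt_lidar(points, severity=1, rng=None):
    # sensor-noise proxy: jitter every LiDAR point with Gaussian noise
    rng = rng if rng is not None else np.random.default_rng(0)
    std = 0.02 * severity                # noise scale grows with severity level
    return points + rng.normal(0.0, std, size=points.shape)

def corrupt_brightness(image, severity=1):
    # camera-corruption proxy: global brightness shift, clipped to [0, 1]
    return np.clip(image + 0.1 * severity, 0.0, 1.0)
```

A benchmark in this style would apply each corruption at several severity levels to the validation split and report the detector's performance drop per corruption.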
### From Sparse to Precise: A Practical Editing Approach for Intracardiac Echocardiography Segmentation
- **Authors:** Ahmed H. Shahin, Yan Zhuang, Noha El-Zehiry
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11041
- **Pdf link:** https://arxiv.org/pdf/2303.11041
- **Abstract**
Accurate and safe catheter ablation procedures for patients with atrial fibrillation require precise segmentation of cardiac structures in Intracardiac Echocardiography (ICE) imaging. Prior studies have suggested methods that employ 3D geometry information from the ICE transducer to create a sparse ICE volume by placing 2D frames in a 3D grid, enabling training of 3D segmentation models. However, the resulting 3D masks from these models can be inaccurate and may lead to serious clinical complications due to the sparse sampling in ICE data, frame misalignment, and cardiac motion. To address this issue, we propose an interactive editing framework that allows users to edit segmentation output by drawing scribbles on a 2D frame. The user interaction is mapped to the 3D grid and utilized to execute an editing step that modifies the segmentation in the vicinity of the interaction while preserving the previous segmentation away from the interaction. Furthermore, our framework accommodates multiple edits to the segmentation output in a sequential manner without compromising previous edits. This paper presents a novel loss function and a novel evaluation metric specifically designed for editing. Results from cross-validation and testing indicate that our proposed loss function outperforms standard losses and training strategies in terms of segmentation quality and following user input. Additionally, we show quantitatively and qualitatively that subsequent edits do not compromise previous edits when using our method, as opposed to standard segmentation losses. Overall, our approach enhances the accuracy of the segmentation while avoiding undesired changes away from user interactions and without compromising the quality of previously edited regions, leading to better patient outcomes.
### A Multi-Task Deep Learning Approach for Sensor-based Human Activity Recognition and Segmentation
- **Authors:** Furong Duan, Tao Zhu, Jinqiang Wang, Liming Chen, Huansheng Ning, Yaping Wan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11100
- **Pdf link:** https://arxiv.org/pdf/2303.11100
- **Abstract**
Sensor-based human activity segmentation and recognition are two important and challenging problems in many real-world applications and they have drawn increasing attention from the deep learning community in recent years. Most of the existing deep learning works were designed based on pre-segmented sensor streams and they have treated activity segmentation and recognition as two separate tasks. In practice, performing data stream segmentation is very challenging. We believe that both activity segmentation and recognition may convey unique information which can complement each other to improve the performance of the two tasks. In this paper, we first propose a new multi-task deep neural network to solve the two tasks simultaneously. The proposed neural network adopts selective convolution and features multi-scale windows to segment activities of long or short time durations. First, multiple windows of different scales are generated to center on each unit of the feature sequence. Then, the model is trained to predict, for each window, the activity class and the offset to the true activity boundaries. Finally, overlapping windows are filtered out by non-maximum suppression, and adjacent windows of the same activity are concatenated to complete the segmentation task. Extensive experiments were conducted on eight popular benchmark datasets, and the results show that our proposed method outperforms the state-of-the-art methods both for activity recognition and segmentation.
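The window generation and non-maximum suppression steps described above can be sketched in a simplified 1-D form. The window scales, the IoU threshold, and the function names below are illustrative assumptions rather than the paper's exact configuration.

```python
def generate_windows(seq_len, scales=(16, 32, 64)):
    # one candidate window of each scale, centred on every unit of the sequence
    windows = []
    for c in range(seq_len):
        for s in scales:
            windows.append((max(0, c - s // 2), min(seq_len, c + s // 2)))
    return windows

def nms_1d(windows, scores, iou_thresh=0.5):
    # keep highest-scoring windows; drop lower-scoring ones that overlap them
    order = sorted(range(len(windows)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        s1, e1 = windows[i]
        ok = True
        for j in keep:
            s2, e2 = windows[j]
            inter = max(0, min(e1, e2) - max(s1, s2))
            union = (e1 - s1) + (e2 - s2) - inter
            if union > 0 and inter / union > iou_thresh:
                ok = False
                break
        if ok:
            keep.append(i)
    return [windows[i] for i in keep]
```

After suppression, adjacent surviving windows carrying the same activity label would be concatenated into contiguous segments.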
### I2Edit: Towards Multi-turn Interactive Image Editing via Dialogue
- **Authors:** Xing Cui, Zekun Li, Peipei Li, Yibo Hu, Hailin Shi, Zhaofeng He
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11108
- **Pdf link:** https://arxiv.org/pdf/2303.11108
- **Abstract**
Although there have been considerable research efforts on controllable facial image editing, the desirable interactive setting where the users can interact with the system to adjust their requirements dynamically hasn't been well explored. This paper focuses on facial image editing via dialogue and introduces a new benchmark dataset, Multi-turn Interactive Image Editing (I2Edit), for evaluating image editing quality and interaction ability in real-world interactive facial editing scenarios. The dataset is constructed upon the CelebA-HQ dataset with images annotated with a multi-turn dialogue that corresponds to the user editing requirements. I2Edit is challenging, as it needs to 1) track the dynamically updated user requirements and edit the images accordingly, as well as 2) generate the appropriate natural language response to communicate with the user. To address these challenges, we propose a framework consisting of a dialogue module and an image editing module. The former is for user edit requirements tracking and generating the corresponding indicative responses, while the latter edits the images conditioned on the tracked user edit requirements. In contrast to previous works that simply treat multi-turn interaction as a sequence of single-turn interactions, we extract the user edit requirements from the whole dialogue history instead of the current single turn. The extracted global user edit requirements enable us to directly edit the input raw image to avoid error accumulation and attribute forgetting issues. Extensive quantitative and qualitative experiments on the I2Edit dataset demonstrate the advantage of our proposed framework over the previous single-turn methods. We believe our new dataset could serve as a valuable resource to push forward the exploration of real-world, complex interactive image editing. Code and data will be made public.
### SeiT: Storage-Efficient Vision Training with Tokens Using 1% of Pixel Storage
- **Authors:** Song Park, Sanghyuk Chun, Byeongho Heo, Wonjae Kim, Sangdoo Yun
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11114
- **Pdf link:** https://arxiv.org/pdf/2303.11114
- **Abstract**
We need billion-scale images to achieve more generalizable and ground-breaking vision models, as well as massive dataset storage to ship the images (e.g., the LAION-4B dataset needs 240TB storage space). However, it has become challenging to deal with unlimited dataset storage with limited storage infrastructure. A number of storage-efficient training methods have been proposed to tackle the problem, but they are rarely scalable or suffer from severe damage to performance. In this paper, we propose a storage-efficient training strategy for vision classifiers for large-scale datasets (e.g., ImageNet) that only uses 1024 tokens per instance without using the raw-level pixels; our token storage only needs <1% of the original JPEG-compressed raw pixels. We also propose token augmentations and a Stem-adaptor module to make our approach able to use the same architecture as pixel-based approaches with only minimal modifications on the stem layer and the carefully tuned optimization settings. Our experimental results on ImageNet-1k show that our method significantly outperforms other storage-efficient training methods by a large margin. We further show the effectiveness of our method in other practical scenarios, storage-efficient pre-training, and continual learning. Code is available at https://github.com/naver-ai/seit
### AnimeDiffusion: Anime Face Line Drawing Colorization via Diffusion Models
- **Authors:** Yu Cao, Xiangqiao Meng, P.Y. Mok, Xueting Liu, Tong-Yee Lee, Ping Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11137
- **Pdf link:** https://arxiv.org/pdf/2303.11137
- **Abstract**
Manually colorizing anime line drawing images is time-consuming and tedious work, and an essential stage in the cartoon animation creation pipeline. Reference-based line drawing colorization is a challenging task that relies on the precise cross-domain long-range dependency modelling between the line drawing and reference image. Existing learning methods still utilize generative adversarial networks (GANs) as one key module of their model architecture. In this paper, we propose a novel method called AnimeDiffusion using diffusion models that performs anime face line drawing colorization automatically. To the best of our knowledge, this is the first diffusion model tailored for anime content creation. In order to solve the huge training consumption problem of diffusion models, we design a hybrid training strategy, first pre-training a diffusion model with classifier-free guidance and then fine-tuning it with image reconstruction guidance. We find that with a few iterations of fine-tuning, the model shows wonderful colorization performance, as illustrated in Fig. 1. For training AnimeDiffusion, we construct an anime face line drawing colorization benchmark dataset, which contains 31,696 training images and 579 test images. We hope this dataset can fill the gap left by the lack of an available high-resolution anime face dataset for colorization method evaluation. Through multiple quantitative metrics evaluated on our dataset and a user study, we demonstrate AnimeDiffusion outperforms state-of-the-art GAN-based models for anime face line drawing colorization. We also collaborate with professional artists to test and apply our AnimeDiffusion for their creation work. We release our code on https://github.com/xq-meng/AnimeDiffusion.
### Attribute-preserving Face Dataset Anonymization via Latent Code Optimization
- **Authors:** Simone Barattin, Christos Tzelepis, Ioannis Patras, Nicu Sebe
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11296
- **Pdf link:** https://arxiv.org/pdf/2303.11296
- **Abstract**
This work addresses the problem of anonymizing the identity of faces in a dataset of images, such that the privacy of those depicted is not violated, while at the same time the dataset is useful for downstream tasks, such as training machine learning models. To the best of our knowledge, we are the first to explicitly address this issue and deal with two major drawbacks of the existing state-of-the-art approaches, namely that they (i) require the costly training of additional, purpose-trained neural networks, and/or (ii) fail to retain the facial attributes of the original images in the anonymized counterparts, the preservation of which is of paramount importance for their use in downstream tasks. We accordingly present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN. By optimizing the latent codes directly, we ensure both that the identity is a desired distance away from the original (with an identity obfuscation loss), whilst preserving the facial attributes (using a novel feature-matching loss in FaRL's deep feature space). We demonstrate through a series of both qualitative and quantitative experiments that our method is capable of anonymizing the identity of the images whilst -- crucially -- better-preserving the facial attributes. We make the code and the pre-trained models publicly available at: https://github.com/chi0tzp/FALCO.
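The joint objective of an identity obfuscation term plus an attribute-preserving feature-matching term can be sketched as follows. The hinge form, the margin, and the loss weighting are illustrative assumptions; they are not FALCO's exact losses.

```python
import numpy as np

def cos(a, b):
    # cosine similarity between two feature vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def anonymization_loss(id_new, id_orig, attr_new, attr_orig,
                       margin=0.5, lam=1.0):
    # identity obfuscation: penalise identities more similar than `margin`
    id_loss = max(0.0, cos(id_new, id_orig) - margin)
    # feature matching: keep attribute features (e.g. in a FaRL-like space)
    # close to the original image's features
    attr_loss = float(np.linalg.norm(attr_new - attr_orig))
    return id_loss + lam * attr_loss
```

During optimization, gradients of this loss with respect to the GAN latent code would push the synthesized face away from the source identity while holding its attributes fixed.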
## Keyword: raw image
### Multi-modal reward for visual relationships-based image captioning
- **Authors:** Ali Abedi, Hossein Karshenas, Peyman Adibi
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2303.10766
- **Pdf link:** https://arxiv.org/pdf/2303.10766
- **Abstract**
Deep neural networks have achieved promising results in automatic image captioning due to their effective representation learning and context-based content generation capabilities. As a prominent type of deep features used in many of the recent image captioning methods, the well-known bottom-up features provide a detailed representation of different objects of the image in comparison with the feature maps directly extracted from the raw image. However, the lack of high-level semantic information about the relationships between these objects is an important drawback of bottom-up features, despite their expensive and resource-demanding extraction procedure. To take advantage of visual relationships in caption generation, this paper proposes a deep neural network architecture for image captioning based on fusing the visual relationships information extracted from an image's scene graph with the spatial feature maps of the image. A multi-modal reward function is then introduced for deep reinforcement learning of the proposed network using a combination of language and vision similarities in a common embedding space. The results of extensive experimentation on the MSCOCO dataset show the effectiveness of using visual relationships in the proposed captioning method. Moreover, the results clearly indicate that the proposed multi-modal reward in deep reinforcement learning leads to better model optimization, outperforming several state-of-the-art image captioning algorithms, while using light and easy-to-extract image features. A detailed experimental study of the components constituting the proposed method is also presented.
### I2Edit: Towards Multi-turn Interactive Image Editing via Dialogue
- **Authors:** Xing Cui, Zekun Li, Peipei Li, Yibo Hu, Hailin Shi, Zhaofeng He
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.11108
- **Pdf link:** https://arxiv.org/pdf/2303.11108
- **Abstract**
Although there have been considerable research efforts on controllable facial image editing, the desirable interactive setting where the users can interact with the system to adjust their requirements dynamically hasn't been well explored. This paper focuses on facial image editing via dialogue and introduces a new benchmark dataset, Multi-turn Interactive Image Editing (I2Edit), for evaluating image editing quality and interaction ability in real-world interactive facial editing scenarios. The dataset is constructed upon the CelebA-HQ dataset with images annotated with a multi-turn dialogue that corresponds to the user editing requirements. I2Edit is challenging, as it needs to 1) track the dynamically updated user requirements and edit the images accordingly, as well as 2) generate the appropriate natural language response to communicate with the user. To address these challenges, we propose a framework consisting of a dialogue module and an image editing module. The former is for user edit requirements tracking and generating the corresponding indicative responses, while the latter edits the images conditioned on the tracked user edit requirements. In contrast to previous works that simply treat multi-turn interaction as a sequence of single-turn interactions, we extract the user edit requirements from the whole dialogue history instead of the current single turn. The extracted global user edit requirements enable us to directly edit the input raw image to avoid error accumulation and attribute forgetting issues. Extensive quantitative and qualitative experiments on the I2Edit dataset demonstrate the advantage of our proposed framework over the previous single-turn methods. We believe our new dataset could serve as a valuable resource to push forward the exploration of real-world, complex interactive image editing. Code and data will be made public.
of the art methods under metrics on datasets of binary segmentation tasks benchmarking robustness of object detection to common corruptions in autonomous driving authors yinpeng dong caixin kang jinlai zhang zijian zhu yikai wang xiao yang hang su xingxing wei jun zhu subjects computer vision and pattern recognition cs cv artificial intelligence cs ai cryptography and security cs cr arxiv link pdf link abstract object detection is an important task in autonomous driving to perceive the surroundings despite the excellent performance the existing detectors lack the robustness to real world corruptions caused by adverse weathers sensor noises etc provoking concerns about the safety and reliability of autonomous driving systems to comprehensively and rigorously benchmark the corruption robustness of detectors in this paper we design types of common corruptions for both lidar and camera inputs considering real world driving scenarios by synthesizing these corruptions on public datasets we establish three corruption robustness benchmarks kitti c nuscenes c and waymo c then we conduct large scale experiments on diverse object detection models to evaluate their corruption robustness based on the evaluation results we draw several important findings including motion level corruptions are the most threatening ones that lead to significant performance drop of all models lidar camera fusion models demonstrate better robustness camera only models are extremely vulnerable to image corruptions showing the indispensability of lidar point clouds we release the benchmarks and codes at we hope that our benchmarks and findings can provide insights for future research on developing robust object detection models hiface high fidelity face reconstruction by learning static and dynamic details authors zenghao chai tianke zhang tianyu he xu tan tadas baltrusaitis hsiangtao wu runnan li sheng zhao chun yuan jiang bian subjects computer vision and pattern recognition cs cv graphics cs gr 
arxiv link pdf link abstract morphable models demonstrate great potential for reconstructing faithful and animatable facial surfaces from a single image the facial surface is influenced by the coarse shape as well as the static detail e g person specific appearance and dynamic detail e g expression driven wrinkles previous work struggles to decouple the static and dynamic details through image level supervision leading to reconstructions that are not realistic in this paper we aim at high fidelity face reconstruction and propose hiface to explicitly model the static and dynamic details specifically the static detail is modeled as the linear combination of a displacement basis while the dynamic detail is modeled as the linear interpolation of two displacement maps with polarized expressions we exploit several loss functions to jointly learn the coarse shape and fine details with both synthetic and real world datasets which enable hiface to reconstruct high fidelity shapes with animatable details extensive quantitative and qualitative experiments demonstrate that hiface presents state of the art reconstruction quality and faithfully recovers both the static and dynamic details our project page can be found at keyword image signal processing there is no result keyword image signal process there is no result keyword compression vimi vehicle infrastructure multi view intermediate fusion for camera based object detection authors zhe wang siqi fan xiaoliang huo tongda xu yan wang jingjing liu yilun chen ya qin zhang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract in autonomous driving vehicle infrastructure cooperative object detection makes use of multi view cameras from both vehicles and traffic infrastructure providing a global vantage point with rich semantic context of road conditions beyond a single vehicle viewpoint two major challenges prevail in inherent calibration noise when fusing multi view images caused by time asynchrony 
across cameras information loss when projecting features into space to address these issues we propose a novel object detection framework vehicles infrastructure multi view intermediate fusion vimi first to fully exploit the holistic perspectives from both vehicles and infrastructure we propose a multi scale cross attention mca module that fuses infrastructure and vehicle features on selective multi scales to correct the calibration noise introduced by camera asynchrony then we design a camera aware channel masking ccm module that uses camera parameters as priors to augment the fused features we further introduce a feature compression fc module with channel and spatial compression blocks to reduce the size of transmitted features for enhanced efficiency experiments show that vimi achieves overall ap and ap bev on the new dataset dair c significantly outperforming state of the art early fusion and late fusion methods with comparable transmission cost keyword raw hgib prognosis for alzheimer s disease via hypergraph information bottleneck authors shujun wang angelica i aviles rivero zoe kourtzi carola bibiane schönlieb subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract alzheimer s disease prognosis is critical for early mild cognitive impairment patients for timely treatment to improve the patient s quality of life whilst existing prognosis techniques demonstrate potential results they are highly limited in terms of using a single modality most importantly they fail in considering a key element for prognosis not all features extracted at the current moment may contribute to the prognosis prediction several years later to address the current drawbacks of the literature we propose a novel hypergraph framework based on an information bottleneck strategy hgib firstly our framework seeks to discriminate irrelevant information and therefore solely focus on harmonising relevant information for future mci conversion prediction e g two years 
later secondly our model simultaneously accounts for multi modal data based on imaging and non imaging modalities hgib uses a hypergraph structure to represent the multi modality data and accounts for various data modality types thirdly the key of our model is based on a new optimisation scheme it is based on modelling the principle of information bottleneck into loss functions that can be integrated into our hypergraph neural network we demonstrate through extensive experiments on adni that our proposed hgib framework outperforms existing state of the art hypergraph neural networks for alzheimer s disease prognosis we showcase our model even under fewer labels finally we further support the robustness and generalisation capabilities of our framework under both topological and feature perturbations exploring expression related self supervised learning for affective behaviour analysis authors fanglei xue yifan sun yi yang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract this paper explores an expression related self supervised learning ssl method contrawarping to perform expression classification in the affective behavior analysis in the wild abaw competition affective datasets are expensive to annotate and ssl methods could learn from large scale unlabeled data which is more suitable for this task by evaluating on the aff dataset we demonstrate that contrawarping outperforms most existing supervised methods and shows great application potential in the affective analysis area codes will be released on vehicle infrastructure cooperative object detection via feature flow prediction authors haibao yu yingjuan tang enze xie jilei mao jirui yuan ping luo zaiqing nie subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract cooperatively utilizing both ego vehicle and infrastructure sensor data can significantly enhance autonomous driving perception abilities however temporal asynchrony and limited wireless 
communication in traffic environments can lead to fusion misalignment and impact detection performance this paper proposes feature flow net ffnet a novel cooperative detection framework that uses a feature flow prediction module to address these issues in vehicle infrastructure cooperative object detection rather than transmitting feature maps extracted from still images ffnet transmits feature flow which leverages the temporal coherence of sequential infrastructure frames to predict future features and compensate for asynchrony additionally we introduce a self supervised approach to enable ffnet to generate feature flow with feature prediction ability experimental results demonstrate that our proposed method outperforms existing cooperative detection methods while requiring no more than transmission cost of raw data on the dair dataset when temporal asynchrony exceeds ms the code is available at href secad net self supervised cad reconstruction by learning sketch extrude operations authors pu li jianwei guo xiaopeng zhang dong ming yan subjects computer vision and pattern recognition cs cv graphics cs gr machine learning cs lg arxiv link pdf link abstract reverse engineering cad models from raw geometry is a classic but strenuous research problem previous learning based methods rely heavily on labels due to the supervised design patterns or reconstruct cad shapes that are not easily editable in this work we introduce secad net an end to end neural network aimed at reconstructing compact and easy to edit cad models in a self supervised manner drawing inspiration from the modeling language that is most commonly used in modern cad software we propose to learn sketches and extrusion parameters from raw shapes from which a set of extrusion cylinders can be generated by extruding each sketch from a plane into a body by incorporating the boolean operation i e union these cylinders can be combined to closely approximate the target geometry we advocate the use of implicit 
fields for sketch representation which allows for creating cad variations by interpolating latent codes in the sketch latent space extensive experiments on both abc and fusion datasets demonstrate the effectiveness of our method and show superiority over state of the art alternatives including the closely related method for supervised cad reconstruction we further apply our approach to cad editing and single view cad reconstruction the code is released at multi modal reward for visual relationships based image captioning authors ali abedi hossein karshenas peyman adibi subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract deep neural networks have achieved promising results in automatic image captioning due to their effective representation learning and context based content generation capabilities as a prominent type of deep features used in many of the recent image captioning methods the well known bottomup features provide a detailed representation of different objects of the image in comparison with the feature maps directly extracted from the raw image however the lack of high level semantic information about the relationships between these objects is an important drawback of bottom up features despite their expensive and resource demanding extraction procedure to take advantage of visual relationships in caption generation this paper proposes a deep neural network architecture for image captioning based on fusing the visual relationships information extracted from an image s scene graph with the spatial feature maps of the image a multi modal reward function is then introduced for deep reinforcement learning of the proposed network using a combination of language and vision similarities in a common embedding space the results of extensive experimentation on the mscoco dataset show the effectiveness of using visual relationships in the proposed captioning method moreover the results clearly indicate 
that the proposed multi modal reward in deep reinforcement learning leads to better model optimization outperforming several state of the art image captioning algorithms while using light and easy to extract image features a detailed experimental study of the components constituting the proposed method is also presented self supervised learning for multimodal non rigid shape matching authors dongliang cao florian bernard subjects computer vision and pattern recognition cs cv artificial intelligence cs ai computational geometry cs cg arxiv link pdf link abstract the matching of shapes has been extensively studied for shapes represented as surface meshes as well as for shapes represented as point clouds while point clouds are a common representation of raw real world data e g from laser scanners meshes encode rich and expressive topological information but their creation typically requires some form of often manual curation in turn methods that purely rely on point clouds are unable to meet the matching quality of mesh based methods that utilise the additional topological structure in this work we close this gap by introducing a self supervised multimodal learning strategy that combines mesh based functional map regularisation with a contrastive loss that couples mesh and point cloud data our shape matching approach allows to obtain intramodal correspondences for triangle meshes complete point clouds and partially observed point clouds as well as correspondences across these data modalities we demonstrate that our method achieves state of the art results on several challenging benchmark datasets even in comparison to recent supervised methods and that our method reaches previously unseen cross dataset generalisation ability benchmarking robustness of object detection to common corruptions in autonomous driving authors yinpeng dong caixin kang jinlai zhang zijian zhu yikai wang xiao yang hang su xingxing wei jun zhu subjects computer vision and pattern recognition cs 
cv artificial intelligence cs ai cryptography and security cs cr arxiv link pdf link abstract object detection is an important task in autonomous driving to perceive the surroundings despite the excellent performance the existing detectors lack the robustness to real world corruptions caused by adverse weathers sensor noises etc provoking concerns about the safety and reliability of autonomous driving systems to comprehensively and rigorously benchmark the corruption robustness of detectors in this paper we design types of common corruptions for both lidar and camera inputs considering real world driving scenarios by synthesizing these corruptions on public datasets we establish three corruption robustness benchmarks kitti c nuscenes c and waymo c then we conduct large scale experiments on diverse object detection models to evaluate their corruption robustness based on the evaluation results we draw several important findings including motion level corruptions are the most threatening ones that lead to significant performance drop of all models lidar camera fusion models demonstrate better robustness camera only models are extremely vulnerable to image corruptions showing the indispensability of lidar point clouds we release the benchmarks and codes at we hope that our benchmarks and findings can provide insights for future research on developing robust object detection models from sparse to precise a practical editing approach for intracardiac echocardiography segmentation authors ahmed h shahin yan zhuang noha el zehiry subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract accurate and safe catheter ablation procedures for patients with atrial fibrillation require precise segmentation of cardiac structures in intracardiac echocardiography ice imaging prior studies have suggested methods that employ geometry information from the ice transducer to create a sparse ice volume by placing frames in a grid enabling training of segmentation 
models however the resulting masks from these models can be inaccurate and may lead to serious clinical complications due to the sparse sampling in ice data frames misalignment and cardiac motion to address this issue we propose an interactive editing framework that allows users to edit segmentation output by drawing scribbles on a frame the user interaction is mapped to the grid and utilized to execute an editing step that modifies the segmentation in the vicinity of the interaction while preserving the previous segmentation away from the interaction furthermore our framework accommodates multiple edits to the segmentation output in a sequential manner without compromising previous edits this paper presents a novel loss function and a novel evaluation metric specifically designed for editing results from cross validation and testing indicate that our proposed loss function outperforms standard losses and training strategies in terms of segmentation quality and following user input additionally we show quantitatively and qualitatively that subsequent edits do not compromise previous edits when using our method as opposed to standard segmentation losses overall our approach enhances the accuracy of the segmentation while avoiding undesired changes away from user interactions and without compromising the quality of previously edited regions leading to better patient outcomes a multi task deep learning approach for sensor based human activity recognition and segmentation authors furong duan tao zhu jinqiang wang liming chen huansheng ning yaping wan subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract sensor based human activity segmentation and recognition are two important and challenging problems in many real world applications and they have drawn increasing attention from the deep learning community in recent years most of the existing deep learning works were designed based on pre segmented sensor streams and they have treated 
activity segmentation and recognition as two separate tasks in practice performing data stream segmentation is very challenging we believe that both activity segmentation and recognition may convey unique information which can complement each other to improve the performance of the two tasks in this paper we firstly proposes a new multitask deep neural network to solve the two tasks simultaneously the proposed neural network adopts selective convolution and features multiscale windows to segment activities of long or short time durations first multiple windows of different scales are generated to center on each unit of the feature sequence then the model is trained to predict for each window the activity class and the offset to the true activity boundaries finally overlapping windows are filtered out by non maximum suppression and adjacent windows of the same activity are concatenated to complete the segmentation task extensive experiments were conducted on eight popular benchmarking datasets and the results show that our proposed method outperforms the state of the art methods both for activity recognition and segmentation towards multi turn interactive image editing via dialogue authors xing cui zekun li peipei li yibo hu hailin shi zhaofeng he subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract although there have been considerable research efforts on controllable facial image editing the desirable interactive setting where the users can interact with the system to adjust their requirements dynamically hasn t been well explored this paper focuses on facial image editing via dialogue and introduces a new benchmark dataset multi turn interactive image editing for evaluating image editing quality and interaction ability in real world interactive facial editing scenarios the dataset is constructed upon the celeba hq dataset with images annotated with a multi turn dialogue that corresponds to the user editing requirements is 
challenging as it needs to track the dynamically updated user requirements and edit the images accordingly as well as generate the appropriate natural language response to communicate with the user to address these challenges we propose a framework consisting of a dialogue module and an image editing module the former is for user edit requirements tracking and generating the corresponding indicative responses while the latter edits the images conditioned on the tracked user edit requirements in contrast to previous works that simply treat multi turn interaction as a sequence of single turn interactions we extract the user edit requirements from the whole dialogue history instead of the current single turn the extracted global user edit requirements enable us to directly edit the input raw image to avoid error accumulation and attribute forgetting issues extensive quantitative and qualitative experiments on the dataset demonstrate the advantage of our proposed framework over the previous single turn methods we believe our new dataset could serve as a valuable resource to push forward the exploration of real world complex interactive image editing code and data will be made public seit storage efficient vision training with tokens using of pixel storage authors song park sanghyuk chun byeongho heo wonjae kim sangdoo yun subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract we need billion scale images to achieve more generalizable and ground breaking vision models as well as massive dataset storage to ship the images e g the laion dataset needs storage space however it has become challenging to deal with unlimited dataset storage with limited storage infrastructure a number of storage efficient training methods have been proposed to tackle the problem but they are rarely scalable or suffer from severe damage to performance in this paper we propose a storage efficient training strategy for vision classifiers for large scale datasets e g 
imagenet that only uses tokens per instance without using the raw level pixels our token storage only needs of the original jpeg compressed raw pixels we also propose token augmentations and a stem adaptor module to make our approach able to use the same architecture as pixel based approaches with only minimal modifications on the stem layer and the carefully tuned optimization settings our experimental results on imagenet show that our method significantly outperforms other storage efficient training methods with a large gap we further show the effectiveness of our method in other practical scenarios storage efficient pre training and continual learning code is available at animediffusion anime face line drawing colorization via diffusion models authors yu cao xiangqiao meng p y mok xueting liu tong yee lee ping li subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract it is a time consuming and tedious work for manually colorizing anime line drawing images which is an essential stage in cartoon animation creation pipeline reference based line drawing colorization is a challenging task that relies on the precise cross domain long range dependency modelling between the line drawing and reference image existing learning methods still utilize generative adversarial networks gans as one key module of their model architecture in this paper we propose a novel method called animediffusion using diffusion models that performs anime face line drawing colorization automatically to the best of our knowledge this is the first diffusion model tailored for anime content creation in order to solve the huge training consumption problem of diffusion models we design a hybrid training strategy first pre training a diffusion model with classifier free guidance and then fine tuning it with image reconstruction guidance we find that with a few iterations of fine tuning the model shows wonderful colorization performance as illustrated in fig for training 
animediffusion we conduct an anime face line drawing colorization benchmark dataset which contains training data and testing data we hope this dataset can fill the gap of no available high resolution anime face dataset for colorization method evaluation through multiple quantitative metrics evaluated on our dataset and a user study we demonstrate animediffusion outperforms state of the art gans based models for anime face line drawing colorization we also collaborate with professional artists to test and apply our animediffusion for their creation work we release our code on attribute preserving face dataset anonymization via latent code optimization authors simone barattin christos tzelepis ioannis patras nicu sebe subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract this work addresses the problem of anonymizing the identity of faces in a dataset of images such that the privacy of those depicted is not violated while at the same time the dataset is useful for downstream task such as for training machine learning models to the best of our knowledge we are the first to explicitly address this issue and deal with two major drawbacks of the existing state of the art approaches namely that they i require the costly training of additional purpose trained neural networks and or ii fail to retain the facial attributes of the original images in the anonymized counterparts the preservation of which is of paramount importance for their use in downstream tasks we accordingly present a task agnostic anonymization procedure that directly optimizes the images latent representation in the latent space of a pre trained gan by optimizing the latent codes directly we ensure both that the identity is of a desired distance away from the original with an identity obfuscation loss whilst preserving the facial attributes using a novel feature matching loss in farl s deep feature space we demonstrate through a series of both qualitative and quantitative 
experiments that our method is capable of anonymizing the identity of the images whilst crucially better preserving the facial attributes we make the code and the pre trained models publicly available at keyword raw image multi modal reward for visual relationships based image captioning authors ali abedi hossein karshenas peyman adibi subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract deep neural networks have achieved promising results in automatic image captioning due to their effective representation learning and context based content generation capabilities as a prominent type of deep features used in many of the recent image captioning methods the well known bottomup features provide a detailed representation of different objects of the image in comparison with the feature maps directly extracted from the raw image however the lack of high level semantic information about the relationships between these objects is an important drawback of bottom up features despite their expensive and resource demanding extraction procedure to take advantage of visual relationships in caption generation this paper proposes a deep neural network architecture for image captioning based on fusing the visual relationships information extracted from an image s scene graph with the spatial feature maps of the image a multi modal reward function is then introduced for deep reinforcement learning of the proposed network using a combination of language and vision similarities in a common embedding space the results of extensive experimentation on the mscoco dataset show the effectiveness of using visual relationships in the proposed captioning method moreover the results clearly indicate that the proposed multi modal reward in deep reinforcement learning leads to better model optimization outperforming several state of the art image captioning algorithms while using light and easy to extract image features a detailed 
experimental study of the components constituting the proposed method is also presented towards multi turn interactive image editing via dialogue authors xing cui zekun li peipei li yibo hu hailin shi zhaofeng he subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract although there have been considerable research efforts on controllable facial image editing the desirable interactive setting where the users can interact with the system to adjust their requirements dynamically hasn t been well explored this paper focuses on facial image editing via dialogue and introduces a new benchmark dataset multi turn interactive image editing for evaluating image editing quality and interaction ability in real world interactive facial editing scenarios the dataset is constructed upon the celeba hq dataset with images annotated with a multi turn dialogue that corresponds to the user editing requirements is challenging as it needs to track the dynamically updated user requirements and edit the images accordingly as well as generate the appropriate natural language response to communicate with the user to address these challenges we propose a framework consisting of a dialogue module and an image editing module the former is for user edit requirements tracking and generating the corresponding indicative responses while the latter edits the images conditioned on the tracked user edit requirements in contrast to previous works that simply treat multi turn interaction as a sequence of single turn interactions we extract the user edit requirements from the whole dialogue history instead of the current single turn the extracted global user edit requirements enable us to directly edit the input raw image to avoid error accumulation and attribute forgetting issues extensive quantitative and qualitative experiments on the dataset demonstrate the advantage of our proposed framework over the previous single turn methods we believe our new dataset could serve as 
a valuable resource to push forward the exploration of real world complex interactive image editing code and data will be made public | 1 |
89,971 | 25,939,116,341 | IssuesEvent | 2022-12-16 16:38:03 | TrueBlocks/trueblocks-docker | https://api.github.com/repos/TrueBlocks/trueblocks-docker | closed | Choose consistent convention for tagging releases | enhancement TB-build | In the docker versions, we use `0.40.0-beta` for version tagging.
In the core repo, we use `v0.40.0-beta` for tagging.
I prefer `v0.40.0-beta` format (with the `v`), but it seems counter to the way docker does it.
Choices:
1) leave them different
2) switch to `0.40.0-beta` for all repos
3) switch to `v0.40.0-beta` for all repos
| 1.0 | Choose consistent convention for tagging releases - In the docker versions, we use `0.40.0-beta` for version tagging.
In the core repo, we use `v0.40.0-beta` for tagging.
I prefer `v0.40.0-beta` format (with the `v`), but it seems counter to the way docker does it.
Choices:
1) leave them different
2) switch to `0.40.0-beta` for all repos
3) switch to `v0.40.0-beta` for all repos
| non_process | choose consistent convention for tagging releases in the docker versions we use beta for version tagging in the core repo we use beta for tagging i prefer beta format with the v but it seems counter to the way docker does it choices leave them different switch to beta for all repos switch to beta for all repos | 0 |
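If the repos end up with mixed conventions (option 1), tooling can still compare versions by normalizing the optional `v` prefix; a minimal illustrative helper, not part of TrueBlocks:

```python
def normalize_tag(tag):
    """Strip a single leading 'v' so 'v0.40.0-beta' and '0.40.0-beta' compare equal."""
    return tag[1:] if tag.startswith("v") else tag

assert normalize_tag("v0.40.0-beta") == normalize_tag("0.40.0-beta") == "0.40.0-beta"
```

Options 2 and 3 make this normalization unnecessary by picking one form everywhere.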
13,692 | 16,449,700,264 | IssuesEvent | 2021-05-21 02:37:30 | pycaret/pycaret | https://api.github.com/repos/pycaret/pycaret | closed | [2.3] Preprocessing Refactoring | enhancement no-issue-activity preprocessing | All PyCaret transformers need to be refactored to conform to sklearn API (separate X and y). In this iteration, support only for Pandas is required. To this end:
* Individual transformers need to be refactored - opportunities to improve performance should be considered
* A new pipeline class inheriting from `imblearn` Pipeline needs to be created to allow for additional features (ensuring that the X and y are pandas objects etc.)
* A new "pipeline" meta estimator class for y needs to be created to facilitate target transformation (details to follow)
* New tests should be added for individual transformers
The end result should have all PyCaret transformers being usable as a part of a sklearn workflow, provided that they receive Pandas objects.
This is necessary for passing Pipelines to CV and scalability. | 1.0 | [2.3] Preprocessing Refactoring - All PyCaret transformers need to be refactored to conform to sklearn API (separate X and y). In this iteration, support only for Pandas is required. To this end:
* Individual transformers need to be refactored - opportunities to improve performance should be considered
* A new pipeline class inheriting from `imblearn` Pipeline needs to be created to allow for additional features (ensuring that the X and y are pandas objects etc.)
* A new "pipeline" meta estimator class for y needs to be created to facilitate target transformation (details to follow)
* New tests should be added for individual transformers
The end result should have all PyCaret transformers being usable as a part of a sklearn workflow, provided that they receive Pandas objects.
This is necessary for passing Pipelines to CV and scalability. | process | preprocessing refactoring all pycaret transformers need to be refactored to conform to sklearn api separate x and y in this iteration support only for pandas is required to this end individual transformers need to be refactored opportunities to improve performance should be considered a new pipeline class inheriting from imblearn pipeline needs to be created to allow for additional features ensuring that the x and y are pandas objects etc a new pipeline meta estimator class for y needs to be created to facilitate target transformation details to follow new tests should be added for individual transformators the end result should have all pycaret transformators being usable as a part of a sklearn workflow provided that they receive pandas objects this is necessary for passing pipelines to cv and scalability | 1 |
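The sklearn API convention the refactoring targets (separate X and y, `fit`/`transform` methods, fitted state stored on the instance, `fit` returning `self`) can be sketched without the real libraries; `DropConstantColumns` is a made-up toy transformer, and a plain dict of column name to values stands in for a pandas DataFrame:

```python
class DropConstantColumns:
    """Toy transformer following the sklearn fit/transform contract:
    fit(X, y=None) learns state and returns self; transform(X) applies it;
    y is accepted separately and never mutated."""

    def fit(self, X, y=None):
        # X: mapping of column name -> list of values (stand-in for a DataFrame)
        self.keep_ = [col for col, values in X.items() if len(set(values)) > 1]
        return self  # returning self enables fit(...).transform(...) chaining

    def transform(self, X):
        return {col: X[col] for col in self.keep_}

X = {"a": [1, 2, 3], "b": [7, 7, 7]}
t = DropConstantColumns().fit(X, y=[0, 1, 0])
print(t.transform(X))  # {'a': [1, 2, 3]} -- the constant column 'b' is dropped
```

A transformer written to this contract can then be composed into a Pipeline (in PyCaret's case, the planned subclass of imblearn's Pipeline) and handed to sklearn cross-validation unchanged.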
16,297 | 20,946,986,332 | IssuesEvent | 2022-03-26 02:46:35 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | GDAL Raster Calculator writing files correctly but logging overwrite errors | Feedback stale Processing Bug | ### What is the bug or the crash?
When using the GDAL raster calculator and overwriting the output file, this is what happens in different QGIS versions and use cases.
In 3.22.2 the `--overwrite` argument is passed to GDAL; in 3.16.5 it is not.
In 3.16.5, GDAL successfully processed and wrote the data despite the error messages; in 3.22.2 it did not.
Note: sometimes you have to refresh the map to see changes.
Case | 3.16.5 | 3.22.2
-- | -- | --
Output file not loaded in canvas but exist | ERROR 1: C:\\Users\\asiddiqui\\Desktop\\plugin_test\\Runoff Local.tif, band 1: Write operation not permitted on dataset opened in read-only mode 0.. Block writing failed | No Error Message
Output file loaded in canvas | ERROR 1: C:\\Users\\asiddiqui\\Desktop\\plugin_test\\Runoff Local.tif, band 1: Write operation not permitted on dataset opened in read-only mode 0.. Block writing failed | [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\\\Users\\\\asiddiqui\\\\Desktop\\\\plugin_test8\\\\Runoff Local.tif' Process returned error code 0
### Steps to reproduce the issue
Overwrite a file using the GDAL raster calculator.
### Versions
QGIS version: 3.22.2-Białowieża
QGIS code revision: 1601ec46d0
Qt version: 5.15.2
Python version: 3.9.5
GDAL version: 3.4.0
GEOS version: 3.10.0-CAPI-1.16.0
PROJ version: Rel. 8.2.0, November 1st, 2021
PDAL version: 2.3.0 (git-version: 9f35b7)
QGIS version: 3.16.15-Hannover
QGIS code revision: e7fdad64
Qt version: 5.15.2
GDAL version: 3.4.0
GEOS version: 3.10.0-CAPI-1.16.0
PROJ version: Rel. 8.2.0, November 1st, 2021
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
This might be related to work on #42447 | 1.0 | GDAL Raster Calculator writing files correctly but logging overwrite errors - ### What is the bug or the crash?
When using the GDAL raster calculator and overwriting the output file, this is what happens in different QGIS versions and use cases.
In 3.22.2 the `--overwrite` argument is passed to GDAL; in 3.16.5 it is not.
In 3.16.5, GDAL successfully processed and wrote the data despite the error messages; in 3.22.2 it did not.
Note: sometimes you have to refresh the map to see changes.
Case | 3.16.5 | 3.22.2
-- | -- | --
Output file not loaded in canvas but exist | ERROR 1: C:\\Users\\asiddiqui\\Desktop\\plugin_test\\Runoff Local.tif, band 1: Write operation not permitted on dataset opened in read-only mode 0.. Block writing failed | No Error Message
Output file loaded in canvas | ERROR 1: C:\\Users\\asiddiqui\\Desktop\\plugin_test\\Runoff Local.tif, band 1: Write operation not permitted on dataset opened in read-only mode 0.. Block writing failed | [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\\\Users\\\\asiddiqui\\\\Desktop\\\\plugin_test8\\\\Runoff Local.tif' Process returned error code 0
### Steps to reproduce the issue
Overwrite a file using the GDAL raster calculator.
### Versions
QGIS version: 3.22.2-Białowieża
QGIS code revision: 1601ec46d0
Qt version: 5.15.2
Python version: 3.9.5
GDAL version: 3.4.0
GEOS version: 3.10.0-CAPI-1.16.0
PROJ version: Rel. 8.2.0, November 1st, 2021
PDAL version: 2.3.0 (git-version: 9f35b7)
QGIS version: 3.16.15-Hannover
QGIS code revision: e7fdad64
Qt version: 5.15.2
GDAL version: 3.4.0
GEOS version: 3.10.0-CAPI-1.16.0
PROJ version: Rel. 8.2.0, November 1st, 2021
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
This might be related to work on #42447 | process | gdal raster calculator writing files correctly but logging overwrite errors what is the bug or the crash when using the gdal raster calculator and overwriting the output file this is what is happening in different qgis and different use cases in overwrite argument is passed to gdal and not in gdal has successfully processed and written data despite messages in but not in note sometimes you have to refresh the map to see changes case output file not loaded in canvas but exist error c users asiddiqui desktop plugin test runoff local tif band write operation not permitted on dataset opened in read only mode block writing failed no error message output file loaded in canvas error c users asiddiqui desktop plugin test runoff local tif band write operation not permitted on dataset opened in read only mode block writing failed the process cannot access the file because it is being used by another process c users asiddiqui desktop plugin runoff local tif process returned error code steps to reproduce the issue overwrite a file using the gdal raster calculator versions qgis version białowieża qgis code revision qt version python version gdal version geos version capi proj version rel november pdal version git version qgis version hannover qgis code revision qt version gdal version geos version capi proj version rel november supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context this might be related to work on | 1 |
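The `[WinError 32]` in the 3.22.2 column is the classic Windows sharing violation: the destination file is still open (here, loaded in the canvas) while the algorithm tries to overwrite it. A common mitigation pattern for overwrites in general, shown here purely as an illustration and not as what QGIS/GDAL actually does, is to write to a temporary file and rename it over the target:

```python
import os
import tempfile

def write_atomically(path, data):
    """Write data to a temp file in the target's directory, then rename it
    over the target. os.replace() is atomic, so readers never observe a
    half-written file; on failure the temp file is cleaned up."""
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp, path)  # atomically overwrite the destination
    except BaseException:
        os.remove(tmp)
        raise
```

Note that even `os.replace` fails on Windows while another process holds the target open, which is why removing the layer from the project before overwriting its file remains the usual workaround.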
143,846 | 5,531,192,695 | IssuesEvent | 2017-03-21 06:26:44 | dhowe/AdNauseam | https://api.github.com/repos/dhowe/AdNauseam | closed | Add update button for adnauseam.txt only | Enhancement PRIORITY: Medium | we need a quick way for ourselves and users to quickly update this one list to the version in uAssets on github | 1.0 | Add update button for adnauseam.txt only - we need a quick way for ourselves and users to quickly update this one list to the version in uAssets on github | non_process | add update button for adnauseam txt only we need a quick way for ourselves and users to quickly update this one list to the version in uassets on github | 0 |
12,718 | 15,092,043,804 | IssuesEvent | 2021-02-06 17:58:00 | LodestoneHQ/lodestone | https://api.github.com/repos/LodestoneHQ/lodestone | opened | Thumbnail processor crash when thumbnails directory doesn't exist | area/processor priority/high type/bug | The thumbnail-processor service will crash with exit code 1 in the event that the `data/storage/thumbnails` directory doesn't exist.
| 1.0 | Thumbnail processor crash when thumbnails directory doesn't exist - The thumbnail-processor service will crash with exit code 1 in the event that the `data/storage/thumbnails` directory doesn't exist.
| process | thumbnail processor crash when thumbnails directory doesn t exist the thumbnail processor service will crash with exit code in the event that the data storage thumbnails directory doesn t exist | 1 |
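A defensive fix for this class of crash (a sketch of the general pattern, not necessarily the change Lodestone made) is to create the directory on startup, before any thumbnail is written:

```python
import os

def ensure_thumbnail_dir(thumb_dir):
    """Create the thumbnails directory (and any missing parents).
    exist_ok=True makes this safe to call on every service startup."""
    os.makedirs(thumb_dir, exist_ok=True)
    return thumb_dir
```

Calling this once with the configured `data/storage/thumbnails` path turns the missing-directory case into a no-op instead of an exit-code-1 crash.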
19,766 | 26,139,785,090 | IssuesEvent | 2022-12-29 16:43:35 | nodejs/node | https://api.github.com/repos/nodejs/node | reopened | Add Promise-based versions of child_process | child_process feature request promises | **Is your feature request related to a problem? Please describe.**
It would be nice if child_process functionality, such as `exec`, had Promised based versions by default. Currently it does not, see https://nodejs.org/api/child_process.html
**Describe the solution you'd like**
A new `child_process/promises` import path that can be used similar to how [fs promises](https://nodejs.org/api/fs.html#fs_promise_example) are used:
```js
// Using ESM Module syntax:
import { exec } from 'child_process/promises';
try {
const { stdout } = await exec(
'sysctl -n net.ipv4.ip_local_port_range'
);
console.log('successfully executed the child process command');
} catch (error) {
console.error('there was an error:', error.message);
}
```
**Describe alternatives you've considered**
I am already using `const exec = util.promisify(require('child_process').exec);` but that is not as nice. And this new proposal follows along with what is happening in other Node.js APIs.
| 1.0 | Add Promise-based versions of child_process - **Is your feature request related to a problem? Please describe.**
It would be nice if child_process functionality, such as `exec`, had Promised based versions by default. Currently it does not, see https://nodejs.org/api/child_process.html
**Describe the solution you'd like**
A new `child_process/promises` import path that can be used similar to how [fs promises](https://nodejs.org/api/fs.html#fs_promise_example) are used:
```js
// Using ESM Module syntax:
import { exec } from 'child_process/promises';
try {
const { stdout } = await exec(
'sysctl -n net.ipv4.ip_local_port_range'
);
console.log('successfully executed the child process command');
} catch (error) {
console.error('there was an error:', error.message);
}
```
**Describe alternatives you've considered**
I am already using `const exec = util.promisify(require('child_process').exec);` but that is not as nice. And this new proposal follows along with what is happening in other Node.js APIs.
| process | add promise based versions of child process is your feature request related to a problem please describe it would be nice if child process functionality such as exec had promised based versions by default currently it does not see describe the solution you d like a new child process promises import path that can be used similar to how are used js using esm module syntax import exec from child process promises try const stdout await exec sysctl n net ip local port range console log successfully executed the child process command catch error console error there was an error error message describe alternatives you ve considered i am already using const exec util promisify process exec but that is not as nice and this new proposal follows along what is happening in other node js apis | 1 |
19,201 | 25,337,827,321 | IssuesEvent | 2022-11-18 18:27:12 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | Add viral replication children to 'DNA/RNA biosynthetic process' | cell cycle and DNA processes missing parentage | 'GO:0032774 RNA biosynthetic process' has a sentence in the definition that states "Refers not only to transcription but also to e.g. viral RNA replication."
So,
- [ ] 'GO:0039694 viral RNA genome replication' should be is_a 'RNA biosynthetic process'
And therefore,
- [ ] 'GO:0039693 viral DNA genome replication' should be is_a 'DNA biosynthetic process'
Thanks, Pascale | 1.0 | Add viral replication children to 'DNA/RNA biosynthetic process' - 'GO:0032774 RNA biosynthetic process' has a sentence in the definition that states "Refers not only to transcription but also to e.g. viral RNA replication."
So,
- [ ] 'GO:0039694 viral RNA genome replication' should be is_a 'RNA biosynthetic process'
And therefore,
- [ ] 'GO:0039693 viral DNA genome replication' should be is_a 'DNA biosynthetic process'
Thanks, Pascale | process | add viral replication children to dna rna biosynthetic process go rna biosynthetic process has a sentence in the definition that states refers not only to transcription but also to e g viral rna replication so go viral rna genome replication should be is a rna biosynthetic process and therefore go viral dna genome replication should be is a dna biosynthetic process thanks pascale | 1 |
12,050 | 14,739,003,715 | IssuesEvent | 2021-01-07 06:15:35 | kdjstudios/SABillingGitlab | https://api.github.com/repos/kdjstudios/SABillingGitlab | closed | SA Billing - Accounts Missing Resource ID |Parent:1410 | anc-process anp-2 ant-child/secondary ant-support | In GitLab by @kdjstudios on Aug 21, 2018, 14:19
This will maintain all updates sent from my email to the plus ones and GMs to update accounts.
HD ticket: http://www.servicedesk.answernet.com/pages/30-mail | 1.0 | SA Billing - Accounts Missing Resource ID |Parent:1410 - In GitLab by @kdjstudios on Aug 21, 2018, 14:19
This will maintain all updates sent from my email to the plus ones and GMs to update accounts.
HD ticket: http://www.servicedesk.answernet.com/pages/30-mail | process | sa billing accounts missing resource id parent in gitlab by kdjstudios on aug this will maintain all updates received from me email to the plus ones and gms to update accounts hd ticket | 1 |
17,418 | 23,233,986,901 | IssuesEvent | 2022-08-03 10:04:40 | apache/arrow-rs | https://api.github.com/repos/apache/arrow-rs | closed | Windows / Mac and Coverage jobs are no longer running: arduino/setup-protoc@v1 is not allowed to be used in apache/arrow-rs | bug development-process | **Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
https://github.com/apache/arrow-rs/actions/runs/2781430582
```
Error: .github#L1arduino/setup-protoc@v1 is not allowed to be used in apache/arrow-rs. Actions in this workflow must be: within a repository owned by apache, created by GitHub, verified in the GitHub Marketplace, or matching the following: */*@[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]+, AdoptOpenJDK/install-jdk@*, JamesIves/github-pages-deploy-action@5dc1d5a192aeb5ab5b7d5a77b7d36aea4a7f5c92, TobKed/label-when-approved-action@*, actions-cool/issues-helper@*, actions-rs/*, al-cheb/configure-pagefile-action@*, amannn/action-semantic-pull-request@*, apache/*, burrunan/gradle-cache-action@*, bytedeco/javacpp-presets/.github/actions/*, chromaui/action@*, codecov/codecov-action@*, conda-incubator/setup-miniconda@*, container-tools/kind-action@*, container-tools/microshift-action@*, dawidd6/action-download-artifact@*, delaguardo/setup-graalvm@*, docker://jekyll/jekyll:*, docker://pandoc/core:2.9, eps1lon/actions-label-merge-conflict@*, gaurav-nelson/github-action-markdown-link-check@*, golang...
--
[Error: .github#L1](https://github.com/apache/arrow-rs/commit/ed9fc565f6b3bf0653ff342b523dbd1e2192d847#annotation_4184443006)
arduino/setup-protoc@v1 is not allowed to be used in apache/arrow-rs. Actions in this workflow must be: within a repository owned by apache, created by GitHub, verified in the GitHub Marketplace, or matching the following: */*@[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]+, AdoptOpenJDK/install-jdk@*, JamesIves/github-pages-deploy-action@5dc1d5a192aeb5ab5b7d5a77b7d36aea4a7f5c92, TobKed/label-when-approved-action@*, actions-cool/issues-helper@*, actions-rs/*, al-cheb/configure-pagefile-action@*, amannn/action-semantic-pull-request@*, apache/*, burrunan/gradle-cache-action@*, bytedeco/javacpp-presets/.github/actions/*, chromaui/action@*, codecov/codecov-action@*, conda-incubator/setup-miniconda@*, container-tools/kind-action@*, container-tools/microshift-action@*, dawidd6/action-download-artifact@*, delaguardo/setup-graalvm@*, docker://jekyll/jekyll:*, docker://pandoc/core:2.9, eps1lon/actions-label-merge-conflict@*, gaurav-nelson/github-action-markdown-link-check@*, golang...
```
This results in getting emails like this:
**To Reproduce**
<!--
Steps to reproduce the behavior:
-->
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Additional context**
<!--
Add any other context about the problem here.
--> | 1.0 | Windows / Mac and Coverage jobs are no longer running: arduino/setup-protoc@v1 is not allowed to be used in apache/arrow-rs - **Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
https://github.com/apache/arrow-rs/actions/runs/2781430582
```
Error: .github#L1arduino/setup-protoc@v1 is not allowed to be used in apache/arrow-rs. Actions in this workflow must be: within a repository owned by apache, created by GitHub, verified in the GitHub Marketplace, or matching the following: */*@[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]+, AdoptOpenJDK/install-jdk@*, JamesIves/github-pages-deploy-action@5dc1d5a192aeb5ab5b7d5a77b7d36aea4a7f5c92, TobKed/label-when-approved-action@*, actions-cool/issues-helper@*, actions-rs/*, al-cheb/configure-pagefile-action@*, amannn/action-semantic-pull-request@*, apache/*, burrunan/gradle-cache-action@*, bytedeco/javacpp-presets/.github/actions/*, chromaui/action@*, codecov/codecov-action@*, conda-incubator/setup-miniconda@*, container-tools/kind-action@*, container-tools/microshift-action@*, dawidd6/action-download-artifact@*, delaguardo/setup-graalvm@*, docker://jekyll/jekyll:*, docker://pandoc/core:2.9, eps1lon/actions-label-merge-conflict@*, gaurav-nelson/github-action-markdown-link-check@*, golang...
--
[Error: .github#L1](https://github.com/apache/arrow-rs/commit/ed9fc565f6b3bf0653ff342b523dbd1e2192d847#annotation_4184443006)
arduino/setup-protoc@v1 is not allowed to be used in apache/arrow-rs. Actions in this workflow must be: within a repository owned by apache, created by GitHub, verified in the GitHub Marketplace, or matching the following: */*@[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]+, AdoptOpenJDK/install-jdk@*, JamesIves/github-pages-deploy-action@5dc1d5a192aeb5ab5b7d5a77b7d36aea4a7f5c92, TobKed/label-when-approved-action@*, actions-cool/issues-helper@*, actions-rs/*, al-cheb/configure-pagefile-action@*, amannn/action-semantic-pull-request@*, apache/*, burrunan/gradle-cache-action@*, bytedeco/javacpp-presets/.github/actions/*, chromaui/action@*, codecov/codecov-action@*, conda-incubator/setup-miniconda@*, container-tools/kind-action@*, container-tools/microshift-action@*, dawidd6/action-download-artifact@*, delaguardo/setup-graalvm@*, docker://jekyll/jekyll:*, docker://pandoc/core:2.9, eps1lon/actions-label-merge-conflict@*, gaurav-nelson/github-action-markdown-link-check@*, golang...
```
This results in getting emails like this:
**To Reproduce**
<!--
Steps to reproduce the behavior:
-->
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Additional context**
<!--
Add any other context about the problem here.
--> | process | windows mac and coverage jobs are no longer running arduino setup protoc is not allowed to be used in apache arrow rs describe the bug a clear and concise description of what the bug is error github setup protoc is not allowed to be used in apache arrow rs actions in this workflow must be within a repository owned by apache created by github verified in the github marketplace or matching the following adoptopenjdk install jdk jamesives github pages deploy action tobked label when approved action actions cool issues helper actions rs al cheb configure pagefile action amannn action semantic pull request apache burrunan gradle cache action bytedeco javacpp presets github actions chromaui action codecov codecov action conda incubator setup miniconda container tools kind action container tools microshift action action download artifact delaguardo setup graalvm docker jekyll jekyll docker pandoc core actions label merge conflict gaurav nelson github action markdown link check golang arduino setup protoc is not allowed to be used in apache arrow rs actions in this workflow must be within a repository owned by apache created by github verified in the github marketplace or matching the following adoptopenjdk install jdk jamesives github pages deploy action tobked label when approved action actions cool issues helper actions rs al cheb configure pagefile action amannn action semantic pull request apache burrunan gradle cache action bytedeco javacpp presets github actions chromaui action codecov codecov action conda incubator setup miniconda container tools kind action container tools microshift action action download artifact delaguardo setup graalvm docker jekyll jekyll docker pandoc core actions label merge conflict gaurav nelson github action markdown link check golang this results in getting emails like this to reproduce steps to reproduce the behavior expected behavior a clear and concise description of what you expected to happen additional context add 
any other context about the problem here | 1 |
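The allowlist quoted in the error is pattern-based: `arduino/setup-protoc@v1` matches none of the patterns, while the `*/*@[a-f0-9]...` entry means the same action would be accepted if pinned to a full commit SHA. A rough sketch with two simplified stand-in patterns (not ASF's actual policy engine):

```python
import re

# Two simplified stand-ins for entries in the quoted allowlist:
# anything under apache/, or any owner/action pinned to a 7+ char hex SHA.
ALLOWED_PATTERNS = [r"apache/.*", r"[^/]+/[^@]+@[a-f0-9]{7,}"]

def is_allowed(action_ref):
    return any(re.fullmatch(p, action_ref) for p in ALLOWED_PATTERNS)

print(is_allowed("arduino/setup-protoc@v1"))           # False: floating tag
print(is_allowed("arduino/setup-protoc@" + "a" * 40))  # True: SHA-pinned
```

This is why pinning third-party actions to a commit SHA is the usual remedy for this class of CI failure.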
15,761 | 19,912,513,531 | IssuesEvent | 2022-01-25 18:39:35 | MunchBit/MunchLove | https://api.github.com/repos/MunchBit/MunchLove | opened | Based on Restaurants payments method | feature Payment Process | **Title**
Based on Restaurants payments method
**Description**
Based on the restaurant's payment method, include a toggle controlled by the MunchLove Admin to change the default payment so that each sale's money is sent straight from the consumer's wallet into the Restaurant Administrator's payment account. For example, Consumer Y purchases a meal costing £10. Of that £10, £9 goes straight into Restaurant X's account and £1 (or a configured value/percent) goes into MunchLove's account.
| 1.0 | Based on Restaurants payments method - **Title**
Based on Restaurants payments method
**Description**
Based on the restaurant's payment method, include a toggle controlled by the MunchLove Admin to change the default payment so that each sale's money is sent straight from the consumer's wallet into the Restaurant Administrator's payment account. For example, Consumer Y purchases a meal costing £10. Of that £10, £9 goes straight into Restaurant X's account and £1 (or a configured value/percent) goes into MunchLove's account.
| process | based on restaurants payments method title based on restaurants payments method description based on restaurants payments method include a toggle controlled by munchlove admin to change the default payment to send restaurant administrators money of each sale straight into their payment account from the consumers wallet for example consumer y purchases a meal costing £ of that £ £ goes straight into restaurants x s account £ or a configured value percent goes into munchloves account | 1 |
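The split in the example is simple arithmetic; a sketch with the platform share as a configurable percentage (the example's £1 out of £10 is 10%), working in pence to avoid floating-point issues:

```python
def split_sale(total_pence, platform_percent=10):
    """Return (restaurant_share, platform_share) in pence; platform share rounds down."""
    platform_share = total_pence * platform_percent // 100
    return total_pence - platform_share, platform_share

print(split_sale(1000))  # (900, 100): £9 to Restaurant X, £1 to MunchLove
```

Integer division means the two shares always sum back to the original total, with any rounding remainder going to the restaurant.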
10,984 | 13,783,479,428 | IssuesEvent | 2020-10-08 19:17:24 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Processing “eliminate selected polygons” doesn't work with PyQGIS | Bug Feedback Processing | QGIS 3.14.15 on Windows 10 64bit
I try to use the processing-tool qgis:eliminateselectedpolygons in a script:
tempvlayer = QgsVectorLayer(inshape, "tempLayer", "ogr")
QgsProject.instance().addMapLayer(tempvlayer)
processing.run("qgis:selectbyattribute", {'INPUT':tempvlayer,'FIELD':'area','OPERATOR':4,'VALUE':'5','METHOD':0})
processing.run("qgis:eliminateselectedpolygons", {'INPUT':tempvlayer,'MODE':2,'OUTPUT':outshapefile})
It seems that the selected features aren't taken into account.
Is this a bug?
See also https://stackoverflow.com/questions/62938288/function-eliminate-selected-polygons-doesnt-work-with-pyqgis
Thanks
Peter
| 1.0 | Processing “eliminate selected polygons” doesn't work with PyQGIS - QGIS 3.14.15 on Windows 10 64bit
I try to use the processing-tool qgis:eliminateselectedpolygons in a script:
tempvlayer = QgsVectorLayer(inshape, "tempLayer", "ogr")
QgsProject.instance().addMapLayer(tempvlayer)
processing.run("qgis:selectbyattribute", {'INPUT':tempvlayer,'FIELD':'area','OPERATOR':4,'VALUE':'5','METHOD':0})
processing.run("qgis:eliminateselectedpolygons", {'INPUT':tempvlayer,'MODE':2,'OUTPUT':outshapefile})
It seems that the selected features aren't taken into account.
Is this a bug?
See also https://stackoverflow.com/questions/62938288/function-eliminate-selected-polygons-doesnt-work-with-pyqgis
Thanks
Peter
| process | processing “eliminate selected polygons” doesn t work with pyqgis qqgis on windows i try to use the processing tool qgis eliminateselectedpolygons in a script tempvlayer qgsvectorlayer inshape templayer ogr qgsproject instance addmaplayer tempvlayer processing run qgis selectbyattribute input tempvlayer field area operator value method processing run qgis eliminateselectedpolygons input tempvlayer mode output outshapefile it seems that the the selected features aren t taken into account is this a bug see also thanks peter | 1 |
12,031 | 14,738,600,767 | IssuesEvent | 2021-01-07 05:13:35 | kdjstudios/SABillingGitlab | https://api.github.com/repos/kdjstudios/SABillingGitlab | closed | 516@anjanabro.com - Undelivered Mail Returned to Sender | anc-process anp-1.5 ant-support | In GitLab by @kdjstudios on Jun 27, 2018, 09:49
**Submitted by:** NA
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-06-16-83487/conversation
**Server:** NA
**Client/Site:** NA
**Account:** NA
**Issue:**
This is the mail system at host ann200mail03.answernet.com.
I'm sorry to have to inform you that your message could not
be delivered to one or more recipients. It's attached below.
For further assistance, please send mail to postmaster.
If you do so, please include this problem report. You can
delete your own text from the attached returned message.
The mail system
<516@anjanabro.com>: Host or domain name not found. Name service error for
name=anjanabro.com type=A: Host found but no data record of requested type | 1.0 | 516@anjanabro.com - Undelivered Mail Returned to Sender - In GitLab by @kdjstudios on Jun 27, 2018, 09:49
**Submitted by:** NA
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-06-16-83487/conversation
**Server:** NA
**Client/Site:** NA
**Account:** NA
**Issue:**
This is the mail system at host ann200mail03.answernet.com.
I'm sorry to have to inform you that your message could not
be delivered to one or more recipients. It's attached below.
For further assistance, please send mail to postmaster.
If you do so, please include this problem report. You can
delete your own text from the attached returned message.
The mail system
<516@anjanabro.com>: Host or domain name not found. Name service error for
name=anjanabro.com type=A: Host found but no data record of requested type | process | anjanabro com undelivered mail returned to sender in gitlab by kdjstudios on jun submitted by na helpdesk server na client site na account na issue this is the mail system at host answernet com i m sorry to have to inform you that your message could not be delivered to one or more recipients it s attached below for further assistance please send mail to postmaster if you do so please include this problem report you can delete your own text from the attached returned message the mail system host or domain name not found name service error for name anjanabro com type a host found but no data record of requested type | 1 |
206,977 | 15,785,758,970 | IssuesEvent | 2021-04-01 16:46:32 | rancher/dashboard | https://api.github.com/repos/rancher/dashboard | closed | Dashboard AzureAD authentication with standard and custom endpoint is not working | [zube]: To Test kind/bug | Rancher | v2.5-4e48b8a9fbedad498fa21fd00e6e426ab3e13770-head
From the cluster explorer, Auth providers and Users --> AzureAD
- Add the details and select standard from endpoints
- Click on enable and users are redirected to the credential page the redirected URL
- In the redirected URL, even before we provide credentials, we see the following error:
<img width="400" alt="Screen Shot 2021-03-15 at 3 14 14 PM" src="https://user-images.githubusercontent.com/38144301/111228287-3035e780-85a1-11eb-9f34-948740c176d9.png">
From the cluster explorer, Auth providers and Users --> AzureAD
- Add the details and select custom from endpoints
- add all details and click on enable, user is redirected to the credential page.
- Once we click on save, we see the following issue
<img width="400" alt="Screen Shot 2021-03-15 at 3 08 16 PM" src="https://user-images.githubusercontent.com/38144301/111228310-3af07c80-85a1-11eb-928d-7a843f2dec13.png">
`window.opener.ls is not a function` | 1.0 | Dashboard AzureAD authentication with standard and custom endpoint is not working - Rancher | v2.5-4e48b8a9fbedad498fa21fd00e6e426ab3e13770-head
From the cluster explorer, Auth providers and Users --> AzureAD
- Add the details and select standard from endpoints
- Click on enable and users are redirected to the credential page the redirected URL
- In the redirected URL, even before we provide credentials, we see the following error:
<img width="400" alt="Screen Shot 2021-03-15 at 3 14 14 PM" src="https://user-images.githubusercontent.com/38144301/111228287-3035e780-85a1-11eb-9f34-948740c176d9.png">
From the cluster explorer, Auth providers and Users --> AzureAD
- Add the details and select custom from endpoints
- Add all details and click on enable; the user is redirected to the credential page.
- Once we click on save, we see the following issue
<img width="400" alt="Screen Shot 2021-03-15 at 3 08 16 PM" src="https://user-images.githubusercontent.com/38144301/111228310-3af07c80-85a1-11eb-928d-7a843f2dec13.png">
`
window.opener.ls is not a function` | non_process | dashboard azuread authentication with standard and custom endpoint is not working rancher head from the cluster explorer auth providers and users azuread add the details and select standard from endpoints click on enable and users are redirected to the credential page the redirected url in the redirected url even before we provide credentails we see the following error img width alt screen shot at pm src from the cluster explorer auth providers and users azuread add the details and select custom from endpoints add all details and click on enable user is redirected to the credential page once we click on save we see the following issue img width alt screen shot at pm src window opener ls is not a function | 0 |
15,636 | 19,807,066,335 | IssuesEvent | 2022-01-19 08:13:13 | 2i2c-org/team-compass | https://api.github.com/repos/2i2c-org/team-compass | closed | Define team process around when and how to share a grafana API key for deploying dashboards | :label: team-process type: task | # Summary
When deploying grafana dashboards using [jupyterhub/grafana-dashboards](https://github.com/jupyterhub/grafana-dashboards), an API key is created to do the deployment. Should we keep this key for others to use? If so, where?
# Original Context
> What scenario do we envision others needing to use [the API key] once the grafana dashboards have been deployed? We need to figure that out and define that process
>
> _Originally posted by @sgibson91 in https://github.com/2i2c-org/pilot-hubs/pull/550#discussion_r675601491_
# Actions
- [ ] Define a set of scenarios where an engineer may need access to a grafana API key after the initial deployment of the grafana dashboards
- [ ] Decide where and how to store these keys with appropriate metadata about which cluster they relate to | 1.0 | Define team process around when and how to share a grafana API key for deploying dashboards - # Summary
When deploying grafana dashboards using [jupyterhub/grafana-dashboards](https://github.com/jupyterhub/grafana-dashboards), an API key is created to do the deployment. Should we keep this key for others to use? If so, where?
# Original Context
> What scenario do we envision others needing to use [the API key] once the grafana dashboards have been deployed? We need to figure that out and define that process
>
> _Originally posted by @sgibson91 in https://github.com/2i2c-org/pilot-hubs/pull/550#discussion_r675601491_
# Actions
- [ ] Define a set of scenarios where an engineer may need access to a grafana API key after the initial deployment of the grafana dashboards
- [ ] Decide where and how to store these keys with appropriate metadata about which cluster they relate to | process | define team process around when and how to share a grafana api key for deploying dashboards summary when deploying grafana dashboards using an api key is created to do the deployment should we keep this key for others to use if so where original context what scenario do we envision others needing to use once the grafana dashboards have been deployed we need to figure that out and define that process originally posted by in actions define a set of scenarios where an engineer may need access to a grafana api key after the initial deployment of the grafana dashboards decide where and how to store these keys with appropriate metadata about which cluster they relate to | 1 |
261,052 | 19,696,402,477 | IssuesEvent | 2022-01-12 12:37:22 | dekorateio/dekorate | https://api.github.com/repos/dekorateio/dekorate | opened | [new site] Document Helm in new site | documentation | Helm will be supported by Dekorate after https://github.com/dekorateio/dekorate/pull/841 is merged.
We need to document its usage and provide a getting-started guide for users. | 1.0 | [new site] Document Helm in new site - Helm will be supported by Dekorate after https://github.com/dekorateio/dekorate/pull/841 is merged.
We need to document its usage and provide a getting-started guide for users. | non_process | document helm in new site helm will be supported by dekorate after is merged we need to document the usage and the user getting started | 0
10,998 | 13,788,440,463 | IssuesEvent | 2020-10-09 07:12:57 | bisq-network/bisq | https://api.github.com/repos/bisq-network/bisq | closed | Bisq nodes leak TXID of every trade when TradeStatistics are generated | in:trade-process re:privacy | ### Background
When a Bisq trade offer is accepted, each Bisq node participating in the trade creates a TradeStatistics data object and broadcasts it to the P2P network. This trade statistics data is used by every Bisq node to generate trading volume graphs, price charts, and is also available on the Bisq Markets API service.
<img width="2557" alt="Screen Shot 2020-01-12 at 20 23 11" src="https://user-images.githubusercontent.com/232186/72218023-6a5cc100-3579-11ea-8599-9274b6eb2fb2.png">
### Issue
The TradeStatistics2 object contains excessive metadata about the trade, specifically the on-chain TXID of the maker's deposit. Unfortunately, because the offerId of every Bisq trade is mapped to the on-chain Bitcoin depositTxID, this allows malicious blockchain analysis of all Bisq trades.
Example data object:
```
{
"currency": "JPY",
"direction": "SELL",
"tradePrice": 8791986900,
"tradeAmount": 10000,
"tradeDate": 1578784489588,
"paymentMethod": "F2F",
"offerDate": 1578784398352,
"useMarketBasedPrice": true,
"marketPriceMargin": 0.0,
"offerAmount": 10000,
"offerMinAmount": 10000,
"offerId": "12635-224f7143-3366-46e7-9e14-7fa6f39fcb2b-125",
"depositTxId": "9c67453e57cfc80e2c121caf54f8f739cef6c5d7e9afdceec7843436a920f9d8",
"currencyPair": "BTC/JPY",
"primaryMarketDirection": "SELL",
"primaryMarketTradePrice": 87919869000000,
"primaryMarketTradeAmount": 10000,
"primaryMarketTradeVolume": 8791980000
}
```
Example blockchain analysis of this trade:
https://blockstream.info/tx/9c67453e57cfc80e2c121caf54f8f739cef6c5d7e9afdceec7843436a920f9d8?expand
### How to Reproduce
1. Start Bisq with `--dumpStatistics=true` option enabled
2. After a few minutes, a `trade_statistics.db` file will be generated in your `$HOME/.bisq/btc_mainnet/db/` datadir.
3. Extract the mapping of offer ID and deposit TXID by `grep Id trade_statistics.json`
4. Paste any Bitcoin TXID into any Bitcoin Block Explorer
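The mapping in step 3 can also be built programmatically. Below is a minimal sketch, assuming the dump is a JSON array of objects carrying the `offerId` and `depositTxId` fields shown above (the two sample records are taken from the snippet in the Actual Result section; the exact on-disk layout of `trade_statistics.json` may differ):

```python
import json

# Two sample records shaped like the TradeStatistics2 objects shown above.
# Assumption: the dumped trade_statistics.json is a JSON array of such objects;
# the real on-disk layout may differ.
dump = """
[
  {"offerId": "f5701917-1858-44f5-a81b-874c83c965f9",
   "depositTxId": "23f8dd12c6f772f9cf48eb586192d0852b7c001f9b52853eb2745c50085e7aad"},
  {"offerId": "8f52b851-ab30-45de-9b00-978c6c1320d2",
   "depositTxId": "c72d6f8816edd0d914988ee51f9cacc46cded48aff5b8bfebc0e3b04d6e30d77"}
]
"""

def offer_to_txid(records):
    """Map each Bisq offer ID to its on-chain deposit TXID."""
    return {r["offerId"]: r["depositTxId"] for r in records if "depositTxId" in r}

mapping = offer_to_txid(json.loads(dump))
for offer_id, txid in mapping.items():
    print(offer_id, "->", txid)
```

Each resulting TXID can be pasted into any block explorer, which is exactly the on-chain linkage this issue argues the statistics object should not expose.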
### Expected Result
Bisq should not reveal the on-chain Bitcoin TXID for each trade.
### Actual Result
A full mapping of offer IDs to Bitcoin TXIDs for the past 50,000 trades on Bisq is generated. Snippet:
```
"depositTxId": "23f8dd12c6f772f9cf48eb586192d0852b7c001f9b52853eb2745c50085e7aad",
"offerId": "f5701917-1858-44f5-a81b-874c83c965f9",
"depositTxId": "c72d6f8816edd0d914988ee51f9cacc46cded48aff5b8bfebc0e3b04d6e30d77",
"offerId": "8f52b851-ab30-45de-9b00-978c6c1320d2",
"depositTxId": "4352525005912cad0af9b32ed131f5856f4f72add3b7e67fb8ed4a263f0ae00f",
"offerId": "b96da749-0910-4870-8c43-ffa0d6e5c15a",
"depositTxId": "0b76f73006b94fb69e2a4ac4e9cea25bc5a0af08ed1aadd4f3769053f14a326e",
"offerId": "940fd072-66de-405a-86a9-abf693c98146",
"depositTxId": "e251355d683b7e611fe85c03db64eb965402e53e7568ea652230acaef908ff56",
"offerId": "0f6ff881-7f13-4654-bc0b-3267fc99021a",
"depositTxId": "6a5001d1392e877f0c7058c76e9af01913143751690f2990842526b61ec30cda",
"offerId": "9de779ff-5e94-46a6-aa93-4dde1d49b6de",
"depositTxId": "6ba5e8d42814ea27d01c62eec1e1c8543a7627c19e282632a05fdae8e1df1b1e",
"offerId": "75edc3db-6dea-4ed1-b33a-e998765e8605",
"depositTxId": "be059d21e287e10876aa3e29ddad55455645cd4c3996f71d945c7d788bb4383c",
"offerId": "dce8c43e-1a91-4c98-8fdd-5776898589ed",
"depositTxId": "656ea12e55c31ed96e43de32c53155387bc08ba2d0be708bac3bda6b4682fbbe",
"offerId": "57a68fbd-26cc-4f8d-8f0f-4114e09cc57c",
"depositTxId": "f078d4191545a79b7dad6393648a63cf8b9bf337bcb43a84343a6fd923c10585",
``` | 1.0 | Bisq nodes leak TXID of every trade when TradeStatistics are generated - ### Background
When a Bisq trade offer is accepted, each Bisq node participating in the trade creates a TradeStatistics data object and broadcasts it to the P2P network. This trade statistics data is used by every Bisq node to generate trading volume graphs, price charts, and is also available on the Bisq Markets API service.
<img width="2557" alt="Screen Shot 2020-01-12 at 20 23 11" src="https://user-images.githubusercontent.com/232186/72218023-6a5cc100-3579-11ea-8599-9274b6eb2fb2.png">
### Issue
The TradeStatistics2 object contains excessive metadata about the trade, specifically the on-chain TXID of the maker's deposit. Unfortunately, because the offerId of every Bisq trade is mapped to the on-chain Bitcoin depositTxID, this allows malicious blockchain analysis of all Bisq trades.
Example data object:
```
{
"currency": "JPY",
"direction": "SELL",
"tradePrice": 8791986900,
"tradeAmount": 10000,
"tradeDate": 1578784489588,
"paymentMethod": "F2F",
"offerDate": 1578784398352,
"useMarketBasedPrice": true,
"marketPriceMargin": 0.0,
"offerAmount": 10000,
"offerMinAmount": 10000,
"offerId": "12635-224f7143-3366-46e7-9e14-7fa6f39fcb2b-125",
"depositTxId": "9c67453e57cfc80e2c121caf54f8f739cef6c5d7e9afdceec7843436a920f9d8",
"currencyPair": "BTC/JPY",
"primaryMarketDirection": "SELL",
"primaryMarketTradePrice": 87919869000000,
"primaryMarketTradeAmount": 10000,
"primaryMarketTradeVolume": 8791980000
}
```
Example blockchain analysis of this trade:
https://blockstream.info/tx/9c67453e57cfc80e2c121caf54f8f739cef6c5d7e9afdceec7843436a920f9d8?expand
### How to Reproduce
1. Start Bisq with `--dumpStatistics=true` option enabled
2. After a few minutes, a `trade_statistics.db` file will be generated in your `$HOME/.bisq/btc_mainnet/db/` datadir.
3. Extract the mapping of offer ID and deposit TXID by `grep Id trade_statistics.json`
4. Paste any Bitcoin TXID into any Bitcoin Block Explorer
### Expected Result
Bisq should not reveal the on-chain Bitcoin TXID for each trade.
### Actual Result
A full mapping of offer IDs to Bitcoin TXIDs for the past 50,000 trades on Bisq is generated. Snippet:
```
"depositTxId": "23f8dd12c6f772f9cf48eb586192d0852b7c001f9b52853eb2745c50085e7aad",
"offerId": "f5701917-1858-44f5-a81b-874c83c965f9",
"depositTxId": "c72d6f8816edd0d914988ee51f9cacc46cded48aff5b8bfebc0e3b04d6e30d77",
"offerId": "8f52b851-ab30-45de-9b00-978c6c1320d2",
"depositTxId": "4352525005912cad0af9b32ed131f5856f4f72add3b7e67fb8ed4a263f0ae00f",
"offerId": "b96da749-0910-4870-8c43-ffa0d6e5c15a",
"depositTxId": "0b76f73006b94fb69e2a4ac4e9cea25bc5a0af08ed1aadd4f3769053f14a326e",
"offerId": "940fd072-66de-405a-86a9-abf693c98146",
"depositTxId": "e251355d683b7e611fe85c03db64eb965402e53e7568ea652230acaef908ff56",
"offerId": "0f6ff881-7f13-4654-bc0b-3267fc99021a",
"depositTxId": "6a5001d1392e877f0c7058c76e9af01913143751690f2990842526b61ec30cda",
"offerId": "9de779ff-5e94-46a6-aa93-4dde1d49b6de",
"depositTxId": "6ba5e8d42814ea27d01c62eec1e1c8543a7627c19e282632a05fdae8e1df1b1e",
"offerId": "75edc3db-6dea-4ed1-b33a-e998765e8605",
"depositTxId": "be059d21e287e10876aa3e29ddad55455645cd4c3996f71d945c7d788bb4383c",
"offerId": "dce8c43e-1a91-4c98-8fdd-5776898589ed",
"depositTxId": "656ea12e55c31ed96e43de32c53155387bc08ba2d0be708bac3bda6b4682fbbe",
"offerId": "57a68fbd-26cc-4f8d-8f0f-4114e09cc57c",
"depositTxId": "f078d4191545a79b7dad6393648a63cf8b9bf337bcb43a84343a6fd923c10585",
``` | process | bisq nodes leak txid of every trade when tradestatistics are generated background when a bisq trade offer is accepted each bisq node participating in the trade creates a tradestatistics data object and broadcasts it to the network this trade statistics data is used by every bisq node to generate trading volume graphs price charts and is also available on the bisq markets api service img width alt screen shot at src issue the object contains excessive metadata about the trade specifically the on chain txid of the maker s deposit unfortunately because the offerid of every bisq trade is mapped to the on chain bitcoin deposittxid this allows malicious blockchain analysis of all bisq trades example data object currency jpy direction sell tradeprice tradeamount tradedate paymentmethod offerdate usemarketbasedprice true marketpricemargin offeramount offerminamount offerid deposittxid currencypair btc jpy primarymarketdirection sell primarymarkettradeprice primarymarkettradeamount primarymarkettradevolume example blockchain analysis of this trade how to reproduce start bisq with dumpstatistics true option enabled after a few minutes a trade statistics db file will be generated in your home bisq btc mainnet db datadir extract the mapping of offer id and deposit txid by grep id trade statistics json paste any bitcoin txid into any bitcoin block explorer expected result bisq should not reveal the on chain bitcoin txid for each trade actual result a full mapping of offer ids to bitcoin txids for the past trades on bisq is generated snippet deposittxid offerid deposittxid offerid deposittxid offerid deposittxid offerid deposittxid offerid deposittxid offerid deposittxid offerid deposittxid offerid deposittxid offerid deposittxid | 1 |
13,569 | 16,107,051,074 | IssuesEvent | 2021-04-27 16:05:16 | carbon-design-system/ibm-cloud-cognitive | https://api.github.com/repos/carbon-design-system/ibm-cloud-cognitive | closed | Prevent publication of sub-components | type: bug type: process improvement | Sub-components that are created while developing other patterns should not be published for use by product teams.
At release review, add a step to encourage developers to discuss components that could be broken out.
Any such sub-components should not get their own top-level storybook folder. Perhaps a hidden dev storybook folder or a folder named "Internals" could be used.
Any such sub-component should not be published directly on npm e.g. not in the index.js and/or obscured in some other way. | 1.0 | Prevent publication of sub-components - Sub-components that are created while developing other patterns should not be published for use by product teams.
At release review, add a step to encourage developers to discuss components that could be broken out.
Any such sub-components should not get their own top-level storybook folder. Perhaps a hidden dev storybook folder or a folder named "Internals" could be used.
Any such sub-component should not be published directly on npm e.g. not in the index.js and/or obscured in some other way. | process | prevent publication of sub components sub components that are created while developing other patters should not be published for use by product teams at release review add a step to encourage developers to discuss components that could be broken out any such sub components should not get their own top level storybook folder perhaps a hidden dev storybook folder or a folder named internals could be used any such sub component should not be published directly on npm e g not in the index js and or obscured in some other way | 1 |
48,834 | 3,000,348,155 | IssuesEvent | 2015-07-24 00:42:38 | USGCRP/gcis-ontology | https://api.github.com/repos/USGCRP/gcis-ontology | closed | C1. iterate on activity to dataset relationships | high-priority question | Check entries under http://data.globalchange.gov/activity and create terms accordingly, relating to other datasets.
Need something for
- [ ] computing environment
- [ ] data usage
- [ ] names of input/output data files
- [ ] etc. (?)
Maybe relate to pre-existing DBpedia entries already in use in the turtle there.
See https://data.globalchange.gov/activity/0158fa86-nca3-ghcn-daily-r201305-process.ttl for an example of how we have tackled this problem to date.
```
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix dbpedia_owl: <http://dbpedia.org/ontology/> .
@prefix gcis: <http://data.globalchange.gov/gcis.owl#> .
@prefix meth: <http://sweet.jpl.nasa.gov/2.3/reprSciMethodology.owl#> .
<http://data.globalchange.gov/activity/0158fa86-nca3-ghcn-daily-r201305-process>
dcterms:identifier "0158fa86-nca3-ghcn-daily-r201305-process";
dcterms:description "Decadal average anomalies for the 99th percentile of precipitation (difference between the decade and the 1901-1960 average precipitation) for the Northwest region were plotted as a bar graph. Note: the far right bar contains data for 12 years (2001-2012)."^^xsd:string;
## The activity began and ended at the following times
## Duration of the activity
dcterms:SizeOrDuration "6 hours"^^xsd:string;
## Output datafiles
dbpedia_owl:filename "getprcpextremes99perc_May03_2013.f95\r\nprcpextremes99perc_1901_2012_May03_2013.txt \r\ngridaverage_99per_regions_May03_2013.f95\r\ngridprcpextremes99perc_regions_1901_2012_May03_2013.txt\r\ngridprecipextremes99perc_regions_1901_2012_May03_2013.csv\r\nperc99_decadal_barchart.pro\r\n99th_perc_anom_decadal_values_1901-2012.txt\r\nNW_99th_perc_anom_pct_1901_2012.eps\r\n2-17_nw.png\r\nCS_Extreme Heavy precipitation_v7.png"^^xsd:string;
## Software utilized
gcis:Software "Fortran 95; lf95 compiler (Lahey/Fujitsu Linux64 Fortran compiler release L8.10b); IDL (version 8.0)"^^xsd:string;
## Computing environment
dcterms:InteractiveResource "Linux (CentOS release 6.4); Mac OS X (darwin x86_64 m64)"^^xsd:string;
## Methodology employed
meth:Methodology "First, GHCN-D stations with minimal (less than 10%) missing precipitation data were identified. For each station the 99th percentile threshold of daily precipitation was determined using the entire period of record. Next, the total precipitation falling on days exceeding the 99th percentile threshold was calculated for each year. Then, grid box average values were calculated for each year, by averaging the values for each station available in that grid box. The annual values were then averaged for all grid boxes containing data in the Northwest region. Decadal averages were then calculated. Finally the 1901-1960 99th percentile average amount was subtracted from the decadal average amount, and a percentage change was calculated."^^xsd:string;
a prov:Activity .
## The following entity was derived from a dataset using this activity
<http://data.globalchange.gov/image/0158fa86-481b-4a0b-8a79-4fd56b553cfd>
a gcis:Image;
prov:wasDerivedFrom <http://data.globalchange.gov/dataset/nca3-ghcn-daily-r201305>;
prov:wasGeneratedBy <http://data.globalchange.gov/activity/0158fa86-nca3-ghcn-daily-r201305-process>.
``` | 1.0 | C1. iterate on activity to dataset relationships - Check entries under http://data.globalchange.gov/activity and create terms accordingly, relating to other datasets.
Need something for
- [ ] computing environment
- [ ] data usage
- [ ] names of input/output data files
- [ ] etc. (?)
Maybe relate to pre-existing DBpedia entries already in use in the turtle there.
See https://data.globalchange.gov/activity/0158fa86-nca3-ghcn-daily-r201305-process.ttl for an example of how we have tackled this problem to date.
```
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix dbpedia_owl: <http://dbpedia.org/ontology/> .
@prefix gcis: <http://data.globalchange.gov/gcis.owl#> .
@prefix meth: <http://sweet.jpl.nasa.gov/2.3/reprSciMethodology.owl#> .
<http://data.globalchange.gov/activity/0158fa86-nca3-ghcn-daily-r201305-process>
dcterms:identifier "0158fa86-nca3-ghcn-daily-r201305-process";
dcterms:description "Decadal average anomalies for the 99th percentile of precipitation (difference between the decade and the 1901-1960 average precipitation) for the Northwest region were plotted as a bar graph. Note: the far right bar contains data for 12 years (2001-2012)."^^xsd:string;
## The activity began and ended at the following times
## Duration of the activity
dcterms:SizeOrDuration "6 hours"^^xsd:string;
## Output datafiles
dbpedia_owl:filename "getprcpextremes99perc_May03_2013.f95\r\nprcpextremes99perc_1901_2012_May03_2013.txt \r\ngridaverage_99per_regions_May03_2013.f95\r\ngridprcpextremes99perc_regions_1901_2012_May03_2013.txt\r\ngridprecipextremes99perc_regions_1901_2012_May03_2013.csv\r\nperc99_decadal_barchart.pro\r\n99th_perc_anom_decadal_values_1901-2012.txt\r\nNW_99th_perc_anom_pct_1901_2012.eps\r\n2-17_nw.png\r\nCS_Extreme Heavy precipitation_v7.png"^^xsd:string;
## Software utilized
gcis:Software "Fortran 95; lf95 compiler (Lahey/Fujitsu Linux64 Fortran compiler release L8.10b); IDL (version 8.0)"^^xsd:string;
## Computing environment
dcterms:InteractiveResource "Linux (CentOS release 6.4); Mac OS X (darwin x86_64 m64)"^^xsd:string;
## Methodology employed
meth:Methodology "First, GHCN-D stations with minimal (less than 10%) missing precipitation data were identified. For each station the 99th percentile threshold of daily precipitation was determined using the entire period of record. Next, the total precipitation falling on days exceeding the 99th percentile threshold was calculated for each year. Then, grid box average values were calculated for each year, by averaging the values for each station available in that grid box. The annual values were then averaged for all grid boxes containing data in the Northwest region. Decadal averages were then calculated. Finally the 1901-1960 99th percentile average amount was subtracted from the decadal average amount, and a percentage change was calculated."^^xsd:string;
a prov:Activity .
## The following entity was derived from a dataset using this activity
<http://data.globalchange.gov/image/0158fa86-481b-4a0b-8a79-4fd56b553cfd>
a gcis:Image;
prov:wasDerivedFrom <http://data.globalchange.gov/dataset/nca3-ghcn-daily-r201305>;
prov:wasGeneratedBy <http://data.globalchange.gov/activity/0158fa86-nca3-ghcn-daily-r201305-process>.
``` | non_process | iterate on activity to dataset relationships check entries under and create terms accordingly relating to other datasets need something for computing environment data usage names of input output data files etc maybe relate to pre existing dbpedia entries in use at the turtle there see for an example of how we have tackled this problem to date prefix dcterms prefix xsd prefix prov prefix dbpedia owl prefix gcis prefix meth dcterms identifier ghcn daily process dcterms description decadal average anomalies for the percentile of precipitation difference between the decade and the average precipitation for the northwest region were plotted as a bar graph note the far right bar contains data for years xsd string the activity began and ended at the following times duration of the activity dcterms sizeorduration hours xsd string output datafiles dbpedia owl filename r txt r ngridaverage regions r regions txt r regions csv r decadal barchart pro r perc anom decadal values txt r nnw perc anom pct eps r nw png r ncs extreme heavy precipitation png xsd string software utilized gcis software fortran compiler lahey fujitsu fortran compiler release idl version xsd string computing environment dcterms interactiveresource linux centos release mac os x darwin xsd string methodology employed meth methodology first ghcn d stations with minimal less than missing precipitation data were identified for each station the percentile threshold of daily precipitation was determined using the entire period of record next the total precipitation falling on days exceeding the percentile threshold was calculated for each year then grid box average values were calculated for each year by averaging the values for each station available in that grid box the annual values were then averaged for all grid boxes containing data in the northwest region decadal averages were then calculated finally the percentile average amount was subtracted from the decadal average amount and a 
percentage change was calculated xsd string a prov activity the following entity was derived from a dataset using this activity a gcis image prov wasderivedfrom prov wasgeneratedby | 0 |
818,560 | 30,694,577,399 | IssuesEvent | 2023-07-26 17:32:50 | helpwave/services | https://api.github.com/repos/helpwave/services | closed | Remove unnecessary warnings | priority: low chore | ### Describe the chore
warnings like `log.Warn().Err(err).Msg("database error")` are caught by our middleware and are redundant | 1.0 | Remove unnecessary warnings - ### Describe the chore
warnings like `log.Warn().Err(err).Msg("database error")` are caught by our middleware and are redundant | non_process | remove unnecessary warnings describe the chore warnings like log warn err err msg database error are caught by our middleware and are redundant | 0 |
21,808 | 30,316,438,452 | IssuesEvent | 2023-07-10 15:53:00 | tdwg/dwc | https://api.github.com/repos/tdwg/dwc | closed | Change term - scientificName | Term - change Class - Taxon non-normative Process - complete | ## Term change
* Submitter: Quentin Groom
* Efficacy Justification (why is this change necessary?): The names of hybrids are very poorly standardized and the International Code of Nomenclature for algae, fungi, and plants (Shenzhen Code) is not being followed in the case of hybrid names. Data managers are probably not aware that the Code rules on this and changing the not normative elements of this term will help guide people to create more standardized names.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): Almost every taxonomic resource for algae, fungi and plants needs to handle names of hybrids.
* Stability Justification (what concerns are there that this might affect existing implementations?): These changes are only likely to improve stability
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: none
Current Term definition: https://dwc.tdwg.org/list/#dwc_scientificName
Proposed attributes of the new term:
* Usage comments (recommendations regarding content, etc., not normative): Names of hybrids for algae, fungi and plants should follow the rules of the nomenclatural code (Articles H.1, H.2 and H.3). This means using the multiplication sign `×` (Unicode U+00D7) to identify a hybrid, not an `x` or an `X`.
* Examples (not normative): add examples `×Agropogon littoralis (Sm.) C. E. Hubb.` , `Mentha ×smithiana R. A. Graham` and `Agrostis stolonifera L. × Polypogon monspeliensis (L.) Desf.`
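The recommendation can also be applied mechanically. The sketch below is a hypothetical normalisation helper (not part of any existing Darwin Core tooling), and its two regular-expression rules are assumptions that cover only the example forms given here: a standalone `x` in a hybrid formula, and an `x` prefixed to a capitalised name:

```python
import re

HYBRID_SIGN = "\u00d7"  # ×, the multiplication sign required by the Code

def normalize_hybrid_sign(name: str) -> str:
    """Replace an ASCII 'x'/'X' used as a hybrid marker with U+00D7."""
    # A standalone x between name parts marks a hybrid formula: "A x B" -> "A × B"
    name = re.sub(r" [xX] ", f" {HYBRID_SIGN} ", name)
    # An x prefixed to a capitalised name marks a nothotaxon: "xAgropogon" -> "×Agropogon"
    name = re.sub(r"\b[xX](?=[A-Z])", HYBRID_SIGN, name)
    return name

print(normalize_hybrid_sign("Agrostis stolonifera L. x Polypogon monspeliensis (L.) Desf."))
print(normalize_hybrid_sign("xAgropogon littoralis (Sm.) C. E. Hubb."))
```

Already-normalised names pass through unchanged, since `×` matches neither rule.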
| 1.0 | Change term - scientificName - ## Term change
* Submitter: Quentin Groom
* Efficacy Justification (why is this change necessary?): The names of hybrids are very poorly standardized and the International Code of Nomenclature for algae, fungi, and plants (Shenzhen Code) is not being followed in the case of hybrid names. Data managers are probably not aware that the Code rules on this and changing the not normative elements of this term will help guide people to create more standardized names.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): Almost every taxonomic resource for algae, fungi and plants needs to handle names of hybrids.
* Stability Justification (what concerns are there that this might affect existing implementations?): These changes are only likely to improve stability
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: none
Current Term definition: https://dwc.tdwg.org/list/#dwc_scientificName
Proposed attributes of the new term:
* Usage comments (recommendations regarding content, etc., not normative): Names of hybrids for algae, fungi and plants should follow the rules of the nomenclatural code (Articles H.1, H.2 and H.3). This means using the multiplication sign `×` (Unicode U+00D7) to identify a hybrid, not an `x` or an `X`.
* Examples (not normative): add examples `×Agropogon littoralis (Sm.) C. E. Hubb.` , `Mentha ×smithiana R. A. Graham` and `Agrostis stolonifera L. × Polypogon monspeliensis (L.) Desf.`
| process | change term scientificname term change submitter quentin groom efficacy justification why is this change necessary the names of hybrids are very poorly standardized and the international code of nomenclature for algae fungi and plants shenzhen code is not being followed in the case of hybrid names data managers are probably not aware that the code rules on this and changing the not normative elements of this term will help guide people to create more standardized names demand justification if the change is semantic in nature name at least two organizations that independently need this term almost every taxonomic resource for algae fungi and plants needs to handle names of hybrids stability justification what concerns are there that this might affect existing implementations these changes are only likely to improve stability implications for dwciri namespace does this change affect a dwciri term version none current term definition proposed attributes of the new term usage comments recommendations regarding content etc not normative names of hybrids for algae fungi and plants should follow the rules of the nomenclatural code articles h h and h this means using the multiplication sign × to identify a hybrid not an x or x in unicode u examples not normative add examples ×agropogon littoralis sm c e hubb mentha ×smithiana r a graham and agrostis stolonifera l × polypogon monspeliensis l desf | 1 |
137,861 | 5,317,599,731 | IssuesEvent | 2017-02-13 22:59:36 | ampproject/amphtml | https://api.github.com/repos/ampproject/amphtml | closed | amp-bind: Improve init performance | Category: Dynamic/Personalized Content P1: High Priority Type: Bug | Related to #6199.
Several slow tasks are necessary during initialization. We should speed it up where possible and chunk/amortize across multiple frames otherwise:
- [x] DOM scan for bindings
- [ ] Expression parsing and AST generation | 1.0 | amp-bind: Improve init performance - Related to #6199.
Several slow tasks are necessary during initialization. We should speed it up where possible and chunk/amortize across multiple frames otherwise:
- [x] DOM scan for bindings
- [ ] Expression parsing and AST generation | non_process | amp bind improve init performance related to several slow tasks are necessary during initialization we should speed it up where possible and chunk amortize across multiple frames otherwise dom scan for bindings expression parsing and ast generation | 0 |
14,348 | 17,372,505,008 | IssuesEvent | 2021-07-30 15:46:11 | MicrosoftDocs/azure-devops-docs | https://api.github.com/repos/MicrosoftDocs/azure-devops-docs | closed | Is it possible to edit the YAML in a release pipeline? | devops-cicd-process/tech devops/prod doc-enhancement |
Thank you very much for this excellent page on defining variables in an Azure Pipeline! This is very helpful.
I have a question about implementing this in an Azure Release Pipeline. In the section titled "Reference secret variables in variable groups" I tried to reference a variable group I defined in the Pipeline Library area. However, I couldn't find any way to edit the YAML for the stage or job to declare the variable group and reference it, as was done in that section with this code:
```yaml
env:
MY_MAPPED_TOKEN: $(token) # Maps the secret variable $(token) from my-var-group
```
Is that not possible when working with a release pipeline in Azure DevOps?
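For context, this is roughly what the linked docs section describes for a YAML-defined pipeline (a sketch only; `my-var-group` and `token` are the placeholder names from the docs, not values from a real pipeline):

```yaml
variables:
- group: my-var-group          # variable group created under Pipelines > Library

steps:
- script: echo "secret is mapped into MY_MAPPED_TOKEN"
  env:
    MY_MAPPED_TOKEN: $(token)  # secret variables must be mapped in explicitly
```

My understanding (which may be wrong) is that classic release pipelines are defined in the web designer rather than in a YAML file, so there is no stage/job YAML to edit there, which would explain the behaviour described above.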
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a
* Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a
* Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch)
* Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | 1.0 | Is it possible to edit the YAML in a release pipeline? -
Thank you very much for this excellent page on defining variables in an Azure Pipeline! This is very helpful.
I have a question about implementing this in an Azure Release Pipeline. In the section titled "Reference secret variables in variable groups" I tried to reference a variable group I defined in the Pipeline Library area. However, I couldn't find any way to edit the YAML for the stage or job to declare the variable group and reference it as was done in that section with this code
```yaml
env:
MY_MAPPED_TOKEN: $(token) # Maps the secret variable $(token) from my-var-group
```
Is that not possible when working with a release pipeline in Azure DevOps?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a
* Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a
* Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch)
* Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | process | is it possible to edit the yaml in a release pipeline thank you very much for this excellent page on defining variables in an azure pipeline this is very helpful i have a question about implementing this in an azure release pipeline in the section titled reference secret variables in variable groups i tried to reference a variable group i defined in the pipeline library area however i couldn t find any way to edit the yaml for the stage or job to declare the variable group and reference it as was done in that section with this code yaml env my mapped token token maps the secret variable token from my var group is that not possible when working with a release pipeline in azure devops document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id bcdb content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam | 1 |
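In the snippet quoted in the record above, `env:` maps the secret `$(token)` into a process environment variable for the step. A small sketch of how a task script would then consume it — the variable name comes from the issue; the injected dummy value and the masking logic are illustrative only:

```python
import os

# Azure Pipelines would inject the mapped secret before the step runs;
# here we simulate that injection so the sketch is self-contained.
os.environ.setdefault("MY_MAPPED_TOKEN", "dummy-secret")

token = os.environ.get("MY_MAPPED_TOKEN")
# Never log secrets verbatim; show only a masked prefix.
masked = token[:2] + "***" if token else None
```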
22,469 | 31,280,793,122 | IssuesEvent | 2023-08-22 09:27:15 | h4sh5/npm-auto-scanner | https://api.github.com/repos/h4sh5/npm-auto-scanner | opened | chimp 6.1.0 has 2 guarddog issues | npm-install-script npm-silent-process-execution | ```{"npm-install-script":[{"code":" \"postinstall\": \"npm run chimp\",","location":"package/scaffold/package.json:8","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":"(0, node_child_process_1.spawn)('npm', ['start'], {\n stdio: 'ignore',\n detached: true,\n cwd: pathToRunFromChimp,\n}).unref();","location":"package/lib/scripts/end-to-end-test.js:28","message":"This package is silently executing another executable"}]}``` | 1.0 | chimp 6.1.0 has 2 guarddog issues - ```{"npm-install-script":[{"code":" \"postinstall\": \"npm run chimp\",","location":"package/scaffold/package.json:8","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":"(0, node_child_process_1.spawn)('npm', ['start'], {\n stdio: 'ignore',\n detached: true,\n cwd: pathToRunFromChimp,\n}).unref();","location":"package/lib/scripts/end-to-end-test.js:28","message":"This package is silently executing another executable"}]}``` | process | chimp has guarddog issues npm install script npm silent process execution n stdio ignore n detached true n cwd pathtorunfromchimp n unref location package lib scripts end to end test js message this package is silently executing another executable | 1 |
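The guarddog output in the record above is raw JSON. A stdlib sketch of summarizing such a report — the sample payload is abbreviated from the issue, and the field names follow the output shown:

```python
import json

report = json.loads("""
{"npm-install-script": [{"location": "package/scaffold/package.json:8",
  "message": "The package.json has a script automatically running when the package is installed"}],
 "npm-silent-process-execution": [{"location": "package/lib/scripts/end-to-end-test.js:28",
  "message": "This package is silently executing another executable"}]}
""")

# One entry per rule: how many findings it produced.
summary = {rule: len(findings) for rule, findings in report.items()}
total = sum(summary.values())
```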
809 | 3,286,730,928 | IssuesEvent | 2015-10-29 05:28:13 | t3kt/vjzual2 | https://api.github.com/repos/t3kt/vjzual2 | closed | color tinting | enhancement video processing | it should be able to support grayscale input with colored output
could either be its own module or it could be part of the color adjustment modules | 1.0 | color tinting - it should be able to support grayscale input with colored output
could either be its own module or it could be part of the color adjustment modules | process | color tinting it should be able to support grayscale input with colored output could either be its own module or it could be part of the color adjustment modules | 1 |
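A color-tint module like the one proposed above typically multiplies each grayscale intensity by a tint color. A minimal sketch with 0–1 float channels — the specific tint value is an arbitrary example, not anything from vjzual2:

```python
def tint(gray, rgb):
    """Map a grayscale intensity (0.0-1.0) to a tinted RGB triple by
    scaling each tint channel by the intensity."""
    r, g, b = rgb
    return (gray * r, gray * g, gray * b)

# Half-intensity gray tinted toward orange.
out = tint(0.5, (1.0, 0.5, 0.0))
```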
139,449 | 11,269,960,029 | IssuesEvent | 2020-01-14 09:57:40 | microsoft/AzureStorageExplorer | https://api.github.com/repos/microsoft/AzureStorageExplorer | closed | Update 'workaround' to 'work around' in Release Notes Known Issues part | 🧪 testing | **Storage Explorer Version:** 1.12.0
**Build:** [20200113.2](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3379477)
**Branch:** rel/1.12.0
**Platform/OS:** Windows 10/ Linux Ubuntu 18.04/ MacOS High Sierra
**Architecture:** ia32/x64
**Regression From:** Not a regression
**Steps to reproduce:**
1. Launch Storage Explorer -> Open Release Notes.
2. Check the last sentence in Known Issues part.
**Expect Experience:**
The phrase 'work around' is used in this sentence.
**Actual Experience:**
The noun 'workaround' is used in this sentence.

| 1.0 | Update 'workaround' to 'work around' in Release Notes Known Issues part - **Storage Explorer Version:** 1.12.0
**Build:** [20200113.2](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3379477)
**Branch:** rel/1.12.0
**Platform/OS:** Windows 10/ Linux Ubuntu 18.04/ MacOS High Sierra
**Architecture:** ia32/x64
**Regression From:** Not a regression
**Steps to reproduce:**
1. Launch Storage Explorer -> Open Release Notes.
2. Check the last sentence in Known Issues part.
**Expect Experience:**
The phrase 'work around' is used in this sentence.
**Actual Experience:**
The noun 'workaround' is used in this sentence.

| non_process | update workaround to work around in release notes known issues part storage explorer version build branch rel platform os windows linux ubuntu macos high sierra architecture regression from not a regression steps to reproduce launch storage explorer open release notes check the last sentence in known issues part expect experience the phrase work around is used in this sentence actual experience the noun workaround is used in this sentence | 0 |
634,296 | 20,358,059,795 | IssuesEvent | 2022-02-20 08:59:54 | ntop/ntopng | https://api.github.com/repos/ntop/ntopng | opened | Active Monitoring Category Missing | enhancement priority ticket | Please add a new alert category for active monitoring and check if some other categories are missing. The goal is to define granular alert categories whose name is meaningful | 1.0 | Active Monitoring Category Missing - Please add a new alert category for active monitoring and check if some other categories are missing. The goal is to define granular alert categories whose name is meaningful | non_process | active monitoring category missing please add a new alert category for active monitoring and check if some other categories are missing the goal is to define granular alert categories whose name is meaningful | 0 |
670,384 | 22,688,470,414 | IssuesEvent | 2022-07-04 16:29:30 | apache/incubator-devlake | https://api.github.com/repos/apache/incubator-devlake | closed | table.issue_changelogs lacks `action` field | type/bug priority/low | ## Describe the bug
table.issue_changelogs lacks `action` field
## To Reproduce
1. set up DevLake
2. visit mysql, and see `issue_changelogs` table.
## Expected behavior
Field ‘action’ exists.
## Actual behavior
Field ‘action’ doesn't exist.
## Screenshots

| 1.0 | table.issue_changelogs lacks `action` field - ## Describe the bug
table.issue_changelogs lacks `action` field
## To Reproduce
1. set up DevLake
2. visit mysql, and see `issue_changelogs` table.
## Expected behavior
Field ‘action’ exists.
## Actual behavior
Field ‘action’ doesn't exist.
## Screenshots

| non_process | table issue changelogs lacks action field describe the bug table issue changelogs lacks action field to reproduce set up devlake visit mysql and see issue changelogs table expected behavior field ‘action’ exists actual behavior field ‘action’ doesn t exist screenshots | 0 |
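A quick way to verify a missing column like `action` is to inspect the table schema directly. A stdlib sketch using an in-memory SQLite stand-in for the MySQL table — the column list here is a trimmed, hypothetical subset, not DevLake's full schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical, trimmed-down stand-in for table `issue_changelogs`.
conn.execute("""
    CREATE TABLE issue_changelogs (
        id TEXT PRIMARY KEY,
        issue_id TEXT,
        field_name TEXT
    )
""")

# PRAGMA table_info returns one row per column; index 1 is the name.
columns = [row[1] for row in conn.execute("PRAGMA table_info(issue_changelogs)")]
has_action = "action" in columns  # the bug report expects this to be True
```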
8,323 | 11,488,458,555 | IssuesEvent | 2020-02-11 13:56:51 | ESMValGroup/ESMValCore | https://api.github.com/repos/ESMValGroup/ESMValCore | opened | Irregular regridding fails for specific dataset | bug data issue preprocessor | **Describe the bug**
For historical `tos` of `MRI-CGCM3`, irregular regridding gives an empty field (completely filled with 0s). The original data is fine.
**Recipe**
```yml
preprocessors:
regrid:
regrid:
target_grid: 2x2
scheme: linear
diagnostics:
diag_x_zhai:
variables:
tos:
preprocessor: regrid
exp: historical
ensemble: r1i1p1
project: CMIP5
mip: Omon
start_year: 1980
end_year: 2004
reference_dataset: ''
additional_datasets:
- {dataset: MRI-CGCM3}
scripts: null
``` | 1.0 | Irregular regridding fails for specific dataset - **Describe the bug**
For historical `tos` of `MRI-CGCM3`, irregular regridding gives an empty field (completely filled with 0s). The original data is fine.
**Recipe**
```yml
preprocessors:
regrid:
regrid:
target_grid: 2x2
scheme: linear
diagnostics:
diag_x_zhai:
variables:
tos:
preprocessor: regrid
exp: historical
ensemble: r1i1p1
project: CMIP5
mip: Omon
start_year: 1980
end_year: 2004
reference_dataset: ''
additional_datasets:
- {dataset: MRI-CGCM3}
scripts: null
``` | process | irregular regridding fails for specific dataset describe the bug for historical tos of mri irregular regridding gives an empty field completely filled with the original data is fine recipe yml preprocessors regrid regrid target grid scheme linear diagnostics diag x zhai variables tos preprocessor regrid exp historical ensemble project mip omon start year end year reference dataset additional datasets dataset mri scripts null | 1 |
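A cheap sanity check for the symptom described above (a regridded field that comes back entirely zero) is an all-zero test on the output data. A pure-Python sketch with made-up sample values standing in for the `tos` field:

```python
def is_all_zero(field):
    """Return True if every value in a 2-D field is exactly 0 —
    the symptom reported for the regridded MRI-CGCM3 tos data."""
    return all(value == 0 for row in field for value in row)

original = [[284.1, 285.3], [286.0, 284.9]]   # plausible SSTs in kelvin
regridded = [[0.0, 0.0], [0.0, 0.0]]          # the buggy output
```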
277,383 | 24,066,333,503 | IssuesEvent | 2022-09-17 14:44:22 | ktraunmueller/Compositor | https://api.github.com/repos/ktraunmueller/Compositor | closed | Outline view selection behavior | bug automated test | The outline view loses selection when clicking a node in version 1.18.
Steps to reproduce:
- Click any node in the outline view
Observed:
- The selected node immediately loses the selection again
Expected:
- The node should stay selected (highlighted) | 1.0 | Outline view selection behavior - The outline view loses selection when clicking a node in version 1.18.
Steps to reproduce:
- Click any node in the outline view
Observed:
- The selected node immediately loses the selection again
Expected:
- The node should stay selected (highlighted) | non_process | outline view selection behavior the outline view loses selection when clicking a node in version steps to reproduce click any node in the outline view observed the selected node immediately loses the selection again expected the node should stay selected highlighted | 0 |
20,176 | 26,730,116,639 | IssuesEvent | 2023-01-30 03:09:21 | sophgo/tpu-mlir | https://api.github.com/repos/sophgo/tpu-mlir | closed | use oneDNN to do inference for top::ConcatOp and tpu::ConcatOp | task processing | Currently, top::ConcatOp is implemented in plain C++ with memcpy, as below:
``` c++
LogicalResult top::ConcatOp::init(InferenceParameter &p) { return success(); }
void top::ConcatOp::deinit(InferenceParameter &p) {}
LogicalResult top::ConcatOp::inference(InferenceParameter &p) {
...
memcpy(...)
return success()
}
```
Using the oneDNN API would give higher performance.
Get more info about oneDNN from <https://oneapi-src.github.io/oneDNN/supported_primitives.html>
You can test it by: test_onnx.py Concat | 1.0 | use oneDNN to do inference for top::ConcatOp and tpu::ConcatOp - Currently, top::ConcatOp implemented by C++ programming with memcpy as below:
``` c++
LogicalResult top::ConcatOp::init(InferenceParameter &p) { return success(); }
void top::ConcatOp::deinit(InferenceParameter &p) {}
LogicalResult top::ConcatOp::inference(InferenceParameter &p) {
...
memcpy(...)
return success()
}
```
Use oneDnn api would get higher performance.
Get more info about oneDNN from <https://oneapi-src.github.io/oneDNN/supported_primitives.html>
You can test it by: test_onnx.py Concat | process | use onednn to do inference for top concatop and tpu concatop currently top concatop implemented by c programming with memcpy as below c logicalresult top concatop init inferenceparameter p return success void top concatop deinit inferenceparameter p logicalresult top concatop inference inferenceparameter p memcpy return success use onednn api would get higher performance get more info about onednn from you can test it by test onnx py concat | 1 |
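For reference, the semantics the oneDNN concat primitive would replace are simple: join tensors along one axis. A pure-Python sketch for the 2-D case — this mirrors the behavior only, not the oneDNN API itself:

```python
def concat2d(tensors, axis):
    """Concatenate a list of 2-D row-major tensors along axis 0 or 1."""
    if axis == 0:
        # Stack all rows vertically.
        return [row[:] for t in tensors for row in t]
    # axis == 1: stitch matching rows side by side.
    return [sum((t[i] for t in tensors), []) for i in range(len(tensors[0]))]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
rows = concat2d([a, b], axis=0)
cols = concat2d([a, b], axis=1)
```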
823,327 | 30,990,445,138 | IssuesEvent | 2023-08-09 03:53:36 | googleapis/nodejs-firestore | https://api.github.com/repos/googleapis/nodejs-firestore | opened | Nested object with all `undefined` fields causes entire nested chain to be omitted | priority: p2 type: bug | Consider an operation like so:
```js
ref.set({
a: {
b: {
c: undefined
}
}
})
```
With `ignoreUndefinedProperties: true`, I would have expected the result written to Firestore to be
```js
{
a: {
b: {}
}
}
```
However the actual result is that the entire `a`→`b`→`c` nested chain is entirely omitted:
```js
{}
```
Is this the intended behaviour?
I believe it is due to these lines below, which, when there are no other non-`undefined` fields in any of the objects, flows all the way up the chain causing the entire nested chain to be omitted.
https://github.com/googleapis/nodejs-firestore/blob/ac35b372faf32f093d83af18d487f1b3f23ee673/dev/src/serializer.ts#L205-L207
Essentially those lines mean that an object consisting entirely of `undefined` fields should be omitted (the object itself), rather than just serialised as an empty object.
Tested on `@google-cloud/firestore` version: `6.7.0`.
| 1.0 | Nested object with all `undefined` fields causes entire nested chain to be omitted - Consider an operation like so:
```js
ref.set({
a: {
b: {
c: undefined
}
}
})
```
With `ignoreUndefinedProperties: true`, I would have expected the result written to Firestore to be
```js
{
a: {
b: {}
}
}
```
However the actual result is that the entire `a`→`b`→`c` nested chain is entirely omitted:
```js
{}
```
Is this the intended behaviour?
I believe it is due to these lines below, which, when there are no other non-`undefined` fields in any of the objects, flows all the way up the chain causing the entire nested chain to be omitted.
https://github.com/googleapis/nodejs-firestore/blob/ac35b372faf32f093d83af18d487f1b3f23ee673/dev/src/serializer.ts#L205-L207
Essentially those lines mean that an object consisting entirely of `undefined` fields should be omitted (the object itself), rather than just serialised as an empty object.
Tested on `@google-cloud/firestore` version: `6.7.0`.
| non_process | nested object with all undefined fields causes entire nested chain to be omitted consider an operation like so js ref set a b c undefined with ignoreundefinedproperties true i would have expected the result written to firestore to be js a b however the actual result is that the entire a → b → c nested chain is entirely omitted js is this the intended behaviour i believe it is due to these lines below which when there are no other non undefined fields in any of the objects flows all the way up the chain causing the entire nested chain to be omitted essentially those lines mean than an object consisting entirely of undefined fields should be omitted the object itself rather than just serialised as an empty object tested on google cloud firestore version | 0 |
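The behavior questioned in the record above can be reproduced outside Firestore: the difference is whether an all-`undefined` object collapses to `{}` or disappears entirely. A Python sketch using `None` as a stand-in for `undefined` — this models the reporter's expectation versus the observed behavior, and is not the SDK's code:

```python
def strip_keep_empty(value):
    """Drop None leaves but keep now-empty dicts (expected behavior)."""
    if isinstance(value, dict):
        return {k: strip_keep_empty(v) for k, v in value.items() if v is not None}
    return value

def strip_drop_empty(value):
    """Drop None leaves AND any dict left empty (observed behavior)."""
    if isinstance(value, dict):
        cleaned = {}
        for k, v in value.items():
            v = strip_drop_empty(v)
            if v is None or v == {}:
                continue  # the whole nested chain vanishes
            cleaned[k] = v
        return cleaned
    return value

doc = {"a": {"b": {"c": None}}}
expected = strip_keep_empty(doc)   # reporter's expectation
observed = strip_drop_empty(doc)   # what the SDK actually wrote
```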
764 | 4,203,436,360 | IssuesEvent | 2016-06-28 05:18:02 | TerriaJS/terriajs | https://api.github.com/repos/TerriaJS/terriajs | closed | Remove and rationalise ViewModels | C-New UI T-Architecture/refactor Z-Small | In the new UI, remove old unused ViewModels; rationalise those that are still required. | 1.0 | Remove and rationalise ViewModels - In the new UI, remove old unused ViewModels; rationalise those that are still required. | non_process | remove and rationalise viewmodels in the new ui remove old unused viewmodels rationalise those that are still required | 0 |
626,638 | 19,830,413,350 | IssuesEvent | 2022-01-20 11:22:00 | o3de/o3de | https://api.github.com/repos/o3de/o3de | closed | Wrinkle Layers not taking effect | kind/bug sig/graphics-audio priority/critical feature/graphics/materials | **Describe the bug**
In the Skin material type, the Wrinkle Layers property doesn't work.
**Steps to reproduce**
Steps to reproduce the behavior:
1. Create a new Skin material type
2. Enable Wrinkle Layers, set Base Color and Normals
3. See error
**Expected behavior**
The Wrinkle Layers property takes effect.
**Actual behavior**
The Wrinkle Layers property is not taking effect.
**Screenshots/Video**
If applicable, add screenshots and/or a video to help explain your problem.

**Desktop/Device (please complete the following information):**
- Device: PC
- OS: Windows
- Version 10
- CPU Intel(R)_Xeon(R)_W-2245
- GPU NVIDIA Quadro RTX 4000
- Memory 32GB
**Additional context**
Add any other context about the problem here. | 1.0 | Wrinkle Layers not taking effect - **Describe the bug**
In the Skin material type, the Wrinkle Layers property doesn't work.
**Steps to reproduce**
Steps to reproduce the behavior:
1. Create a new Skin material type
2. Enable Wrinkle Layers, set Base Color and Normals
3. See error
**Expected behavior**
The Wrinkle Layers property takes effect.
**Actual behavior**
The Wrinkle Layers property is not taking effect.
**Screenshots/Video**
If applicable, add screenshots and/or a video to help explain your problem.

**Desktop/Device (please complete the following information):**
- Device: PC
- OS: Windows
- Version 10
- CPU Intel(R)_Xeon(R)_W-2245
- GPU NVIDIA Quadro RTX 4000
- Memory 32GB
**Additional context**
Add any other context about the problem here. | non_process | wrinkle layers not taking effect describe the bug in skin material type the wrinkle layers property doesn t work steps to reproduce steps to reproduce the behavior create a new skin material type enable wrinkle layers set base color and normals see error expected behavior the wrinkle layers property takes effect actual behavior the wrinkle layers property not taking effect screenshots video if applicable add screenshots and or a video to help explain your problem desktop device please complete the following information device pc os windows version cpu intel r xeon r w gpu nvidia quadro rtx memory additional context add any other context about the problem here | 0 |
10,146 | 13,044,162,533 | IssuesEvent | 2020-07-29 03:47:33 | tikv/tikv | https://api.github.com/repos/tikv/tikv | closed | UCP: Migrate scalar function `JsonMergePatchSig` from TiDB | challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor |
## Description
Port the scalar function `JsonMergePatchSig` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
| 2.0 | UCP: Migrate scalar function `JsonMergePatchSig` from TiDB -
## Description
Port the scalar function `JsonMergePatchSig` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
| process | ucp migrate scalar function jsonmergepatchsig from tidb description port the scalar function jsonmergepatchsig from tidb to coprocessor score mentor s sticnarf recommended skills rust programming learning materials already implemented expressions ported from tidb | 1 |
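`JSON_MERGE_PATCH` follows RFC 7396 semantics, which are compact enough to sketch in Python for anyone porting the signature — this is a reference model of the spec, not the TiKV/TiDB code:

```python
def json_merge_patch(target, patch):
    """RFC 7396: a non-object patch replaces the target outright;
    within objects, null values delete keys and others merge recursively."""
    if not isinstance(patch, dict):
        return patch
    if not isinstance(target, dict):
        target = {}
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)   # null removes the key
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result

merged = json_merge_patch({"a": "b", "c": {"d": "e"}},
                          {"a": "z", "c": {"d": None}})
```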
80,198 | 30,109,077,208 | IssuesEvent | 2023-06-30 05:52:57 | scipy/scipy | https://api.github.com/repos/scipy/scipy | opened | BUG: Error in matrixa.dot(matrixb) in Scipy 1.11.0 and 1.11.1 | defect | ### Describe your issue.
We are the developers of Recommenders library and in the last few days all our tests broke because of changes introduced in 1.11.0 and 1.11.1.
The error can be found here: https://github.com/microsoft/recommenders/issues/1951
We would like to confirm whether the operator .dot is deprecated in favour of '*' or the problem is different.
### Reproducing Code Example
```python
test_scores = self.user_affinity[user_ids, :].dot(self.item_similarity)
```
### Error message
```shell
=================================== FAILURES ===================================
___________________________ test_sar_deep_dive_runs ____________________________
notebooks = ***'als_deep_dive': '/mnt/azureml/cr/j/4e944a170be74c0e9727bb7a9e80efcd/exe/wd/examples/02_model_collaborative_filtering...rk_movielens': '/mnt/azureml/cr/j/4e944a170be74c0e9727bb7a9e80efcd/exe/wd/examples/06_benchmarks/movielens.ipynb', ...***
output_notebook = 'output.ipynb', kernel_name = 'python3'
@pytest.mark.notebooks
def test_sar_deep_dive_runs(notebooks, output_notebook, kernel_name):
notebook_path = notebooks["sar_deep_dive"]
> pm.execute_notebook(notebook_path, output_notebook, kernel_name=kernel_name)
tests/unit/examples/test_notebooks_python.py:43:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/azureml-envs/azureml_2248098658e75fe22b1e778dcf414d40/lib/python3.9/site-packages/papermill/execute.py:[128](https://github.com/microsoft/recommenders/actions/runs/5388016531/jobs/9780549827#step:3:135): in execute_notebook
raise_for_execution_errors(nb, output_path)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
nb = ***'cells': [***'id': 'ed4e3ed1', 'cell_type': 'markdown', 'source': '<span style="color:red; font-family:Helvetica Neue, ...end_time': '2023-06-27T09:42:50.951864', 'duration': 12.94534, 'exception': True***, 'nbformat': 4, 'nbformat_minor': 5***
output_path = 'output.ipynb'
def raise_for_execution_errors(nb, output_path):
"""Assigned parameters into the appropriate place in the input notebook
Parameters
----------
nb : NotebookNode
Executable notebook object
output_path : str
Path to write executed notebook
"""
error = None
for index, cell in enumerate(nb.cells):
if cell.get("outputs") is None:
continue
for output in cell.outputs:
if output.output_type == "error":
if output.ename == "SystemExit" and (output.evalue == "" or output.evalue == "0"):
continue
error = PapermillExecutionError(
cell_index=index,
exec_count=cell.execution_count,
source=cell.source,
ename=output.ename,
evalue=output.evalue,
traceback=output.traceback,
)
break
if error:
# Write notebook back out with the Error Message at the top of the Notebook, and a link to
# the relevant cell (by adding a note just before the failure with an HTML anchor)
error_msg = ERROR_MESSAGE_TEMPLATE % str(error.exec_count)
error_msg_cell = nbformat.v4.new_markdown_cell(error_msg)
error_msg_cell.metadata['tags'] = [ERROR_MARKER_TAG]
error_anchor_cell = nbformat.v4.new_markdown_cell(ERROR_ANCHOR_MSG)
error_anchor_cell.metadata['tags'] = [ERROR_MARKER_TAG]
# put the anchor before the cell with the error, before all the indices change due to the
# heading-prepending
nb.cells.insert(error.cell_index, error_anchor_cell)
nb.cells.insert(0, error_msg_cell)
write_ipynb(nb, output_path)
> raise error
E papermill.exceptions.PapermillExecutionError:
E ---------------------------------------------------------------------------
E Exception encountered at "In [9]":
E ---------------------------------------------------------------------------
E ValueError Traceback (most recent call last)
E Cell In[9], line 1
E ----> 1 top_k = model.recommend_k_items(test, top_k=TOP_K, remove_seen=True)
E
E File /mnt/azureml/cr/j/4e944a[170](https://github.com/microsoft/recommenders/actions/runs/5388016531/jobs/9780549827#step:3:177)be74c0e9727bb7a9e80efcd/exe/wd/recommenders/models/sar/sar_singlenode.py:533, in SARSingleNode.recommend_k_items(self, test, top_k, sort_top_k, remove_seen)
E 520 def recommend_k_items(self, test, top_k=10, sort_top_k=True, remove_seen=False):
E 521 """Recommend top K items for all users which are in the test set
E 522
E 523 Args:
E (...)
E 530 pandas.DataFrame: top k recommendation items for each user
E 531 """
E --> 533 test_scores = self.score(test, remove_seen=remove_seen)
E 535 top_items, top_scores = get_top_k_scored_items(
E 536 scores=test_scores, top_k=top_k, sort_top_k=sort_top_k
E 537 )
E 539 df = pd.DataFrame(
E 540 ***
E 541 self.col_user: np.repeat(
E (...)
E 546 ***
E 547 )
E
E File /mnt/azureml/cr/j/4e944a170be74c0e9727bb7a9e80efcd/exe/wd/recommenders/models/sar/sar_singlenode.py:346, in SARSingleNode.score(self, test, remove_seen)
E 344 # calculate raw scores with a matrix multiplication
E 345 logger.info("Calculating recommendation scores")
E --> 346 test_scores = self.user_affinity[user_ids, :].dot(self.item_similarity)
E 348 # ensure we're working with a dense ndarray
E 349 if isinstance(test_scores, sparse.spmatrix):
E
E File /azureml-envs/azureml_[224](https://github.com/microsoft/recommenders/actions/runs/5388016531/jobs/9780549827#step:3:231)8098658e75fe22b1e778dcf414d40/lib/python3.9/site-packages/scipy/sparse/_base.py:411, in _spbase.dot(self, other)
E 409 return self * other
E 410 else:
E --> 411 return self @ other
E
E File /azureml-envs/azureml_2248098658e75fe22b1e778dcf414d40/lib/python3.9/site-packages/scipy/sparse/_base.py:622, in _spbase.__matmul__(self, other)
E 620 def __matmul__(self, other):
E 621 if isscalarlike(other):
E --> 622 raise ValueError("Scalar operands are not allowed, "
E 623 "use '*' instead")
E 624 return self._mul_dispatch(other)
E
E ValueError: Scalar operands are not allowed, use '*' instead
```
### SciPy/NumPy/Python version and system information
```shell
Python is 3.9.16, pandas is 1.5.3, numpy is 1.24.4, scipy is 1.11.0 , gcc is 9.4.0.
```
| 1.0 | BUG: Error in matrixa.dot(matrixb) in Scipy 1.11.0 and 1.11.1 - ### Describe your issue.
We are the developers of Recommenders library and in the last few days all our tests broke because of changes introduced in 1.11.0 and 1.11.1.
The error can be found here: https://github.com/microsoft/recommenders/issues/1951
We would like to confirm whether the operator .dot is deprecated in favour of '*' or the problem is different.
### Reproducing Code Example
```python
test_scores = self.user_affinity[user_ids, :].dot(self.item_similarity)
```
### Error message
```shell
=================================== FAILURES ===================================
___________________________ test_sar_deep_dive_runs ____________________________
notebooks = ***'als_deep_dive': '/mnt/azureml/cr/j/4e944a170be74c0e9727bb7a9e80efcd/exe/wd/examples/02_model_collaborative_filtering...rk_movielens': '/mnt/azureml/cr/j/4e944a170be74c0e9727bb7a9e80efcd/exe/wd/examples/06_benchmarks/movielens.ipynb', ...***
output_notebook = 'output.ipynb', kernel_name = 'python3'
@pytest.mark.notebooks
def test_sar_deep_dive_runs(notebooks, output_notebook, kernel_name):
notebook_path = notebooks["sar_deep_dive"]
> pm.execute_notebook(notebook_path, output_notebook, kernel_name=kernel_name)
tests/unit/examples/test_notebooks_python.py:43:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/azureml-envs/azureml_2248098658e75fe22b1e778dcf414d40/lib/python3.9/site-packages/papermill/execute.py:[128](https://github.com/microsoft/recommenders/actions/runs/5388016531/jobs/9780549827#step:3:135): in execute_notebook
raise_for_execution_errors(nb, output_path)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
nb = ***'cells': [***'id': 'ed4e3ed1', 'cell_type': 'markdown', 'source': '<span style="color:red; font-family:Helvetica Neue, ...end_time': '2023-06-27T09:42:50.951864', 'duration': 12.94534, 'exception': True***, 'nbformat': 4, 'nbformat_minor': 5***
output_path = 'output.ipynb'
def raise_for_execution_errors(nb, output_path):
"""Assigned parameters into the appropriate place in the input notebook
Parameters
----------
nb : NotebookNode
Executable notebook object
output_path : str
Path to write executed notebook
"""
error = None
for index, cell in enumerate(nb.cells):
if cell.get("outputs") is None:
continue
for output in cell.outputs:
if output.output_type == "error":
if output.ename == "SystemExit" and (output.evalue == "" or output.evalue == "0"):
continue
error = PapermillExecutionError(
cell_index=index,
exec_count=cell.execution_count,
source=cell.source,
ename=output.ename,
evalue=output.evalue,
traceback=output.traceback,
)
break
if error:
# Write notebook back out with the Error Message at the top of the Notebook, and a link to
# the relevant cell (by adding a note just before the failure with an HTML anchor)
error_msg = ERROR_MESSAGE_TEMPLATE % str(error.exec_count)
error_msg_cell = nbformat.v4.new_markdown_cell(error_msg)
error_msg_cell.metadata['tags'] = [ERROR_MARKER_TAG]
error_anchor_cell = nbformat.v4.new_markdown_cell(ERROR_ANCHOR_MSG)
error_anchor_cell.metadata['tags'] = [ERROR_MARKER_TAG]
# put the anchor before the cell with the error, before all the indices change due to the
# heading-prepending
nb.cells.insert(error.cell_index, error_anchor_cell)
nb.cells.insert(0, error_msg_cell)
write_ipynb(nb, output_path)
> raise error
E papermill.exceptions.PapermillExecutionError:
E ---------------------------------------------------------------------------
E Exception encountered at "In [9]":
E ---------------------------------------------------------------------------
E ValueError Traceback (most recent call last)
E Cell In[9], line 1
E ----> 1 top_k = model.recommend_k_items(test, top_k=TOP_K, remove_seen=True)
E
E File /mnt/azureml/cr/j/4e944a[170](https://github.com/microsoft/recommenders/actions/runs/5388016531/jobs/9780549827#step:3:177)be74c0e9727bb7a9e80efcd/exe/wd/recommenders/models/sar/sar_singlenode.py:533, in SARSingleNode.recommend_k_items(self, test, top_k, sort_top_k, remove_seen)
E 520 def recommend_k_items(self, test, top_k=10, sort_top_k=True, remove_seen=False):
E 521 """Recommend top K items for all users which are in the test set
E 522
E 523 Args:
E (...)
E 530 pandas.DataFrame: top k recommendation items for each user
E 531 """
E --> 533 test_scores = self.score(test, remove_seen=remove_seen)
E 535 top_items, top_scores = get_top_k_scored_items(
E 536 scores=test_scores, top_k=top_k, sort_top_k=sort_top_k
E 537 )
E 539 df = pd.DataFrame(
E 540 ***
E 541 self.col_user: np.repeat(
E (...)
E 546 ***
E 547 )
E
E File /mnt/azureml/cr/j/4e944a170be74c0e9727bb7a9e80efcd/exe/wd/recommenders/models/sar/sar_singlenode.py:346, in SARSingleNode.score(self, test, remove_seen)
E 344 # calculate raw scores with a matrix multiplication
E 345 logger.info("Calculating recommendation scores")
E --> 346 test_scores = self.user_affinity[user_ids, :].dot(self.item_similarity)
E 348 # ensure we're working with a dense ndarray
E 349 if isinstance(test_scores, sparse.spmatrix):
E
E File /azureml-envs/azureml_[224](https://github.com/microsoft/recommenders/actions/runs/5388016531/jobs/9780549827#step:3:231)8098658e75fe22b1e778dcf414d40/lib/python3.9/site-packages/scipy/sparse/_base.py:411, in _spbase.dot(self, other)
E 409 return self * other
E 410 else:
E --> 411 return self @ other
E
E File /azureml-envs/azureml_2248098658e75fe22b1e778dcf414d40/lib/python3.9/site-packages/scipy/sparse/_base.py:622, in _spbase.__matmul__(self, other)
E 620 def __matmul__(self, other):
E 621 if isscalarlike(other):
E --> 622 raise ValueError("Scalar operands are not allowed, "
E 623 "use '*' instead")
E 624 return self._mul_dispatch(other)
E
E ValueError: Scalar operands are not allowed, use '*' instead
```
### SciPy/NumPy/Python version and system information
```shell
Python is 3.9.16, pandas is 1.5.3, numpy is 1.24.4, scipy is 1.11.0, gcc is 9.4.0.
```
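The traceback shows `_spbase.dot` falling back to `self @ other`, while `__matmul__` rejects anything its broader `isscalarlike` check flags. That failure mode (the two checks disagreeing about an operand) can be sketched with a stdlib-only toy model; the helper names mirror the SciPy internals seen above, but the classes are hypothetical and this does not establish what the real `item_similarity` operand was:

```python
def np_isscalar(x):
    """Narrow check, standing in for numpy.isscalar as used by dot()."""
    return isinstance(x, (int, float, complex))

def isscalarlike(x):
    """Broader check, standing in for scipy.sparse._sputils.isscalarlike."""
    return np_isscalar(x) or getattr(x, "ndim", None) == 0

class ZeroD:
    """Hypothetical operand that only the broader check treats as a scalar."""
    ndim = 0

class ToySparse:
    ndim = 2

    def dot(self, other):
        if np_isscalar(other):
            return "elementwise"       # the `self * other` branch in _spbase.dot
        return self @ other            # the `self @ other` branch (line 411 above)

    def __matmul__(self, other):
        if isscalarlike(other):        # guard seen at _base.py:622 above
            raise ValueError("Scalar operands are not allowed, use '*' instead")
        return "matmul"

m = ToySparse()
assert m.dot(2) == "elementwise"       # true scalar: caught before `@`
assert m.dot(ToySparse()) == "matmul"  # matrix-like operand: fine
try:
    m.dot(ZeroD())                     # slips past the narrow check, hits the guard
except ValueError as err:
    assert "use '*' instead" in str(err)
```

The practical takeaway for callers is the one the error message suggests: make sure the right-hand operand of `dot` is genuinely matrix-like, and use `*` for scalars.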
| non_process | bug error in matrixa dot matrixb in scipy and describe your issue we are the developers of recommenders library and in the last few days all our tests broke because of changes introduced in and the error can be found here we would like to confirm whether the operator dot is deprecated in favour of or the problem is different reproducing code example python test scores self user affinity dot self item similarity error message shell failures test sar deep dive runs notebooks als deep dive mnt azureml cr j exe wd examples model collaborative filtering rk movielens mnt azureml cr j exe wd examples benchmarks movielens ipynb output notebook output ipynb kernel name pytest mark notebooks def test sar deep dive runs notebooks output notebook kernel name notebook path notebooks pm execute notebook notebook path output notebook kernel name kernel name tests unit examples test notebooks python py azureml envs azureml lib site packages papermill execute py in execute notebook raise for execution errors nb output path nb cells id cell type markdown source span style color red font family helvetica neue end time duration exception true nbformat nbformat minor output path output ipynb def raise for execution errors nb output path assigned parameters into the appropriate place in the input notebook parameters nb notebooknode executable notebook object output path str path to write executed notebook error none for index cell in enumerate nb cells if cell get outputs is none continue for output in cell outputs if output output type error if output ename systemexit and output evalue or output evalue continue error papermillexecutionerror cell index index exec count cell execution count source cell source ename output ename evalue output evalue traceback output traceback break if error write notebook back out with the error message at the top of the notebook and a link to the relevant cell by adding a note just before the failure with an html anchor error msg error 
message template str error exec count error msg cell nbformat new markdown cell error msg error msg cell metadata error anchor cell nbformat new markdown cell error anchor msg error anchor cell metadata put the anchor before the cell with the error before all the indices change due to the heading prepending nb cells insert error cell index error anchor cell nb cells insert error msg cell write ipynb nb output path raise error e papermill exceptions papermillexecutionerror e e exception encountered at in e e valueerror traceback most recent call last e cell in line e top k model recommend k items test top k top k remove seen true e e file mnt azureml cr j in sarsinglenode recommend k items self test top k sort top k remove seen e def recommend k items self test top k sort top k true remove seen false e recommend top k items for all users which are in the test set e e args e e pandas dataframe top k recommendation items for each user e e test scores self score test remove seen remove seen e top items top scores get top k scored items e scores test scores top k top k sort top k sort top k e e df pd dataframe e e self col user np repeat e e e e e file mnt azureml cr j exe wd recommenders models sar sar singlenode py in sarsinglenode score self test remove seen e calculate raw scores with a matrix multiplication e logger info calculating recommendation scores e test scores self user affinity dot self item similarity e ensure we re working with a dense ndarray e if isinstance test scores sparse spmatrix e e file azureml envs azureml in spbase dot self other e return self other e else e return self other e e file azureml envs azureml lib site packages scipy sparse base py in spbase matmul self other e def matmul self other e if isscalarlike other e raise valueerror scalar operands are not allowed e use instead e return self mul dispatch other e e valueerror scalar operands are not allowed use instead scipy numpy python version and system information shell python is pandas 
is numpy is scipy is gcc is | 0 |
723,779 | 24,907,850,849 | IssuesEvent | 2022-10-29 13:41:29 | AY2223S1-CS2103T-W16-3/tp | https://api.github.com/repos/AY2223S1-CS2103T-W16-3/tp | closed | [PE-D][Tester E] Missing feature in UG | priority.Medium type.Bug severity.Low | 
I was able to use this find command. However, this feature was not written in the UG. Also, there seems to be no difference between this find command and the get command.
<!--session: 1666943991997-d0271cf5-6577-4789-80d7-2bb3024d8faf-->
<!--Version: Web v3.4.4-->
-------------
Labels: `severity.Low` `type.DocumentationBug`
original: sjoann/ped#3 | 1.0 | [PE-D][Tester E] Missing feature in UG - 
I was able to use this find command. However, this feature was not written in the UG. Also, there seems to be no difference between this find command and the get command.
<!--session: 1666943991997-d0271cf5-6577-4789-80d7-2bb3024d8faf-->
<!--Version: Web v3.4.4-->
-------------
Labels: `severity.Low` `type.DocumentationBug`
original: sjoann/ped#3 | non_process | missing feature in ug i was able to use this find command however this feature was not written in the ug also there seem to be no difference between this find command and get command labels severity low type documentationbug original sjoann ped | 0 |
12,211 | 14,742,918,895 | IssuesEvent | 2021-01-07 13:06:51 | kdjstudios/SABillingGitlab | https://api.github.com/repos/kdjstudios/SABillingGitlab | closed | Towne - Security Deposit - comment | anc-ui anp-1.5 ant-enhancement grt-ui processes | In GitLab by @kdjstudios on Jun 24, 2019, 13:41
**Submitted by:** Deb Crown <dcrown@towneanswering.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-06-14-68471/conversation
**Server:** External (Both)
**Client/Site:** Towne (All)
**Account:** All
**Issue:**
I just wanted to comment that I think it was a poor placement choice for the ‘security deposit fees’ button.
It displaced the ‘staged fees’ button….moving it over to the right….away from where it had always been located.
Did the designers think that security fees would be accessed more often than staged fees?
It seems like it would have been better placed beside the credit card and e-check payment buttons.
Just my 2 cents…I know you didn’t ask! | 1.0 | Towne - Security Deposit - comment - In GitLab by @kdjstudios on Jun 24, 2019, 13:41
**Submitted by:** Deb Crown <dcrown@towneanswering.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-06-14-68471/conversation
**Server:** External (Both)
**Client/Site:** Towne (All)
**Account:** All
**Issue:**
I just wanted to comment that I think it was a poor placement choice for the ‘security deposit fees’ button.
It displaced the ‘staged fees’ button….moving it over to the right….away from where it had always been located.
Did the designers think that security fees would be accessed more often than staged fees?
It seems like it would have been better placed beside the credit card and e-check payment buttons.
Just my 2 cents…I know you didn’t ask! | process | towne security deposit comment in gitlab by kdjstudios on jun submitted by deb crown helpdesk server external both client site towne all account all issue i just wanted to comment that i think it was a poor placement choice for the ‘security deposit fees’ button it displaced the ‘staged fees’ button… moving it over to the right… away from where it had always been located did the designers think that security fees would be accessed more often than staged fees it seems like it would have been better placed beside the credit card and e check payment buttons just my cents…i know you didn’t ask | 1 |
12,877 | 15,268,025,916 | IssuesEvent | 2021-02-22 10:50:44 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Redirecting StdOut and StdErr results in unexpected behaviour in Linux console apps | area-System.Diagnostics.Process needs more info |
### Description
Redirecting both STDOUT and STDERR using the Process class results in unexpected behaviour.
I have a very simple .NET5 Linux program (same behaviour on Core 3.1) which runs Nano and waits for it to terminate. If I redirect only one output stream then this code works as expected. Nano loads and I can interact exactly how I would if it was launched via Bash.
If I redirect stderr and stdout a number of unexpected things happen:
- The nano window does not fill the entire terminal screen
- Escape sequences appear on screen (^X as an example)
- I can use the menus but I have to press the escape sequence then hit enter
It seems that there is some corruption or change in behaviour in handling the STDOUT and STDERR streams BUT only when both are redirected.
This is an example program that shows the correct behaviour I would expect. In this example only STDOUT is redirected and is immediately echoed to screen by copying to stream to Console.Out. Note that no other streams are redirected.
```
static async Task Main(string[] args)
{
var proc = new Process();
proc.StartInfo.UseShellExecute = false;
proc.StartInfo.FileName = "/bin/nano";
proc.StartInfo.Arguments = "hello";
proc.StartInfo.CreateNoWindow = true;
proc.StartInfo.RedirectStandardError = false;
proc.StartInfo.RedirectStandardOutput = true; // only redirect stdout
proc.StartInfo.RedirectStandardInput = false;
proc.Start();
var t1 = proc.StandardOutput.BaseStream.CopyToAsync(Console.OpenStandardOutput());
proc.WaitForExit();
await t1;
}
```
This is an example of incorrect handling. Note that here both STDOUT and STDERR are redirected. Only one stream is actually copied for the sake of a small test case.
```
static async Task Main(string[] args)
{
var proc = new Process();
proc.StartInfo.UseShellExecute = false;
proc.StartInfo.FileName = "/bin/nano";
proc.StartInfo.Arguments = "hello";
proc.StartInfo.CreateNoWindow = true;
proc.StartInfo.RedirectStandardError = true; // redirecting both out streams
proc.StartInfo.RedirectStandardOutput = true; // redirecting both out streams
proc.StartInfo.RedirectStandardInput = false;
proc.Start();
var t1 = proc.StandardOutput.BaseStream.CopyToAsync(Console.OpenStandardOutput());
proc.WaitForExit();
await t1;
}
```
Nano creates no STDERR as seen by `nano 2>> stderr.txt` so I'm not convinced the content of these streams is actually important here.
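One plausible reading of this (an assumption, not verified against nano's source) is that full-screen programs probe whether their standard streams are attached to a terminal and can fall back to whichever of stdout/stderr is still a TTY; once both are pipes, that probe fails. The stream side of the behaviour is easy to observe from any language; here is a minimal Python sketch (chosen for brevity, the C# `RedirectStandard*` flags have the same effect on the child process):

```python
import subprocess
import sys

# A child process reports whether its stdout/stderr look like a terminal.
probe = "import sys; print(sys.stdout.isatty(), sys.stderr.isatty())"

# Redirect only stdout (analogous to RedirectStandardOutput = true):
only_out = subprocess.run(
    [sys.executable, "-c", probe],
    stdout=subprocess.PIPE, text=True,
)

# Redirect both (analogous to redirecting stdout and stderr):
both = subprocess.run(
    [sys.executable, "-c", probe],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True,
)

# stdout is a pipe in both cases, so the first value is always False;
# in the first case stderr stays whatever the parent's stderr is attached to.
print("only stdout redirected:", only_out.stdout.strip())
print("both redirected:       ", both.stdout.strip())
assert both.stdout.strip() == "False False"
```

In the single-redirect case the second value depends on the parent's stderr, which lines up with the observation that nano still behaves correctly when only one stream is redirected.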
### Configuration
- X64 Linux Ubuntu 18.04 WSL1
- X64 Linux Ubuntu 18.04 WSL2
- X64 Linux Ubuntu 20.04 WSL2
### Regression?
Same behaviour on Core 3.1
### Other information
Redirecting StdOut and StdErr results in unexpected behaviour in Linux console apps -
### Description
Redirecting both STDOUT and STDERR using the Process class results in unexpected behaviour.
I have a very simple .NET5 Linux program (same behaviour on Core 3.1) which runs Nano and waits for it to terminate. If I redirect only one output stream then this code works as expected. Nano loads and I can interact exactly how I would if it was launched via Bash.
If I redirect stderr and stdout a number of unexpected things happen:
- The nano window does not fill the entire terminal screen
- Escape sequences appear on screen (^X as an example)
- I can use the menus but I have to press the escape sequence then hit enter
It seems that there is some corruption or change in behaviour in handling the STDOUT and STDERR streams BUT only when both are redirected.
This is an example program that shows the correct behaviour I would expect. In this example only STDOUT is redirected and is immediately echoed to screen by copying to stream to Console.Out. Note that no other streams are redirected.
```
static async Task Main(string[] args)
{
var proc = new Process();
proc.StartInfo.UseShellExecute = false;
proc.StartInfo.FileName = "/bin/nano";
proc.StartInfo.Arguments = "hello";
proc.StartInfo.CreateNoWindow = true;
proc.StartInfo.RedirectStandardError = false;
proc.StartInfo.RedirectStandardOutput = true; // only redirect stdout
proc.StartInfo.RedirectStandardInput = false;
proc.Start();
var t1 = proc.StandardOutput.BaseStream.CopyToAsync(Console.OpenStandardOutput());
proc.WaitForExit();
await t1;
}
```
This is an example of incorrect handling. Note that here both STDOUT and STDERR are redirected. Only one stream is actually copied for the sake of a small test case.
```
static async Task Main(string[] args)
{
var proc = new Process();
proc.StartInfo.UseShellExecute = false;
proc.StartInfo.FileName = "/bin/nano";
proc.StartInfo.Arguments = "hello";
proc.StartInfo.CreateNoWindow = true;
proc.StartInfo.RedirectStandardError = true; // redirecting both out streams
proc.StartInfo.RedirectStandardOutput = true; // redirecting both out streams
proc.StartInfo.RedirectStandardInput = false;
proc.Start();
var t1 = proc.StandardOutput.BaseStream.CopyToAsync(Console.OpenStandardOutput());
proc.WaitForExit();
await t1;
}
```
Nano creates no STDERR as seen by `nano 2>> stderr.txt` so I'm not convinced the content of these streams is actually important here.
### Configuration
- X64 Linux Ubuntu 18.04 WSL1
- X64 Linux Ubuntu 18.04 WSL2
- X64 Linux Ubuntu 20.04 WSL2
### Regression?
Same behaviour on Core 3.1
### Other information
| process | redirecting stdout and stderr results in unexpected behaviour in linux console apps description redirecting both stdout and stderr using the process class results in unexpected behaviour i have a very simple linux program same behaviour on core which runs nano and waits for it to terminate if i redirect only one output stream then this code works as expected nano loads and i can interact exactly how i would if it was launched via bash if i redirect stderr and stdout a number of unexpected things happen the nano window does not fill the entire terminal screen escape sequences appear on screen x as an example i can use the menus but i have to press the escape sequence then hit enter it seems that there is some corruption or change in behaviour in handling the stdout and stderr streams but only when both are redirected this is an example program that shows the correct behaviour i would expect in this example only stdout is redirected and is immediately echoed to screen by copying to stream to console out note that no other streams are redirected static async task main string args var proc new process proc startinfo useshellexecute false proc startinfo filename bin nano proc startinfo arguments hello proc startinfo createnowindow true proc startinfo redirectstandarderror false proc startinfo redirectstandardoutput true only redirect stdout proc startinfo redirectstandardinput false proc start var proc standardoutput basestream copytoasync console openstandardoutput proc waitforexit await this is an example of incorrect handling note that here both stdout and stderr are redirected only one stream is actually copied for the sake of a small test case static async task main string args var proc new process proc startinfo useshellexecute false proc startinfo filename bin nano proc startinfo arguments hello proc startinfo createnowindow true proc startinfo redirectstandarderror true redirecting both out streams proc startinfo redirectstandardoutput true 
redirecting both out streams proc startinfo redirectstandardinput false proc start var proc standardoutput basestream copytoasync console openstandardoutput proc waitforexit await nano creates no stderr as seen by nano stderr txt so i m not convinced the content of these streams is actually important here please share a clear and concise description of the problem include minimal steps to reproduce the problem if possible e g the smallest possible code snippet or a small repo to clone with steps to run it what behavior are you seeing and what behavior would you expect configuration which version of net is the code running on what os and version and what distro if applicable what is the architecture arm do you know whether it is specific to that configuration if you re using blazor which web browser s do you see this issue in linux ubuntu linux ubuntu linux ubuntu regression same behaviour on core did this work in a previous build or release of net core or from net framework if you can try a previous release or build to find out that can help us narrow down the problem if you don t know that s ok other information please include any relevant stack traces or error messages if possible please include text as text rather than images so it shows up in searches if you have an idea where the problem might lie let us know that here please include any pointers to code relevant changes or related issues you know of do you know of any workarounds | 1 |
137,224 | 11,102,830,644 | IssuesEvent | 2019-12-17 01:31:09 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | ccl/partitionccl: TestRepartitioning failed under stress | C-test-failure O-robot | SHA: https://github.com/cockroachdb/cockroach/commits/c280de40c2bcab93c41fe82bef8353a5ecd95ac4
Parameters:
```
TAGS=
GOFLAGS=-parallel=4
```
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=TestRepartitioning PKG=github.com/cockroachdb/cockroach/pkg/ccl/partitionccl TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1312022&tab=buildLog
```
[n1,client=127.0.0.1:55340,user=root] planning starts: SELECT
[n1,client=127.0.0.1:55340,user=root] generating optimizer plan
[n1,client=127.0.0.1:55340,user=root] added table 'data.public."single col range partitioning - MAXVALUE"' to table collection
[n1,client=127.0.0.1:55340,user=root] query cache hit
[n1,client=127.0.0.1:55340,user=root] planning ends
[n1,client=127.0.0.1:55340,user=root] checking distributability
[n1,client=127.0.0.1:55340,user=root] will distribute plan: true
[n1,client=127.0.0.1:55340,user=root] execution starts: distributed engine
=== SPAN START: consuming rows ===
[n1,client=127.0.0.1:55340,user=root] creating DistSQL plan with isLocal=false
[n1,client=127.0.0.1:55340,user=root] querying next range at /Table/76/1/6
[n1,client=127.0.0.1:55340,user=root] running DistSQL plan
=== SPAN START: flow ===
[n1,client=127.0.0.1:55340,user=root] starting (0 processors, 0 startables)
=== SPAN START: table reader ===
cockroach.processorid: 0
cockroach.stat.tablereader.bytes.read: 0 B
cockroach.stat.tablereader.input.rows: 0
cockroach.stat.tablereader.stalltime: 0s
[n1,client=127.0.0.1:55340,user=root] starting scan with limitBatches true
[n1,client=127.0.0.1:55340,user=root] Scan /Table/76/{1/6-2}
=== SPAN START: txn coordinator send ===
=== SPAN START: dist sender send ===
[n1,client=127.0.0.1:55340,user=root,txn=e7753557] querying next range at /Table/76/1/6
[n1,client=127.0.0.1:55340,user=root,txn=e7753557] r91: sending batch 1 Scan to (n1,s1):1
[n1,client=127.0.0.1:55340,user=root,txn=e7753557] sending request to local client
=== SPAN START: /cockroach.roachpb.Internal/Batch ===
[n1] 1 Scan
[n1,s1] executing 1 requests
[n1,s1,r91/1:/Table/76/{1/5-2}] read-only path
[n1,s1,r91/1:/Table/76/{1/5-2}] read has no clock uncertainty
[n1,s1,r91/1:/Table/76/{1/5-2}] acquire latches
[n1,s1,r91/1:/Table/76/{1/5-2}] waited 4.819µs to acquire latches
[n1,s1,r91/1:/Table/76/{1/5-2}] waiting for read lock
[n1,s1,r91/1:/Table/76/{1/5-2}] read completed
=== SPAN START: count rows ===
cockroach.processorid: 1
cockroach.stat.aggregator.input.rows: 0
cockroach.stat.aggregator.mem.max: 0 B
cockroach.stat.aggregator.stalltime: 37µs
[n1,client=127.0.0.1:55340,user=root] execution ends
[n1,client=127.0.0.1:55340,user=root] rows affected: 1
[n1,client=127.0.0.1:55340,user=root] AutoCommit. err: <nil>
[n1,client=127.0.0.1:55340,user=root] releasing 1 tables
=== SPAN START: exec cmd: exec stmt ===
[n1,client=127.0.0.1:55340,user=root] [NoTxn pos:7134] executing ExecStmt: SET TRACING = off
[n1,client=127.0.0.1:55340,user=root] executing: SET TRACING = off in state: NoTxn
goroutine 457667 [running]:
runtime/debug.Stack(0xa7a358200, 0xc003937050, 0x3876e80)
/usr/local/go/src/runtime/debug/stack.go:24 +0xa7
github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x38e26a0, 0xc0014b4500, 0xc003937020)
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:49 +0x103
github.com/cockroachdb/cockroach/pkg/ccl/partitionccl.TestRepartitioning.func1(0xc0014b4500)
/go/src/github.com/cockroachdb/cockroach/pkg/ccl/partitionccl/partition_test.go:1356 +0xabc
testing.tRunner(0xc0014b4500, 0xc005a12120)
/usr/local/go/src/testing/testing.go:827 +0xbf
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:878 +0x35c
``` | 1.0 | ccl/partitionccl: TestRepartitioning failed under stress - SHA: https://github.com/cockroachdb/cockroach/commits/c280de40c2bcab93c41fe82bef8353a5ecd95ac4
Parameters:
```
TAGS=
GOFLAGS=-parallel=4
```
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=TestRepartitioning PKG=github.com/cockroachdb/cockroach/pkg/ccl/partitionccl TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1312022&tab=buildLog
```
[n1,client=127.0.0.1:55340,user=root] planning starts: SELECT
[n1,client=127.0.0.1:55340,user=root] generating optimizer plan
[n1,client=127.0.0.1:55340,user=root] added table 'data.public."single col range partitioning - MAXVALUE"' to table collection
[n1,client=127.0.0.1:55340,user=root] query cache hit
[n1,client=127.0.0.1:55340,user=root] planning ends
[n1,client=127.0.0.1:55340,user=root] checking distributability
[n1,client=127.0.0.1:55340,user=root] will distribute plan: true
[n1,client=127.0.0.1:55340,user=root] execution starts: distributed engine
=== SPAN START: consuming rows ===
[n1,client=127.0.0.1:55340,user=root] creating DistSQL plan with isLocal=false
[n1,client=127.0.0.1:55340,user=root] querying next range at /Table/76/1/6
[n1,client=127.0.0.1:55340,user=root] running DistSQL plan
=== SPAN START: flow ===
[n1,client=127.0.0.1:55340,user=root] starting (0 processors, 0 startables)
=== SPAN START: table reader ===
cockroach.processorid: 0
cockroach.stat.tablereader.bytes.read: 0 B
cockroach.stat.tablereader.input.rows: 0
cockroach.stat.tablereader.stalltime: 0s
[n1,client=127.0.0.1:55340,user=root] starting scan with limitBatches true
[n1,client=127.0.0.1:55340,user=root] Scan /Table/76/{1/6-2}
=== SPAN START: txn coordinator send ===
=== SPAN START: dist sender send ===
[n1,client=127.0.0.1:55340,user=root,txn=e7753557] querying next range at /Table/76/1/6
[n1,client=127.0.0.1:55340,user=root,txn=e7753557] r91: sending batch 1 Scan to (n1,s1):1
[n1,client=127.0.0.1:55340,user=root,txn=e7753557] sending request to local client
=== SPAN START: /cockroach.roachpb.Internal/Batch ===
[n1] 1 Scan
[n1,s1] executing 1 requests
[n1,s1,r91/1:/Table/76/{1/5-2}] read-only path
[n1,s1,r91/1:/Table/76/{1/5-2}] read has no clock uncertainty
[n1,s1,r91/1:/Table/76/{1/5-2}] acquire latches
[n1,s1,r91/1:/Table/76/{1/5-2}] waited 4.819µs to acquire latches
[n1,s1,r91/1:/Table/76/{1/5-2}] waiting for read lock
[n1,s1,r91/1:/Table/76/{1/5-2}] read completed
=== SPAN START: count rows ===
cockroach.processorid: 1
cockroach.stat.aggregator.input.rows: 0
cockroach.stat.aggregator.mem.max: 0 B
cockroach.stat.aggregator.stalltime: 37µs
[n1,client=127.0.0.1:55340,user=root] execution ends
[n1,client=127.0.0.1:55340,user=root] rows affected: 1
[n1,client=127.0.0.1:55340,user=root] AutoCommit. err: <nil>
[n1,client=127.0.0.1:55340,user=root] releasing 1 tables
=== SPAN START: exec cmd: exec stmt ===
[n1,client=127.0.0.1:55340,user=root] [NoTxn pos:7134] executing ExecStmt: SET TRACING = off
[n1,client=127.0.0.1:55340,user=root] executing: SET TRACING = off in state: NoTxn
goroutine 457667 [running]:
runtime/debug.Stack(0xa7a358200, 0xc003937050, 0x3876e80)
/usr/local/go/src/runtime/debug/stack.go:24 +0xa7
github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x38e26a0, 0xc0014b4500, 0xc003937020)
/go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:49 +0x103
github.com/cockroachdb/cockroach/pkg/ccl/partitionccl.TestRepartitioning.func1(0xc0014b4500)
/go/src/github.com/cockroachdb/cockroach/pkg/ccl/partitionccl/partition_test.go:1356 +0xabc
testing.tRunner(0xc0014b4500, 0xc005a12120)
/usr/local/go/src/testing/testing.go:827 +0xbf
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:878 +0x35c
``` | non_process | ccl partitionccl testrepartitioning failed under stress sha parameters tags goflags parallel to repro try don t forget to check out a clean suitable branch and experiment with the stress invocation until the desired results present themselves for example using stress instead of stressrace and passing the p stressflag which controls concurrency scripts gceworker sh start scripts gceworker sh mosh cd go src github com cockroachdb cockroach stdbuf ol el make stressrace tests testrepartitioning pkg github com cockroachdb cockroach pkg ccl partitionccl testtimeout stressflags maxtime timeout tee tmp stress log failed test planning starts select generating optimizer plan added table data public single col range partitioning maxvalue to table collection query cache hit planning ends checking distributability will distribute plan true execution starts distributed engine span start consuming rows creating distsql plan with islocal false querying next range at table running distsql plan span start flow starting processors startables span start table reader cockroach processorid cockroach stat tablereader bytes read b cockroach stat tablereader input rows cockroach stat tablereader stalltime starting scan with limitbatches true scan table span start txn coordinator send span start dist sender send querying next range at table sending batch scan to sending request to local client span start cockroach roachpb internal batch scan executing requests read only path read has no clock uncertainty acquire latches waited to acquire latches waiting for read lock read completed span start count rows cockroach processorid cockroach stat aggregator input rows cockroach stat aggregator mem max b cockroach stat aggregator stalltime execution ends rows affected autocommit err releasing tables span start exec cmd exec stmt executing execstmt set tracing off executing set tracing off in state notxn goroutine runtime debug stack usr local go src runtime debug stack go github 
com cockroachdb cockroach pkg testutils succeedssoon go src github com cockroachdb cockroach pkg testutils soon go github com cockroachdb cockroach pkg ccl partitionccl testrepartitioning go src github com cockroachdb cockroach pkg ccl partitionccl partition test go testing trunner usr local go src testing testing go created by testing t run usr local go src testing testing go | 0 |
16,797 | 22,044,378,415 | IssuesEvent | 2022-05-29 21:02:11 | lynnandtonic/nestflix.fun | https://api.github.com/repos/lynnandtonic/nestflix.fun | closed | Add Ben from The Simpsons | suggested title in process | Please add as much of the following info as you can:
Title: Ben
Type (film/tv show): Daytime talk show.
Film or show in which it appears: The Simpsons Season 6 "Homer Badman"
Is the parent film/show streaming anywhere? Disney+
About when in the parent film/show does it appear? Season 6 episode 9 "Homer Badman", 13:16
Actual footage of the film/show can be seen (yes/no)? Yes
| 1.0 | Add Ben from The Simpsons - Please add as much of the following info as you can:
Title: Ben
Type (film/tv show): Daytime talk show.
Film or show in which it appears: The Simpsons Season 6 "Homer Badman"
Is the parent film/show streaming anywhere? Disney+
About when in the parent film/show does it appear? Season 6 episode 9 "Homer Badman", 13:16
Actual footage of the film/show can be seen (yes/no)? Yes
| process | add ben from the simpsons please add as much of the following info as you can title ben type film tv show daytime talk show film or show in which it appears the simpsons season homer badman is the parent film show streaming anywhere disney about when in the parent film show does it appear season episode homer badman actual footage of the film show can be seen yes no yes | 1 |
248,717 | 21,053,888,933 | IssuesEvent | 2022-03-31 23:44:13 | angular/angular | https://api.github.com/repos/angular/angular | closed | Can we have a disposable TestBed? | feature comp: testing | ### Which @angular/* package(s) are relevant/related to the feature request?
core
### Description
When writing tests I like to setup/teardown an environment between every test. This ensures my tests are not dependant on any sort of global state that could render my tests brittle and that the dependencies going into the test are what I expect them to be.
The second point is particularly painful as sometimes it can be unclear what is being provided to a component via the dependency injection framework.
Current usage of TestBed requires us to `TestBed.configureTestingModule` which is a shared global state used between test blocks.
It would be great if TestBed offered the ability to setup/teardown against a specified target `Window`, and provided an object to interact with, allowing us to create a new injector for each test block.
This would add versatility to how Angular could be tested, Jest in Node or Karma in browser, all you would just need to do is provide the test module bootstrapper an appropriate `Window` target where a module would construct around that.
### Proposed solution
Add the following methods to the `TestBed` interface
```typescript
interface TestBed {
platformTestingDynamic(options: { windowRef?: any }): TestPlatformRef
}
```
So given a simple component like this:
```typescript
@Component({
selector: 'app-root',
template: `<h1>{{ title }}</h1>`
})
export class AppComponent {
constructor(
@Inject('titleProvider') public title: string
) {}
}
```
I could then write a test suite that, at the start of each test will construct the module, render the template and tear it down such that there is no global state (except the document, which I can also choose to bootstrap/teardown if I wanted).
Further I could reference a local `windowRef` or a global `Window` - so when running in a browser I would be able to use a real window, however when running in node I could use a substitution like JSDOM.
```typescript
import { TestBed } from '@angular/core/testing'
import JSDOM from 'jsdom'
import { AppComponent } from './app.component'
const html = `<!DOCTYPE html><body></body>`
const { window: windowRef, document: documentRef } = new JSDOM(html);
const MOCK_TITLE = 'MOCK_TITLE'
describe('AppComponent', () => {
let testPlatformRef: TestPlatformRef
let testModuleRef: TestNgModuleRef
beforeEach(async () => {
// Create outlet for application
documentRef.body.appendChild(documentRef.createElement('app-root'))
// Construct around my specified Window reference
testPlatformRef = TestBed.platformTestingDynamic({ windowRef })
// Bootstrap a test module
testModuleRef = testPlatformRef.bootstrapModule({
declarations: [AppComponent],
providers: [
{ provide: 'titleProvider', useValue: MOCK_TITLE }
]
})
// Bootstrap component and wait for stable
await testModuleRef.bootstrapComponent(AppComponent)
})
afterEach(() => {
testPlatformRef.destroy()
// This will automatically delete the `app-root` element
testModuleRef.destroy()
})
it('Should render title', () => {
expect(documentRef.querySelector('h1').innerHTML).toBe(MOCK_TITLE)
})
})
```
### Alternatives considered
Just use the global state offered by TestBed like everyone else | 1.0 | Can we have a disposable TestBed? - ### Which @angular/* package(s) are relevant/related to the feature request?
core
### Description
When writing tests I like to setup/teardown an environment between every test. This ensures my tests are not dependant on any sort of global state that could render my tests brittle and that the dependencies going into the test are what I expect them to be.
The second point is particularly painful as sometimes it can be unclear what is being provided to a component via the dependency injection framework.
Current usage of TestBed requires us to `TestBed.configureTestingModule` which is a shared global state used between test blocks.
It would be great if TestBed offered the ability to setup/teardown against a specified target `Window`, and provided an object to interact with, allowing us to create a new injector for each test block.
This would add versatility to how Angular could be tested, Jest in Node or Karma in browser, all you would just need to do is provide the test module bootstrapper an appropriate `Window` target where a module would construct around that.
### Proposed solution
Add the following methods to the `TestBed` interface
```typescript
interface TestBed {
platformTestingDynamic(options: { windowRef?: any }): TestPlatformRef
}
```
So given a simple component like this:
```typescript
@Component({
selector: 'app-root',
template: `<h1>{{ title }}</h1>`
})
export class AppComponent {
constructor(
@Inject('titleProvider') public title: string
) {}
}
```
I could then write a test suite that, at the start of each test will construct the module, render the template and tear it down such that there is no global state (except the document, which I can also choose to bootstrap/teardown if I wanted).
Further I could reference a local `windowRef` or a global `Window` - so when running in a browser I would be able to use a real window, however when running in node I could use a substitution like JSDOM.
```typescript
import { TestBed } from '@angular/core/testing'
import JSDOM from 'jsdom'
import { AppComponent } from './app.component'
const html = `<!DOCTYPE html><body></body>`
const { window: windowRef, document: documentRef } = new JSDOM(html);
const MOCK_TITLE = 'MOCK_TITLE'
describe('AppComponent', () => {
let testPlatformRef: TestPlatformRef
let testModuleRef: TestNgModuleRef
beforeEach(async () => {
// Create outlet for application
documentRef.body.appendChild(documentRef.createElement('app-root'))
// Construct around my specified Window reference
testPlatformRef = TestBed.platformTestingDynamic({ windowRef })
// Bootstrap a test module
testModuleRef = testPlatformRef.bootstrapModule({
declarations: [AppComponent],
providers: [
{ provide: 'titleProvider', useValue: MOCK_TITLE }
]
})
// Bootstrap component and wait for stable
await testModuleRef.bootstrapComponent(AppComponent)
})
afterEach(() => {
testPlatformRef.destroy()
// This will automatically delete the `app-root` element
testModuleRef.destroy()
})
it('Should render title', () => {
expect(documentRef.querySelector('h1').innerHTML).toBe(MOCK_TITLE)
})
})
```
### Alternatives considered
Just use the global state offered by TestBed like everyone else | non_process | can we have a disposable testbed which angular package s are relevant related to the feature request core description when writing tests i like to setup teardown an environment between every test this ensures my tests are not dependant on any sort of global state that could render my tests brittle and that the dependencies going into the test are what i expect them to be the second point is particularly painful as sometimes it can be unclear what is being provided to a component via the dependency injection framework current usage of testbed requires us to testbed configuretestingmodule which is a shared global state used between test blocks it would be great if testbed offered the ability to setup teardown against a specified target window and provided an object to interact with allowing us to create a new injector for each test block this would add versatility to how angular could be tested jest in node or karma in browser all you would just need to do is provide the test module bootstrapper an appropriate window target where a module would construct around that proposed solution add the following methods to the testbed interface typescript interface testbed platformtestingdynamic options windowref any testplatformref so given a simple component like this typescript component selector app root template title export class appcomponent constructor inject titleprovider public title string i could then write a test suite that at the start of each test will construct the module render the template and tear it down such that there is no global state except the document which i can also choose to bootstrap teardown if i wanted further i could reference a local windowref or a global window so when running in a browser i would be able to use a real window however when running in node i could use a substitution like jsdom typescript import testbed from angular core testing import jsdom from 
jsdom import appcomponent from app component const html const window windowref document documentref new jsdom html const mock title mock title describe appcomponent let testplatformref testplatformref let testmoduleref testngmoduleref beforeeach async create outlet for application documentref body appendchild documentref createelement app root construct around my specified window reference testplatformref testbed platformtestingdynamic windowref bootstrap a test module testmoduleref testplatformref bootstrapmodule declarations providers provide titleprovider usevalue mock title bootstrap component and wait for stable await testmoduleref bootstrapcomponent appcomponent aftereach testplatformref destroy this will automatically delete the app root element testmoduleref destroy it should render title expect documentref queryselector innerhtml tobe mock title alternatives considered just use the global state offered by testbed like everyone else | 0 |
343,317 | 10,328,039,705 | IssuesEvent | 2019-09-02 08:35:41 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | outlook.live.com - see bug description | browser-focus-geckoview engine-gecko priority-critical | <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://outlook.live.com/owa/
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: always disconnected.
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | outlook.live.com - see bug description - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://outlook.live.com/owa/
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: always disconnected.
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_process | outlook live com see bug description url browser version firefox mobile operating system android tested another browser yes problem type something else description always disconnected steps to reproduce browser configuration none from with ❤️ | 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.