Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
20,227 | 26,825,787,177 | IssuesEvent | 2023-02-02 12:47:25 | apache/arrow-rs | https://api.github.com/repos/apache/arrow-rs | closed | Archery Failures with Latest Miniz Oxide | bug development-process | **Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
We are seeing Archery failures in CI that appear to originate in miniz_oxide.
This could be mere correlation, but miniz_oxide 0.6.3 was recently released and might be responsible for these failures; more investigation is needed.
**To Reproduce**
<!--
Steps to reproduce the behavior:
-->
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Additional context**
<!--
Add any other context about the problem here.
--> | 1.0 | Archery Failures with Latest Miniz Oxide - **Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
We are seeing Archery failures in CI that appear to originate in miniz_oxide.
This could be mere correlation, but miniz_oxide 0.6.3 was recently released and might be responsible for these failures; more investigation is needed.
**To Reproduce**
<!--
Steps to reproduce the behavior:
-->
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Additional context**
<!--
Add any other context about the problem here.
--> | process | archery failures with latest miniz oxide describe the bug a clear and concise description of what the bug is we are seeing archery failures in ci that appear to occur in miniz oxide it could be correlation but was recently released and might be responsible for these failures more investigation is needed to reproduce steps to reproduce the behavior expected behavior a clear and concise description of what you expected to happen additional context add any other context about the problem here | 1 |
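If 0.6.3 does turn out to be the culprit, one conventional stop-gap (a sketch, not something proposed in the issue itself) is to pin the crate to the last known-good release in `Cargo.toml`:

```toml
# Hypothetical stop-gap while miniz_oxide 0.6.3 is under suspicion:
# require the previous release exactly. (0.6.2 here is an assumption --
# simply the release preceding 0.6.3.)
[dependencies]
miniz_oxide = "=0.6.2"
```

When miniz_oxide is only a transitive dependency, `cargo update -p miniz_oxide --precise 0.6.2` achieves the same pin via the lockfile without editing `Cargo.toml`.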
52,469 | 12,971,424,359 | IssuesEvent | 2020-07-21 10:54:26 | enthought/traits | https://api.github.com/repos/enthought/traits | opened | Traits 6.1.1 release | component: build | I'm making a placeholder issue for the Traits 6.1.1 release, so that it can be added to the appropriate sprint.
There are currently 10 closed PRs and one open PR labelled as needing a backport to 6.1. The 10 closed PRs are backported in #1251.
I'm not making a checklist as an issue this time around; instead, see the wiki for the process for bugfix releases. | 1.0 | Traits 6.1.1 release - I'm making a placeholder issue for the Traits 6.1.1 release, so that it can be added to the appropriate sprint.
There are currently 10 closed PRs and one open PR labelled as needing a backport to 6.1. The 10 closed PRs are backported in #1251.
I'm not making a checklist as an issue this time around; instead, see the wiki for the process for bugfix releases. | non_process | traits release i m making a placeholder issue for the traits release so that it can be added to the appropriate sprint there are currently closed and one open pr labelled as needing backport to the closed prs are backported in i m not making a checklist as an issue this time around instead see the wiki for the process for bugfix releases | 0 |
10,922 | 13,724,687,356 | IssuesEvent | 2020-10-03 15:16:02 | MobileOrg/mobileorg | https://api.github.com/repos/MobileOrg/mobileorg | closed | Switch SwiftyDropbox dependency to Swift Package Manager | development process | Carthage has a problem right now with Xcode 12 support (due to Apple Silicon support, aka `arm64` for macOS targets), and it breaks compilation for all packages (see [#3019](https://github.com/Carthage/Carthage/issues/3019)). They are working on a solution, XCFramework support, but it will take a while.
On the other hand, SwiftyDropbox just added Swift Package Manager support ([#252](https://github.com/dropbox/SwiftyDropbox/issues/252)). SPM is easier to maintain and, perhaps, the future of dependency management for Swift projects. Let's switch at some point.
## To do
- [x] Remove Carthage support
- [x] Add SPM support
- [x] Update the documentation
- [x] Update Travis CI configuration
- [x] Check CI builds
- [x] Check that it works with Xcode 11.7 as well | 1.0 | Switch SwiftyDropbox dependency to Swift Package Manager - Carthage has a problem right now with Xcode 12 support (due to Apple Silicon support, aka `arm64` for macOS targets), and it breaks compilation for all packages (see [#3019](https://github.com/Carthage/Carthage/issues/3019)). They are working on a solution, XCFramework support, but it will take a while.
On the other hand, SwiftyDropbox just added Swift Package Manager support ([#252](https://github.com/dropbox/SwiftyDropbox/issues/252)). SPM is easier to maintain and, perhaps, the future of dependency management for Swift projects. Let's switch at some point.
## To do
- [x] Remove Carthage support
- [x] Add SPM support
- [x] Update the documentation
- [x] Update Travis CI configuration
- [x] Check CI builds
- [x] Check that it works with Xcode 11.7 as well | process | switch swiftydropbox dependency to swift package manager carthage has a problem right now with xcode support due to apple silicon support aka for macos targets and it breaks the compilation for all package check the they are working on a solution xcframework support but it will take a while on the other hand swiftydropbox just added swift package manager support it is easier to maintain and perhaps this is the future of dependency management for swift projects let s switch at some point to do remove carthage support add spm support update the documentation update travis ci configuration check ci builds check that it works with xcode as well | 1 |
131,454 | 18,288,721,411 | IssuesEvent | 2021-10-05 13:11:52 | carbon-design-system/carbon-for-ibm-dotcom | https://api.github.com/repos/carbon-design-system/carbon-for-ibm-dotcom | closed | [Video card] React: Change video card to display video title as Card headline, not card copy | Feature request package: react dev priority: medium Needs design approval | #### User Story
<!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} -->
> As a `[user role below]`:
Carbon for ibm.com developer
> I need to:
create/change the `video card`
> so that I can:
provide the ibm.com adopter developers components they can use to build ibm.com web pages
#### Additional information
<!-- {{Please provide any additional information or resources for reference}} -->
- Story within Storybook with corresponding knobs
- Utilize Carbon
- **See the Epic for the Design and Functional specs information**
- React Visual QA testing issue (#6520 )
- Prod QA testing issue (#6523 )
#### Acceptance criteria
- [ ] Built as a pure React component/variant
- [ ] Include README for the react component and corresponding styles
- [ ] Add any necessary stable selectors
- [ ] Create codesandbox example under `/packages/react/examples/codesandbox` and include in README
- [ ] Minimum 80% unit test coverage
- [ ] Update the Carbon for ibm.com website component [data file](https://github.com/carbon-design-system/carbon-for-ibm-dotcom-website/blob/master/src/data/components.json) to be sure the web site Component Status and Storybook links are correct
- [ ] Use the [Visual QA checklist](https://github.com/carbon-design-system/carbon-for-ibm-dotcom/wiki/Definition-of-done-(Visual-QA-checklist)) to verify design quality
- [ ] If a design is provided, the Designer is included as a Reviewer in the Pull Request
- [ ] Provide a direct link to the deploy preview for the designer in the Pull Request description
- [ ] A comment is posted in the Prod QA issue, tagging Praveen when development is finished
| 1.0 | [Video card] React: Change video card to display video title as Card headline, not card copy - #### User Story
<!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} -->
> As a `[user role below]`:
Carbon for ibm.com developer
> I need to:
create/change the `video card`
> so that I can:
provide the ibm.com adopter developers components they can use to build ibm.com web pages
#### Additional information
<!-- {{Please provide any additional information or resources for reference}} -->
- Story within Storybook with corresponding knobs
- Utilize Carbon
- **See the Epic for the Design and Functional specs information**
- React Visual QA testing issue (#6520 )
- Prod QA testing issue (#6523 )
#### Acceptance criteria
- [ ] Built as a pure React component/variant
- [ ] Include README for the react component and corresponding styles
- [ ] Add any necessary stable selectors
- [ ] Create codesandbox example under `/packages/react/examples/codesandbox` and include in README
- [ ] Minimum 80% unit test coverage
- [ ] Update the Carbon for ibm.com website component [data file](https://github.com/carbon-design-system/carbon-for-ibm-dotcom-website/blob/master/src/data/components.json) to be sure the web site Component Status and Storybook links are correct
- [ ] Use the [Visual QA checklist](https://github.com/carbon-design-system/carbon-for-ibm-dotcom/wiki/Definition-of-done-(Visual-QA-checklist)) to verify design quality
- [ ] If a design is provided, the Designer is included as a Reviewer in the Pull Request
- [ ] Provide a direct link to the deploy preview for the designer in the Pull Request description
- [ ] A comment is posted in the Prod QA issue, tagging Praveen when development is finished
| non_process | react change video card to display video title as card headline not card copy user story as a carbon for ibm com developer i need to create change the video card so that i can provide the ibm com adopter developers components they can use to build ibm com web pages additional information story within storybook with corresponding knobs utilize carbon see the epic for the design and functional specs information react visual qa testing issue prod qa testing issue acceptance criteria built as a pure react component variant include readme for the react component and corresponding styles add any necessary stable selectors create codesandbox example under packages react examples codesandbox and include in readme minimum unit test coverage update the carbon for ibm com website component to be sure the web site component status and storybook links are correct use the to verify design quality if a design is provided the designer is included as a reviewer in the pull request provide a direct link to the deploy preview for the designer in the pull request description a comment is posted in the prod qa issue tagging praveen when development is finished | 0 |
104 | 2,539,972,607 | IssuesEvent | 2015-01-27 18:38:40 | tinkerpop/tinkerpop3 | https://api.github.com/repos/tinkerpop/tinkerpop3 | closed | Throw exceptions when things don't make sense | enhancement process | From our earlier discussion in IM:
```
gremlin> g.V().has(label, "person").values("age").fold().submit(g.compute())
==>[29, 27, 32, 35, 32]
gremlin> g.V().has(label, "person").values("age").fold().map {it.get().mean()}.submit(g.compute())
==>29.0
==>27.0
gremlin> g.V().has(label, "person").values("age").fold().map {1}.submit(g.compute())
==>1
gremlin> g.V().has(label, "person").values("age").fold().map {1}.submit(g.compute())
==>1
==>1
gremlin>
```
> IllegalArgumentException: the provided traversal will not execute correctly in OLAP
And here's another one that doesn't even work in OLTP (it's not clear whether it should):
```
gremlin> g.V().has(label, "person").values("age").union(__.count(), __.sum())
==>0
==>0.0
``` | 1.0 | Throw exceptions when things don't make sense - From our earlier discussion in IM:
```
gremlin> g.V().has(label, "person").values("age").fold().submit(g.compute())
==>[29, 27, 32, 35, 32]
gremlin> g.V().has(label, "person").values("age").fold().map {it.get().mean()}.submit(g.compute())
==>29.0
==>27.0
gremlin> g.V().has(label, "person").values("age").fold().map {1}.submit(g.compute())
==>1
gremlin> g.V().has(label, "person").values("age").fold().map {1}.submit(g.compute())
==>1
==>1
gremlin>
```
> IllegalArgumentException: the provided traversal will not execute correctly in OLAP
And here's another one that doesn't even work in OLTP (it's not clear whether it should):
```
gremlin> g.V().has(label, "person").values("age").union(__.count(), __.sum())
==>0
==>0.0
``` | process | throw exceptions when things don t make sense from our earlier discussion in im gremlin g v has label person values age fold submit g compute gremlin g v has label person values age fold map it get mean submit g compute gremlin g v has label person values age fold map submit g compute gremlin g v has label person values age fold map submit g compute gremlin illegalargumentexception the provided traversal will not execute correctly in olap and here s another one that doesn t even work in oltp not sure if it should work gremlin g v has label person values age union count sum | 1 |
80,264 | 7,743,412,850 | IssuesEvent | 2018-05-29 12:46:19 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | opened | Test com.hazelcast.client.topic.Issue9766Test.serverRestartWhenReliableTopicListenerRegistered failed | Type: Test-Failure | During a PR run for the maintenance-3.x branch, the following test failure occurred:
```
Regression
com.hazelcast.client.topic.Issue9766Test.serverRestartWhenReliableTopicListenerRegistered
Failing for the past 1 build (Since Unstable#15809 )
Took 2 min 4 sec.
Error Message
CountDownLatch failed to complete within 120 seconds, count left: 1
Stacktrace
java.lang.AssertionError: CountDownLatch failed to complete within 120 seconds, count left: 1
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at com.hazelcast.test.HazelcastTestSupport.assertOpenEventually(HazelcastTestSupport.java:1131)
at com.hazelcast.test.HazelcastTestSupport.assertOpenEventually(HazelcastTestSupport.java:1116)
at com.hazelcast.test.HazelcastTestSupport.assertOpenEventually(HazelcastTestSupport.java:1104)
at com.hazelcast.client.topic.Issue9766Test.serverRestartWhenReliableTopicListenerRegistered(Issue9766Test.java:96)
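The failure above is the common wait-and-assert pattern: block on a `CountDownLatch` with a timeout and fail with the remaining count if it never opens. A minimal Python sketch of the same pattern using `threading.Event` (the names are illustrative, not Hazelcast's actual API):

```python
import threading

def assert_open_eventually(event: threading.Event, timeout_s: float) -> None:
    """Fail if `event` is not set within `timeout_s` seconds.

    A sketch of the wait-and-assert pattern behind Hazelcast's
    assertOpenEventually helper; names here are illustrative only.
    """
    if not event.wait(timeout_s):  # Event.wait() returns False on timeout
        raise AssertionError(
            f"event failed to complete within {timeout_s} seconds")

# A latch that opens immediately: the assertion returns without blocking.
event = threading.Event()
event.set()
assert_open_eventually(event, 5)

# A latch that never opens fails fast with the timeout message.
stuck = threading.Event()
try:
    assert_open_eventually(stuck, 0.05)
except AssertionError as e:
    print(e)
```

In the test log the latch presumably never reached zero because the listener was not re-registered after the member restart, so the helper timed out at 120 seconds with `count left: 1`.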
Started Running Test: serverRestartWhenReliableTopicListenerRegistered
10:36:45,188 INFO |serverRestartWhenReliableTopicListenerRegistered| - [XmlConfigLocator] serverRestartWhenReliableTopicListenerRegistered - Loading 'hazelcast-default.xml' from classpath.
10:36:45,257 INFO |serverRestartWhenReliableTopicListenerRegistered| - [system] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Hazelcast 3.11-SNAPSHOT (20180529 - fd1375e) starting at [127.0.0.1]:5005
10:36:45,257 INFO |serverRestartWhenReliableTopicListenerRegistered| - [system] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Copyright (c) 2008-2018, Hazelcast, Inc. All Rights Reserved.
10:36:45,257 INFO |serverRestartWhenReliableTopicListenerRegistered| - [system] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Configured Hazelcast Serialization version: 1
10:36:45,257 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Node] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] A non-empty group password is configured for the Hazelcast member. Starting with Hazelcast version 3.8.2, members with the same group name, but with different group passwords (that do not use authentication) form a cluster. The group password configuration will be removed completely in a future release.
10:36:45,272 INFO |serverRestartWhenReliableTopicListenerRegistered| - [BackpressureRegulator] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Backpressure is disabled
10:36:45,272 INFO |serverRestartWhenReliableTopicListenerRegistered| - [InboundResponseHandlerSupplier] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Running with 2 response threads
10:36:45,369 INFO |serverRestartWhenReliableTopicListenerRegistered| - [OperationExecutorImpl] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Starting 72 partition threads and 37 generic threads (1 dedicated for priority tasks)
10:36:45,614 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Diagnostics] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
10:36:45,614 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5005 is STARTING
10:36:45,615 INFO |serverRestartWhenReliableTopicListenerRegistered| - [system] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Cluster version set to 3.11
10:36:45,615 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClusterService] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT]
Members {size:1, ver:1} [
Member [127.0.0.1]:5005 - 489cfc08-a504-4213-8698-3676c8406eb4 this
]
10:36:45,625 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5005 is STARTED
10:36:45,629 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - hz.client_9 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is STARTING
10:36:45,632 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HealthMonitor] hz._hzInstance_5_dev.HealthMonitor - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] processors=72, physical.memory.total=377.6G, physical.memory.free=105.4G, swap.space.total=4.0G, swap.space.free=3.7G, heap.memory.used=657.5M, heap.memory.free=341.5M, heap.memory.total=999.0M, heap.memory.max=1.8G, heap.memory.used/total=65.81%, heap.memory.used/max=36.11%, minor.gc.count=7, minor.gc.time=1024ms, major.gc.count=2, major.gc.time=1002ms, load.process=6.25%, load.system=93.33%, load.systemAverage=655.02, thread.count=133, thread.peakCount=847, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=0, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
10:36:45,702 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientInvocationService] serverRestartWhenReliableTopicListenerRegistered - hz.client_9 [dev] [3.11-SNAPSHOT] Running with 2 response threads
10:36:45,717 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - hz.client_9 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is STARTED
10:36:45,757 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_9.cluster- - hz.client_9 [dev] [3.11-SNAPSHOT] Trying to connect to [127.0.0.1]:5005 as owner member
10:36:45,759 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HazelcastClient] hz.client_9.internal-2 - Created connection to endpoint: [127.0.0.1]:5005, connection: MockedClientConnection{localAddress=[127.0.0.1]:40026, super=ClientConnection{alive=true, connectionId=1, channel=null, remoteEndpoint=null, lastReadTime=never, lastWriteTime=never, closedTime=never, connected server version=null}}
10:36:45,775 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_5_dev.priority-generic-operation.thread-0 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Processing owner authentication with principal ClientPrincipal{uuid='4084fb8c-8a27-4daf-8786-eb2e1d18329d', ownerUuid='489cfc08-a504-4213-8698-3676c8406eb4'}
10:36:45,776 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [ClientReAuthOperation] hz._hzInstance_5_dev.cached.thread-3 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Client authenticated 4084fb8c-8a27-4daf-8786-eb2e1d18329d, owner 489cfc08-a504-4213-8698-3676c8406eb4
10:36:45,796 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_5_dev.async.thread-2 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Processed owner authentication with principal ClientPrincipal{uuid='4084fb8c-8a27-4daf-8786-eb2e1d18329d', ownerUuid='489cfc08-a504-4213-8698-3676c8406eb4'}
10:36:45,796 INFO |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_5_dev.async.thread-2 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Received auth from MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40026, localEndpoint = [127.0.0.1]:5005, connectionId = 1}, successfully authenticated, principal: ClientPrincipal{uuid='4084fb8c-8a27-4daf-8786-eb2e1d18329d', ownerUuid='489cfc08-a504-4213-8698-3676c8406eb4'}, owner connection: true, client version: 3.11-SNAPSHOT
10:36:45,798 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_9.internal-3 - hz.client_9 [dev] [3.11-SNAPSHOT] Setting MockedClientConnection{localAddress=[127.0.0.1]:40026, super=ClientConnection{alive=true, connectionId=1, channel=null, remoteEndpoint=[127.0.0.1]:5005, lastReadTime=2018-05-29 10:36:45.797, lastWriteTime=2018-05-29 10:36:45.774, closedTime=never, connected server version=3.11-SNAPSHOT}} as owner with principal ClientPrincipal{uuid='4084fb8c-8a27-4daf-8786-eb2e1d18329d', ownerUuid='489cfc08-a504-4213-8698-3676c8406eb4'}
10:36:45,798 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_9.internal-3 - hz.client_9 [dev] [3.11-SNAPSHOT] Authenticated with server [127.0.0.1]:5005, server version:3.11-SNAPSHOT Local address: /127.0.0.1:40026
10:36:45,805 INFO |serverRestartWhenReliableTopicListenerRegistered| - [PartitionStateManager] hz._hzInstance_5_dev.client.thread-1 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Initializing cluster partition table arrangement...
10:36:45,822 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientMembershipListener] hz.client_9.event-73 - hz.client_9 [dev] [3.11-SNAPSHOT]
Members [1] {
Member [127.0.0.1]:5005 - 489cfc08-a504-4213-8698-3676c8406eb4
}
10:36:45,822 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] hz.client_9.cluster- - hz.client_9 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is CLIENT_CONNECTED
10:36:45,826 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Diagnostics] serverRestartWhenReliableTopicListenerRegistered - hz.client_9 [dev] [3.11-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
10:36:45,829 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - hz.client_10 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is STARTING
10:36:45,887 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientInvocationService] serverRestartWhenReliableTopicListenerRegistered - hz.client_10 [dev] [3.11-SNAPSHOT] Running with 2 response threads
10:36:45,927 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - hz.client_10 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is STARTED
10:36:45,930 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_10.cluster- - hz.client_10 [dev] [3.11-SNAPSHOT] Trying to connect to [127.0.0.1]:5005 as owner member
10:36:45,934 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HazelcastClient] hz.client_10.internal-3 - Created connection to endpoint: [127.0.0.1]:5005, connection: MockedClientConnection{localAddress=[127.0.0.1]:40027, super=ClientConnection{alive=true, connectionId=1, channel=null, remoteEndpoint=null, lastReadTime=never, lastWriteTime=never, closedTime=never, connected server version=null}}
10:36:45,935 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_5_dev.generic-operation.thread-1 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Processing owner authentication with principal ClientPrincipal{uuid='a6ae5613-e218-4a14-a0fb-9492a0aee4f8', ownerUuid='489cfc08-a504-4213-8698-3676c8406eb4'}
10:36:45,935 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [ClientReAuthOperation] hz._hzInstance_5_dev.cached.thread-3 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Client authenticated a6ae5613-e218-4a14-a0fb-9492a0aee4f8, owner 489cfc08-a504-4213-8698-3676c8406eb4
10:36:45,936 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_5_dev.async.thread-4 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Processed owner authentication with principal ClientPrincipal{uuid='a6ae5613-e218-4a14-a0fb-9492a0aee4f8', ownerUuid='489cfc08-a504-4213-8698-3676c8406eb4'}
10:36:45,936 INFO |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_5_dev.async.thread-4 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Received auth from MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40027, localEndpoint = [127.0.0.1]:5005, connectionId = 1}, successfully authenticated, principal: ClientPrincipal{uuid='a6ae5613-e218-4a14-a0fb-9492a0aee4f8', ownerUuid='489cfc08-a504-4213-8698-3676c8406eb4'}, owner connection: true, client version: 3.11-SNAPSHOT
10:36:45,937 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_10.internal-1 - hz.client_10 [dev] [3.11-SNAPSHOT] Setting MockedClientConnection{localAddress=[127.0.0.1]:40027, super=ClientConnection{alive=true, connectionId=1, channel=null, remoteEndpoint=[127.0.0.1]:5005, lastReadTime=2018-05-29 10:36:45.936, lastWriteTime=2018-05-29 10:36:45.934, closedTime=never, connected server version=3.11-SNAPSHOT}} as owner with principal ClientPrincipal{uuid='a6ae5613-e218-4a14-a0fb-9492a0aee4f8', ownerUuid='489cfc08-a504-4213-8698-3676c8406eb4'}
10:36:45,937 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_10.internal-1 - hz.client_10 [dev] [3.11-SNAPSHOT] Authenticated with server [127.0.0.1]:5005, server version:3.11-SNAPSHOT Local address: /127.0.0.1:40027
10:36:45,944 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientMembershipListener] hz.client_10.event-76 - hz.client_10 [dev] [3.11-SNAPSHOT]
Members [1] {
Member [127.0.0.1]:5005 - 489cfc08-a504-4213-8698-3676c8406eb4
}
10:36:45,960 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] hz.client_10.cluster- - hz.client_10 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is CLIENT_CONNECTED
10:36:45,970 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Diagnostics] serverRestartWhenReliableTopicListenerRegistered - hz.client_10 [dev] [3.11-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
10:36:46,056 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5005 is SHUTTING_DOWN
10:36:46,056 WARN |serverRestartWhenReliableTopicListenerRegistered| - [Node] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Terminating forcefully...
10:36:46,056 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Node] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Shutting down connection manager...
10:36:46,056 WARN |serverRestartWhenReliableTopicListenerRegistered| - [TestClientRegistry$MockedNodeConnection] serverRestartWhenReliableTopicListenerRegistered - Server connection closed: null
10:36:46,056 INFO |serverRestartWhenReliableTopicListenerRegistered| - [MockConnectionManager] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:40027, connection: MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40027, localEndpoint = [127.0.0.1]:5005, connectionId = 1}
10:36:46,056 WARN |serverRestartWhenReliableTopicListenerRegistered| - [TestClientRegistry$MockedNodeConnection] serverRestartWhenReliableTopicListenerRegistered - Server connection closed: null
10:36:46,056 INFO |serverRestartWhenReliableTopicListenerRegistered| - [MockConnectionManager] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:40026, connection: MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40026, localEndpoint = [127.0.0.1]:5005, connectionId = 1}
10:36:46,057 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Node] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Shutting down node engine...
10:36:46,080 WARN |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnection] pool-54-thread-1 - hz.client_10 [dev] [3.11-SNAPSHOT] MockedClientConnection{localAddress=[127.0.0.1]:40027, super=ClientConnection{alive=false, connectionId=1, channel=null, remoteEndpoint=[127.0.0.1]:5005, lastReadTime=2018-05-29 10:36:46.031, lastWriteTime=2018-05-29 10:36:46.010, closedTime=2018-05-29 10:36:46.057, connected server version=3.11-SNAPSHOT}} closed. Reason: com.hazelcast.spi.exception.TargetDisconnectedException[Mocked Remote socket closed]
com.hazelcast.spi.exception.TargetDisconnectedException: Mocked Remote socket closed
at com.hazelcast.client.test.TestClientRegistry$MockedClientConnection$4.run(TestClientRegistry.java:348) [test-classes/:?]
at com.hazelcast.client.test.TwoWayBlockableExecutor$BlockableRunnable.run(TwoWayBlockableExecutor.java:97) [test-classes/:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
10:36:46,150 INFO |serverRestartWhenReliableTopicListenerRegistered| - [NodeExtension] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Destroying node NodeExtension.
10:36:46,159 WARN |serverRestartWhenReliableTopicListenerRegistered| - [TwoWayBlockableExecutor] hz.client_9.internal-3 - Dropping outgoing runnable since other end closed. Runnable message ClientMessage{connection=null, length=22, correlationId=12, operation=null, messageType=190a, partitionId=98, isComplete=false, isRetryable=false, isEvent=false, writeOffset=0}, MockedClientConnection{localAddress=[127.0.0.1]:40026, super=ClientConnection{alive=false, connectionId=1, channel=null, remoteEndpoint=[127.0.0.1]:5005, lastReadTime=2018-05-29 10:36:46.053, lastWriteTime=2018-05-29 10:36:46.062, closedTime=2018-05-29 10:36:46.062, connected server version=3.11-SNAPSHOT}}
10:36:46,161 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Node] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Hazelcast Shutdown is completed in 105 ms.
10:36:46,162 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5005 is SHUTDOWN
10:36:46,163 INFO |serverRestartWhenReliableTopicListenerRegistered| - [XmlConfigLocator] serverRestartWhenReliableTopicListenerRegistered - Loading 'hazelcast-default.xml' from classpath.
10:36:46,158 WARN |serverRestartWhenReliableTopicListenerRegistered| - [TwoWayBlockableExecutor] pool-54-thread-1 - Dropping outgoing runnable since other end closed. Client Closed EOF. MockedClientConnection{localAddress=[127.0.0.1]:40027, super=ClientConnection{alive=false, connectionId=1, channel=null, remoteEndpoint=[127.0.0.1]:5005, lastReadTime=2018-05-29 10:36:46.031, lastWriteTime=2018-05-29 10:36:46.010, closedTime=2018-05-29 10:36:46.057, connected server version=3.11-SNAPSHOT}}
10:36:46,163 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] pool-54-thread-1 - hz.client_10 [dev] [3.11-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5005, connection: MockedClientConnection{localAddress=[127.0.0.1]:40027, super=ClientConnection{alive=false, connectionId=1, channel=null, remoteEndpoint=[127.0.0.1]:5005, lastReadTime=2018-05-29 10:36:46.031, lastWriteTime=2018-05-29 10:36:46.010, closedTime=2018-05-29 10:36:46.057, connected server version=3.11-SNAPSHOT}}
10:36:46,163 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] hz.client_10.cluster- - hz.client_10 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is CLIENT_DISCONNECTED
10:36:46,164 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_10.cluster- - hz.client_10 [dev] [3.11-SNAPSHOT] Trying to connect to [127.0.0.1]:5005 as owner member
10:36:46,167 WARN |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_10.cluster- - hz.client_10 [dev] [3.11-SNAPSHOT] Exception during initial connection to [127.0.0.1]:5005, exception com.hazelcast.core.HazelcastException: java.io.IOException: Can not connected to [127.0.0.1]:5005: instance does not exist
10:36:46,167 WARN |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_10.cluster- - hz.client_10 [dev] [3.11-SNAPSHOT] Unable to get alive cluster connection, try in 2997 ms later, attempt 1 of 2147483647.
10:36:46,063 WARN |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnection] pool-52-thread-1 - hz.client_9 [dev] [3.11-SNAPSHOT] MockedClientConnection{localAddress=[127.0.0.1]:40026, super=ClientConnection{alive=false, connectionId=1, channel=null, remoteEndpoint=[127.0.0.1]:5005, lastReadTime=2018-05-29 10:36:46.053, lastWriteTime=2018-05-29 10:36:46.062, closedTime=2018-05-29 10:36:46.062, connected server version=3.11-SNAPSHOT}} closed. Reason: com.hazelcast.spi.exception.TargetDisconnectedException[Mocked Remote socket closed]
com.hazelcast.spi.exception.TargetDisconnectedException: Mocked Remote socket closed
at com.hazelcast.client.test.TestClientRegistry$MockedClientConnection$4.run(TestClientRegistry.java:348) [test-classes/:?]
at com.hazelcast.client.test.TwoWayBlockableExecutor$BlockableRunnable.run(TwoWayBlockableExecutor.java:97) [test-classes/:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
10:36:46,195 WARN |serverRestartWhenReliableTopicListenerRegistered| - [TwoWayBlockableExecutor] pool-52-thread-1 - Dropping outgoing runnable since other end closed. Client Closed EOF. MockedClientConnection{localAddress=[127.0.0.1]:40026, super=ClientConnection{alive=false, connectionId=1, channel=null, remoteEndpoint=[127.0.0.1]:5005, lastReadTime=2018-05-29 10:36:46.053, lastWriteTime=2018-05-29 10:36:46.062, closedTime=2018-05-29 10:36:46.062, connected server version=3.11-SNAPSHOT}}
10:36:46,195 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] pool-52-thread-1 - hz.client_9 [dev] [3.11-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5005, connection: MockedClientConnection{localAddress=[127.0.0.1]:40026, super=ClientConnection{alive=false, connectionId=1, channel=null, remoteEndpoint=[127.0.0.1]:5005, lastReadTime=2018-05-29 10:36:46.053, lastWriteTime=2018-05-29 10:36:46.062, closedTime=2018-05-29 10:36:46.062, connected server version=3.11-SNAPSHOT}}
10:36:46,196 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] hz.client_9.cluster- - hz.client_9 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is CLIENT_DISCONNECTED
10:36:46,196 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_9.cluster- - hz.client_9 [dev] [3.11-SNAPSHOT] Trying to connect to [127.0.0.1]:5005 as owner member
10:36:46,236 WARN |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_9.cluster- - hz.client_9 [dev] [3.11-SNAPSHOT] Exception during initial connection to [127.0.0.1]:5005, exception com.hazelcast.core.HazelcastException: java.io.IOException: Can not connected to [127.0.0.1]:5005: instance does not exist
10:36:46,236 WARN |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_9.cluster- - hz.client_9 [dev] [3.11-SNAPSHOT] Unable to get alive cluster connection, try in 2960 ms later, attempt 1 of 2147483647.
10:36:46,259 WARN |serverRestartWhenReliableTopicListenerRegistered| - [ClientReliableTopicProxy] hz.client_9.user-1 - hz.client_9 [dev] [3.11-SNAPSHOT] Terminating MessageListener com.hazelcast.client.topic.Issue9766Test$1@61a9a9ff on topic: foo. Reason: Unhandled exception, message: Mocked Remote socket closed
com.hazelcast.spi.exception.TargetDisconnectedException: Mocked Remote socket closed
at com.hazelcast.client.spi.impl.AbstractClientInvocationService$CleanResourcesTask.notifyException(AbstractClientInvocationService.java:230) ~[classes/:?]
at com.hazelcast.client.spi.impl.AbstractClientInvocationService$CleanResourcesTask.run(AbstractClientInvocationService.java:225) ~[classes/:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_171]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[?:1.8.0_171]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_171]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) ~[?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:64) [hazelcast-3.11-SNAPSHOT.jar:3.11-SNAPSHOT]
at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:80) [hazelcast-3.11-SNAPSHOT.jar:3.11-SNAPSHOT]
Caused by: com.hazelcast.spi.exception.TargetDisconnectedException: Mocked Remote socket closed
at com.hazelcast.client.test.TestClientRegistry$MockedClientConnection$4.run(TestClientRegistry.java:348) ~[test-classes/:?]
at com.hazelcast.client.test.TwoWayBlockableExecutor$BlockableRunnable.run(TwoWayBlockableExecutor.java:97) ~[test-classes/:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_171]
10:36:46,669 INFO |serverRestartWhenReliableTopicListenerRegistered| - [system] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Hazelcast 3.11-SNAPSHOT (20180529 - fd1375e) starting at [127.0.0.1]:5006
10:36:46,669 INFO |serverRestartWhenReliableTopicListenerRegistered| - [system] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Copyright (c) 2008-2018, Hazelcast, Inc. All Rights Reserved.
10:36:46,669 INFO |serverRestartWhenReliableTopicListenerRegistered| - [system] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Configured Hazelcast Serialization version: 1
10:36:46,669 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Node] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] A non-empty group password is configured for the Hazelcast member. Starting with Hazelcast version 3.8.2, members with the same group name, but with different group passwords (that do not use authentication) form a cluster. The group password configuration will be removed completely in a future release.
10:36:46,696 INFO |serverRestartWhenReliableTopicListenerRegistered| - [BackpressureRegulator] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Backpressure is disabled
10:36:46,696 INFO |serverRestartWhenReliableTopicListenerRegistered| - [InboundResponseHandlerSupplier] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Running with 2 response threads
10:36:46,749 INFO |serverRestartWhenReliableTopicListenerRegistered| - [OperationExecutorImpl] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Starting 72 partition threads and 37 generic threads (1 dedicated for priority tasks)
10:36:46,940 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Diagnostics] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
10:36:46,941 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5006 is STARTING
10:36:46,941 INFO |serverRestartWhenReliableTopicListenerRegistered| - [system] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Cluster version set to 3.11
10:36:46,942 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClusterService] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT]
Members {size:1, ver:1} [
Member [127.0.0.1]:5006 - 577059d8-1e86-4c6b-be25-c35b17519f00 this
]
10:36:46,942 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5006 is STARTED
10:36:46,953 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HealthMonitor] hz._hzInstance_6_dev.HealthMonitor - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] processors=72, physical.memory.total=377.6G, physical.memory.free=105.4G, swap.space.total=4.0G, swap.space.free=3.7G, heap.memory.used=46.5M, heap.memory.free=956.0M, heap.memory.total=1002.5M, heap.memory.max=1.8G, heap.memory.used/total=4.63%, heap.memory.used/max=2.55%, minor.gc.count=8, minor.gc.time=1108ms, major.gc.count=2, major.gc.time=1002ms, load.process=0.00%, load.system=93.75%, load.systemAverage=655.02, thread.count=165, thread.peakCount=847, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=0, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
10:36:49,165 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_10.cluster- - hz.client_10 [dev] [3.11-SNAPSHOT] Trying to connect to [127.0.0.1]:5006 as owner member
10:36:49,172 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HazelcastClient] hz.client_10.internal-3 - Created connection to endpoint: [127.0.0.1]:5006, connection: MockedClientConnection{localAddress=[127.0.0.1]:40028, super=ClientConnection{alive=true, connectionId=2, channel=null, remoteEndpoint=null, lastReadTime=never, lastWriteTime=never, closedTime=never, connected server version=null}}
10:36:49,174 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_6_dev.generic-operation.thread-0 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Processing owner authentication with principal ClientPrincipal{uuid='a6ae5613-e218-4a14-a0fb-9492a0aee4f8', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}
10:36:49,182 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [ClientReAuthOperation] hz._hzInstance_6_dev.cached.thread-1 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Client authenticated a6ae5613-e218-4a14-a0fb-9492a0aee4f8, owner 577059d8-1e86-4c6b-be25-c35b17519f00
10:36:49,183 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_6_dev.async.thread-2 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Processed owner authentication with principal ClientPrincipal{uuid='a6ae5613-e218-4a14-a0fb-9492a0aee4f8', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}
10:36:49,183 INFO |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_6_dev.async.thread-2 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Received auth from MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40028, localEndpoint = [127.0.0.1]:5006, connectionId = 2}, successfully authenticated, principal: ClientPrincipal{uuid='a6ae5613-e218-4a14-a0fb-9492a0aee4f8', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}, owner connection: true, client version: 3.11-SNAPSHOT
10:36:49,186 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_10.internal-1 - hz.client_10 [dev] [3.11-SNAPSHOT] Setting MockedClientConnection{localAddress=[127.0.0.1]:40028, super=ClientConnection{alive=true, connectionId=2, channel=null, remoteEndpoint=[127.0.0.1]:5006, lastReadTime=2018-05-29 10:36:49.184, lastWriteTime=2018-05-29 10:36:49.173, closedTime=never, connected server version=3.11-SNAPSHOT}} as owner with principal ClientPrincipal{uuid='a6ae5613-e218-4a14-a0fb-9492a0aee4f8', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}
10:36:49,186 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_10.internal-1 - hz.client_10 [dev] [3.11-SNAPSHOT] Authenticated with server [127.0.0.1]:5006, server version:3.11-SNAPSHOT Local address: /127.0.0.1:40028
10:36:49,198 INFO |serverRestartWhenReliableTopicListenerRegistered| - [PartitionStateManager] hz._hzInstance_6_dev.client.thread-1 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Initializing cluster partition table arrangement...
10:36:49,200 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_9.cluster- - hz.client_9 [dev] [3.11-SNAPSHOT] Trying to connect to [127.0.0.1]:5006 as owner member
10:36:49,200 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HazelcastClient] hz.client_9.internal-1 - Created connection to endpoint: [127.0.0.1]:5006, connection: MockedClientConnection{localAddress=[127.0.0.1]:40029, super=ClientConnection{alive=true, connectionId=2, channel=null, remoteEndpoint=null, lastReadTime=never, lastWriteTime=never, closedTime=never, connected server version=null}}
10:36:49,206 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_6_dev.priority-generic-operation.thread-0 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Processing owner authentication with principal ClientPrincipal{uuid='4084fb8c-8a27-4daf-8786-eb2e1d18329d', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}
10:36:49,207 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [ClientReAuthOperation] hz._hzInstance_6_dev.cached.thread-1 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Client authenticated 4084fb8c-8a27-4daf-8786-eb2e1d18329d, owner 577059d8-1e86-4c6b-be25-c35b17519f00
10:36:49,216 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_6_dev.async.thread-4 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Processed owner authentication with principal ClientPrincipal{uuid='4084fb8c-8a27-4daf-8786-eb2e1d18329d', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}
10:36:49,216 INFO |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_6_dev.async.thread-4 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Received auth from MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40029, localEndpoint = [127.0.0.1]:5006, connectionId = 2}, successfully authenticated, principal: ClientPrincipal{uuid='4084fb8c-8a27-4daf-8786-eb2e1d18329d', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}, owner connection: true, client version: 3.11-SNAPSHOT
10:36:49,216 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientMembershipListener] hz.client_10.event-76 - hz.client_10 [dev] [3.11-SNAPSHOT]
Members [1] {
Member [127.0.0.1]:5006 - 577059d8-1e86-4c6b-be25-c35b17519f00
}
10:36:49,217 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] hz.client_10.cluster- - hz.client_10 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is CLIENT_CONNECTED
10:36:49,217 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_9.internal-2 - hz.client_9 [dev] [3.11-SNAPSHOT] Setting MockedClientConnection{localAddress=[127.0.0.1]:40029, super=ClientConnection{alive=true, connectionId=2, channel=null, remoteEndpoint=[127.0.0.1]:5006, lastReadTime=2018-05-29 10:36:49.216, lastWriteTime=2018-05-29 10:36:49.206, closedTime=never, connected server version=3.11-SNAPSHOT}} as owner with principal ClientPrincipal{uuid='4084fb8c-8a27-4daf-8786-eb2e1d18329d', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}
10:36:49,217 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_9.internal-2 - hz.client_9 [dev] [3.11-SNAPSHOT] Authenticated with server [127.0.0.1]:5006, server version:3.11-SNAPSHOT Local address: /127.0.0.1:40029
10:36:49,260 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientMembershipListener] hz.client_9.event-73 - hz.client_9 [dev] [3.11-SNAPSHOT]
Members [1] {
Member [127.0.0.1]:5006 - 577059d8-1e86-4c6b-be25-c35b17519f00
}
10:36:49,260 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] hz.client_9.cluster- - hz.client_9 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is CLIENT_CONNECTED
10:37:06,960 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HealthMonitor] hz._hzInstance_6_dev.HealthMonitor - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] processors=72, physical.memory.total=377.6G, physical.memory.free=113.5G, swap.space.total=4.0G, swap.space.free=3.7G, heap.memory.used=57.4M, heap.memory.free=945.1M, heap.memory.total=1002.5M, heap.memory.max=1.8G, heap.memory.used/total=5.73%, heap.memory.used/max=3.16%, minor.gc.count=8, minor.gc.time=1108ms, major.gc.count=2, major.gc.time=1002ms, load.process=0.00%, load.system=92.86%, load.systemAverage=637.14, thread.count=182, thread.peakCount=847, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=5, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=2, connection.active.count=0, client.connection.count=0, connection.count=0
10:37:26,965 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HealthMonitor] hz._hzInstance_6_dev.HealthMonitor - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] processors=72, physical.memory.total=377.6G, physical.memory.free=115.1G, swap.space.total=4.0G, swap.space.free=3.7G, heap.memory.used=62.3M, heap.memory.free=940.2M, heap.memory.total=1002.5M, heap.memory.max=1.8G, heap.memory.used/total=6.21%, heap.memory.used/max=3.42%, minor.gc.count=8, minor.gc.time=1108ms, major.gc.count=2, major.gc.time=1002ms, load.process=0.00%, load.system=76.92%, load.systemAverage=582.09, thread.count=185, thread.peakCount=847, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=5, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=2, connection.active.count=0, client.connection.count=0, connection.count=0
10:37:46,969 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HealthMonitor] hz._hzInstance_6_dev.HealthMonitor - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] processors=72, physical.memory.total=377.6G, physical.memory.free=115.3G, swap.space.total=4.0G, swap.space.free=3.7G, heap.memory.used=67.2M, heap.memory.free=935.3M, heap.memory.total=1002.5M, heap.memory.max=1.8G, heap.memory.used/total=6.70%, heap.memory.used/max=3.69%, minor.gc.count=8, minor.gc.time=1108ms, major.gc.count=2, major.gc.time=1002ms, load.process=7.14%, load.system=93.33%, load.systemAverage=478.82, thread.count=185, thread.peakCount=847, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=5, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=2, connection.active.count=0, client.connection.count=0, connection.count=0
10:38:06,974 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HealthMonitor] hz._hzInstance_6_dev.HealthMonitor - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] processors=72, physical.memory.total=377.6G, physical.memory.free=117.7G, swap.space.total=4.0G, swap.space.free=3.7G, heap.memory.used=72.0M, heap.memory.free=930.5M, heap.memory.total=1002.5M, heap.memory.max=1.8G, heap.memory.used/total=7.19%, heap.memory.used/max=3.96%, minor.gc.count=8, minor.gc.time=1108ms, major.gc.count=2, major.gc.time=1002ms, load.process=0.00%, load.system=92.86%, load.systemAverage=462.19, thread.count=173, thread.peakCount=847, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=5, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=2, connection.active.count=0, client.connection.count=0, connection.count=0
10:38:50,126 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] Thread-13 - hz.client_9 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is SHUTTING_DOWN
10:38:50,127 WARN |serverRestartWhenReliableTopicListenerRegistered| - [TestClientRegistry$MockedNodeConnection] pool-59-thread-1 - Server connection closed: null
10:38:50,127 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] Thread-13 - hz.client_9 [dev] [3.11-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5006, connection: MockedClientConnection{localAddress=[127.0.0.1]:40029, super=ClientConnection{alive=false, connectionId=2, channel=null, remoteEndpoint=[127.0.0.1]:5006, lastReadTime=2018-05-29 10:38:45.831, lastWriteTime=2018-05-29 10:38:45.829, closedTime=2018-05-29 10:38:50.127, connected server version=3.11-SNAPSHOT}}
10:38:50,127 INFO |serverRestartWhenReliableTopicListenerRegistered| - [MockConnectionManager] pool-59-thread-1 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:40029, connection: MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40029, localEndpoint = [127.0.0.1]:5006, connectionId = 2}
10:38:50,128 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientEndpointManager] hz._hzInstance_6_dev.event-85 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Destroying ClientEndpoint{connection=MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40029, localEndpoint = [127.0.0.1]:5006, connectionId = 2}, principal='ClientPrincipal{uuid='4084fb8c-8a27-4daf-8786-eb2e1d18329d', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}, ownerConnection=true, authenticated=true, clientVersion=3.11-SNAPSHOT, creationTime=1527590209206, latest statistics=null}
10:38:50,129 WARN |serverRestartWhenReliableTopicListenerRegistered| - [TwoWayBlockableExecutor] pool-59-thread-1 - Dropping incoming runnable since other end closed. Server Closed EOF. MockedClientConnection{localAddress=[127.0.0.1]:40029, super=ClientConnection{alive=false, connectionId=2, channel=null, remoteEndpoint=[127.0.0.1]:5006, lastReadTime=2018-05-29 10:38:45.831, lastWriteTime=2018-05-29 10:38:45.829, closedTime=2018-05-29 10:38:50.127, connected server version=3.11-SNAPSHOT}}
10:38:50,132 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] Thread-13 - hz.client_9 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is SHUTDOWN
10:38:50,132 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] Thread-13 - hz.client_10 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is SHUTTING_DOWN
10:38:50,133 WARN |serverRestartWhenReliableTopicListenerRegistered| - [TestClientRegistry$MockedNodeConnection] pool-57-thread-1 - Server connection closed: null
10:38:50,133 INFO |serverRestartWhenReliableTopicListenerRegistered| - [MockConnectionManager] pool-57-thread-1 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:40028, connection: MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40028, localEndpoint = [127.0.0.1]:5006, connectionId = 2}
10:38:50,133 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] Thread-13 - hz.client_10 [dev] [3.11-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5006, connection: MockedClientConnection{localAddress=[127.0.0.1]:40028, super=ClientConnection{alive=false, connectionId=2, channel=null, remoteEndpoint=[127.0.0.1]:5006, lastReadTime=2018-05-29 10:38:45.977, lastWriteTime=2018-05-29 10:38:45.975, closedTime=2018-05-29 10:38:50.132, connected server version=3.11-SNAPSHOT}}
10:38:50,133 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientEndpointManager] hz._hzInstance_6_dev.event-84 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Destroying ClientEndpoint{connection=MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40028, localEndpoint = [127.0.0.1]:5006, connectionId = 2}, principal='ClientPrincipal{uuid='a6ae5613-e218-4a14-a0fb-9492a0aee4f8', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}, ownerConnection=true, authenticated=true, clientVersion=3.11-SNAPSHOT, creationTime=1527590209173, latest statistics=null}
10:38:50,133 WARN |serverRestartWhenReliableTopicListenerRegistered| - [TwoWayBlockableExecutor] pool-57-thread-1 - Dropping incoming runnable since other end closed. Server Closed EOF. MockedClientConnection{localAddress=[127.0.0.1]:40028, super=ClientConnection{alive=false, connectionId=2, channel=null, remoteEndpoint=[127.0.0.1]:5006, lastReadTime=2018-05-29 10:38:45.977, lastWriteTime=2018-05-29 10:38:45.975, closedTime=2018-05-29 10:38:50.132, connected server version=3.11-SNAPSHOT}}
10:38:50,136 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] Thread-13 - hz.client_10 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is SHUTDOWN
10:38:50,136 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] Thread-13 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5005 is SHUTTING_DOWN
10:38:50,136 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Node] Thread-13 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Node is already shutting down... Waiting for shutdown process to complete...
10:38:50,136 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] Thread-13 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5005 is SHUTDOWN
10:38:50,136 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] Thread-13 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5006 is SHUTTING_DOWN
10:38:50,136 WARN |serverRestartWhenReliableTopicListenerRegistered| - [Node] Thread-13 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Terminating forcefully...
10:38:50,136 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Node] Thread-13 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Shutting down connection manager...
10:38:50,136 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Node] Thread-13 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Shutting down node engine...
10:38:50,143 INFO |serverRestartWhenReliableTopicListenerRegistered| - [NodeExtension] Thread-13 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Destroying node NodeExtension.
10:38:50,144 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Node] Thread-13 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Hazelcast Shutdown is completed in 8 ms.
10:38:50,144 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] Thread-13 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5006 is SHUTDOWN
Standard Error
THREAD DUMP FOR TEST FAILURE: "CountDownLatch failed to complete within 120 seconds, count left: 1" at "serverRestartWhenReliableTopicListenerRegistered"
```

Test com.hazelcast.client.topic.Issue9766Test.serverRestartWhenReliableTopicListenerRegistered failed

During a PR run for the maintenance-3.x branch, the following test failure occurred:
```
Regression
com.hazelcast.client.topic.Issue9766Test.serverRestartWhenReliableTopicListenerRegistered
Failing for the past 1 build (Since Unstable#15809 )
Took 2 min 4 sec.
Error Message
CountDownLatch failed to complete within 120 seconds, count left: 1
Stacktrace
java.lang.AssertionError: CountDownLatch failed to complete within 120 seconds, count left: 1
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at com.hazelcast.test.HazelcastTestSupport.assertOpenEventually(HazelcastTestSupport.java:1131)
at com.hazelcast.test.HazelcastTestSupport.assertOpenEventually(HazelcastTestSupport.java:1116)
at com.hazelcast.test.HazelcastTestSupport.assertOpenEventually(HazelcastTestSupport.java:1104)
at com.hazelcast.client.topic.Issue9766Test.serverRestartWhenReliableTopicListenerRegistered(Issue9766Test.java:96)
Started Running Test: serverRestartWhenReliableTopicListenerRegistered
10:36:45,188 INFO |serverRestartWhenReliableTopicListenerRegistered| - [XmlConfigLocator] serverRestartWhenReliableTopicListenerRegistered - Loading 'hazelcast-default.xml' from classpath.
10:36:45,257 INFO |serverRestartWhenReliableTopicListenerRegistered| - [system] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Hazelcast 3.11-SNAPSHOT (20180529 - fd1375e) starting at [127.0.0.1]:5005
10:36:45,257 INFO |serverRestartWhenReliableTopicListenerRegistered| - [system] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Copyright (c) 2008-2018, Hazelcast, Inc. All Rights Reserved.
10:36:45,257 INFO |serverRestartWhenReliableTopicListenerRegistered| - [system] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Configured Hazelcast Serialization version: 1
10:36:45,257 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Node] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] A non-empty group password is configured for the Hazelcast member. Starting with Hazelcast version 3.8.2, members with the same group name, but with different group passwords (that do not use authentication) form a cluster. The group password configuration will be removed completely in a future release.
10:36:45,272 INFO |serverRestartWhenReliableTopicListenerRegistered| - [BackpressureRegulator] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Backpressure is disabled
10:36:45,272 INFO |serverRestartWhenReliableTopicListenerRegistered| - [InboundResponseHandlerSupplier] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Running with 2 response threads
10:36:45,369 INFO |serverRestartWhenReliableTopicListenerRegistered| - [OperationExecutorImpl] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Starting 72 partition threads and 37 generic threads (1 dedicated for priority tasks)
10:36:45,614 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Diagnostics] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
10:36:45,614 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5005 is STARTING
10:36:45,615 INFO |serverRestartWhenReliableTopicListenerRegistered| - [system] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Cluster version set to 3.11
10:36:45,615 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClusterService] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT]
Members {size:1, ver:1} [
Member [127.0.0.1]:5005 - 489cfc08-a504-4213-8698-3676c8406eb4 this
]
10:36:45,625 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5005 is STARTED
10:36:45,629 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - hz.client_9 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is STARTING
10:36:45,632 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HealthMonitor] hz._hzInstance_5_dev.HealthMonitor - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] processors=72, physical.memory.total=377.6G, physical.memory.free=105.4G, swap.space.total=4.0G, swap.space.free=3.7G, heap.memory.used=657.5M, heap.memory.free=341.5M, heap.memory.total=999.0M, heap.memory.max=1.8G, heap.memory.used/total=65.81%, heap.memory.used/max=36.11%, minor.gc.count=7, minor.gc.time=1024ms, major.gc.count=2, major.gc.time=1002ms, load.process=6.25%, load.system=93.33%, load.systemAverage=655.02, thread.count=133, thread.peakCount=847, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=0, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
10:36:45,702 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientInvocationService] serverRestartWhenReliableTopicListenerRegistered - hz.client_9 [dev] [3.11-SNAPSHOT] Running with 2 response threads
10:36:45,717 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - hz.client_9 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is STARTED
10:36:45,757 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_9.cluster- - hz.client_9 [dev] [3.11-SNAPSHOT] Trying to connect to [127.0.0.1]:5005 as owner member
10:36:45,759 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HazelcastClient] hz.client_9.internal-2 - Created connection to endpoint: [127.0.0.1]:5005, connection: MockedClientConnection{localAddress=[127.0.0.1]:40026, super=ClientConnection{alive=true, connectionId=1, channel=null, remoteEndpoint=null, lastReadTime=never, lastWriteTime=never, closedTime=never, connected server version=null}}
10:36:45,775 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_5_dev.priority-generic-operation.thread-0 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Processing owner authentication with principal ClientPrincipal{uuid='4084fb8c-8a27-4daf-8786-eb2e1d18329d', ownerUuid='489cfc08-a504-4213-8698-3676c8406eb4'}
10:36:45,776 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [ClientReAuthOperation] hz._hzInstance_5_dev.cached.thread-3 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Client authenticated 4084fb8c-8a27-4daf-8786-eb2e1d18329d, owner 489cfc08-a504-4213-8698-3676c8406eb4
10:36:45,796 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_5_dev.async.thread-2 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Processed owner authentication with principal ClientPrincipal{uuid='4084fb8c-8a27-4daf-8786-eb2e1d18329d', ownerUuid='489cfc08-a504-4213-8698-3676c8406eb4'}
10:36:45,796 INFO |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_5_dev.async.thread-2 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Received auth from MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40026, localEndpoint = [127.0.0.1]:5005, connectionId = 1}, successfully authenticated, principal: ClientPrincipal{uuid='4084fb8c-8a27-4daf-8786-eb2e1d18329d', ownerUuid='489cfc08-a504-4213-8698-3676c8406eb4'}, owner connection: true, client version: 3.11-SNAPSHOT
10:36:45,798 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_9.internal-3 - hz.client_9 [dev] [3.11-SNAPSHOT] Setting MockedClientConnection{localAddress=[127.0.0.1]:40026, super=ClientConnection{alive=true, connectionId=1, channel=null, remoteEndpoint=[127.0.0.1]:5005, lastReadTime=2018-05-29 10:36:45.797, lastWriteTime=2018-05-29 10:36:45.774, closedTime=never, connected server version=3.11-SNAPSHOT}} as owner with principal ClientPrincipal{uuid='4084fb8c-8a27-4daf-8786-eb2e1d18329d', ownerUuid='489cfc08-a504-4213-8698-3676c8406eb4'}
10:36:45,798 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_9.internal-3 - hz.client_9 [dev] [3.11-SNAPSHOT] Authenticated with server [127.0.0.1]:5005, server version:3.11-SNAPSHOT Local address: /127.0.0.1:40026
10:36:45,805 INFO |serverRestartWhenReliableTopicListenerRegistered| - [PartitionStateManager] hz._hzInstance_5_dev.client.thread-1 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Initializing cluster partition table arrangement...
10:36:45,822 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientMembershipListener] hz.client_9.event-73 - hz.client_9 [dev] [3.11-SNAPSHOT]
Members [1] {
Member [127.0.0.1]:5005 - 489cfc08-a504-4213-8698-3676c8406eb4
}
10:36:45,822 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] hz.client_9.cluster- - hz.client_9 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is CLIENT_CONNECTED
10:36:45,826 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Diagnostics] serverRestartWhenReliableTopicListenerRegistered - hz.client_9 [dev] [3.11-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
10:36:45,829 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - hz.client_10 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is STARTING
10:36:45,887 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientInvocationService] serverRestartWhenReliableTopicListenerRegistered - hz.client_10 [dev] [3.11-SNAPSHOT] Running with 2 response threads
10:36:45,927 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - hz.client_10 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is STARTED
10:36:45,930 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_10.cluster- - hz.client_10 [dev] [3.11-SNAPSHOT] Trying to connect to [127.0.0.1]:5005 as owner member
10:36:45,934 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HazelcastClient] hz.client_10.internal-3 - Created connection to endpoint: [127.0.0.1]:5005, connection: MockedClientConnection{localAddress=[127.0.0.1]:40027, super=ClientConnection{alive=true, connectionId=1, channel=null, remoteEndpoint=null, lastReadTime=never, lastWriteTime=never, closedTime=never, connected server version=null}}
10:36:45,935 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_5_dev.generic-operation.thread-1 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Processing owner authentication with principal ClientPrincipal{uuid='a6ae5613-e218-4a14-a0fb-9492a0aee4f8', ownerUuid='489cfc08-a504-4213-8698-3676c8406eb4'}
10:36:45,935 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [ClientReAuthOperation] hz._hzInstance_5_dev.cached.thread-3 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Client authenticated a6ae5613-e218-4a14-a0fb-9492a0aee4f8, owner 489cfc08-a504-4213-8698-3676c8406eb4
10:36:45,936 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_5_dev.async.thread-4 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Processed owner authentication with principal ClientPrincipal{uuid='a6ae5613-e218-4a14-a0fb-9492a0aee4f8', ownerUuid='489cfc08-a504-4213-8698-3676c8406eb4'}
10:36:45,936 INFO |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_5_dev.async.thread-4 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Received auth from MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40027, localEndpoint = [127.0.0.1]:5005, connectionId = 1}, successfully authenticated, principal: ClientPrincipal{uuid='a6ae5613-e218-4a14-a0fb-9492a0aee4f8', ownerUuid='489cfc08-a504-4213-8698-3676c8406eb4'}, owner connection: true, client version: 3.11-SNAPSHOT
10:36:45,937 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_10.internal-1 - hz.client_10 [dev] [3.11-SNAPSHOT] Setting MockedClientConnection{localAddress=[127.0.0.1]:40027, super=ClientConnection{alive=true, connectionId=1, channel=null, remoteEndpoint=[127.0.0.1]:5005, lastReadTime=2018-05-29 10:36:45.936, lastWriteTime=2018-05-29 10:36:45.934, closedTime=never, connected server version=3.11-SNAPSHOT}} as owner with principal ClientPrincipal{uuid='a6ae5613-e218-4a14-a0fb-9492a0aee4f8', ownerUuid='489cfc08-a504-4213-8698-3676c8406eb4'}
10:36:45,937 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_10.internal-1 - hz.client_10 [dev] [3.11-SNAPSHOT] Authenticated with server [127.0.0.1]:5005, server version:3.11-SNAPSHOT Local address: /127.0.0.1:40027
10:36:45,944 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientMembershipListener] hz.client_10.event-76 - hz.client_10 [dev] [3.11-SNAPSHOT]
Members [1] {
Member [127.0.0.1]:5005 - 489cfc08-a504-4213-8698-3676c8406eb4
}
10:36:45,960 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] hz.client_10.cluster- - hz.client_10 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is CLIENT_CONNECTED
10:36:45,970 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Diagnostics] serverRestartWhenReliableTopicListenerRegistered - hz.client_10 [dev] [3.11-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
10:36:46,056 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5005 is SHUTTING_DOWN
10:36:46,056 WARN |serverRestartWhenReliableTopicListenerRegistered| - [Node] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Terminating forcefully...
10:36:46,056 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Node] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Shutting down connection manager...
10:36:46,056 WARN |serverRestartWhenReliableTopicListenerRegistered| - [TestClientRegistry$MockedNodeConnection] serverRestartWhenReliableTopicListenerRegistered - Server connection closed: null
10:36:46,056 INFO |serverRestartWhenReliableTopicListenerRegistered| - [MockConnectionManager] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:40027, connection: MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40027, localEndpoint = [127.0.0.1]:5005, connectionId = 1}
10:36:46,056 WARN |serverRestartWhenReliableTopicListenerRegistered| - [TestClientRegistry$MockedNodeConnection] serverRestartWhenReliableTopicListenerRegistered - Server connection closed: null
10:36:46,056 INFO |serverRestartWhenReliableTopicListenerRegistered| - [MockConnectionManager] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:40026, connection: MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40026, localEndpoint = [127.0.0.1]:5005, connectionId = 1}
10:36:46,057 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Node] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Shutting down node engine...
10:36:46,080 WARN |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnection] pool-54-thread-1 - hz.client_10 [dev] [3.11-SNAPSHOT] MockedClientConnection{localAddress=[127.0.0.1]:40027, super=ClientConnection{alive=false, connectionId=1, channel=null, remoteEndpoint=[127.0.0.1]:5005, lastReadTime=2018-05-29 10:36:46.031, lastWriteTime=2018-05-29 10:36:46.010, closedTime=2018-05-29 10:36:46.057, connected server version=3.11-SNAPSHOT}} closed. Reason: com.hazelcast.spi.exception.TargetDisconnectedException[Mocked Remote socket closed]
com.hazelcast.spi.exception.TargetDisconnectedException: Mocked Remote socket closed
at com.hazelcast.client.test.TestClientRegistry$MockedClientConnection$4.run(TestClientRegistry.java:348) [test-classes/:?]
at com.hazelcast.client.test.TwoWayBlockableExecutor$BlockableRunnable.run(TwoWayBlockableExecutor.java:97) [test-classes/:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
10:36:46,150 INFO |serverRestartWhenReliableTopicListenerRegistered| - [NodeExtension] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Destroying node NodeExtension.
10:36:46,159 WARN |serverRestartWhenReliableTopicListenerRegistered| - [TwoWayBlockableExecutor] hz.client_9.internal-3 - Dropping outgoing runnable since other end closed. Runnable message ClientMessage{connection=null, length=22, correlationId=12, operation=null, messageType=190a, partitionId=98, isComplete=false, isRetryable=false, isEvent=false, writeOffset=0}, MockedClientConnection{localAddress=[127.0.0.1]:40026, super=ClientConnection{alive=false, connectionId=1, channel=null, remoteEndpoint=[127.0.0.1]:5005, lastReadTime=2018-05-29 10:36:46.053, lastWriteTime=2018-05-29 10:36:46.062, closedTime=2018-05-29 10:36:46.062, connected server version=3.11-SNAPSHOT}}
10:36:46,161 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Node] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Hazelcast Shutdown is completed in 105 ms.
10:36:46,162 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5005 is SHUTDOWN
10:36:46,163 INFO |serverRestartWhenReliableTopicListenerRegistered| - [XmlConfigLocator] serverRestartWhenReliableTopicListenerRegistered - Loading 'hazelcast-default.xml' from classpath.
10:36:46,158 WARN |serverRestartWhenReliableTopicListenerRegistered| - [TwoWayBlockableExecutor] pool-54-thread-1 - Dropping outgoing runnable since other end closed. Client Closed EOF. MockedClientConnection{localAddress=[127.0.0.1]:40027, super=ClientConnection{alive=false, connectionId=1, channel=null, remoteEndpoint=[127.0.0.1]:5005, lastReadTime=2018-05-29 10:36:46.031, lastWriteTime=2018-05-29 10:36:46.010, closedTime=2018-05-29 10:36:46.057, connected server version=3.11-SNAPSHOT}}
10:36:46,163 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] pool-54-thread-1 - hz.client_10 [dev] [3.11-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5005, connection: MockedClientConnection{localAddress=[127.0.0.1]:40027, super=ClientConnection{alive=false, connectionId=1, channel=null, remoteEndpoint=[127.0.0.1]:5005, lastReadTime=2018-05-29 10:36:46.031, lastWriteTime=2018-05-29 10:36:46.010, closedTime=2018-05-29 10:36:46.057, connected server version=3.11-SNAPSHOT}}
10:36:46,163 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] hz.client_10.cluster- - hz.client_10 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is CLIENT_DISCONNECTED
10:36:46,164 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_10.cluster- - hz.client_10 [dev] [3.11-SNAPSHOT] Trying to connect to [127.0.0.1]:5005 as owner member
10:36:46,167 WARN |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_10.cluster- - hz.client_10 [dev] [3.11-SNAPSHOT] Exception during initial connection to [127.0.0.1]:5005, exception com.hazelcast.core.HazelcastException: java.io.IOException: Can not connected to [127.0.0.1]:5005: instance does not exist
10:36:46,167 WARN |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_10.cluster- - hz.client_10 [dev] [3.11-SNAPSHOT] Unable to get alive cluster connection, try in 2997 ms later, attempt 1 of 2147483647.
10:36:46,063 WARN |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnection] pool-52-thread-1 - hz.client_9 [dev] [3.11-SNAPSHOT] MockedClientConnection{localAddress=[127.0.0.1]:40026, super=ClientConnection{alive=false, connectionId=1, channel=null, remoteEndpoint=[127.0.0.1]:5005, lastReadTime=2018-05-29 10:36:46.053, lastWriteTime=2018-05-29 10:36:46.062, closedTime=2018-05-29 10:36:46.062, connected server version=3.11-SNAPSHOT}} closed. Reason: com.hazelcast.spi.exception.TargetDisconnectedException[Mocked Remote socket closed]
com.hazelcast.spi.exception.TargetDisconnectedException: Mocked Remote socket closed
at com.hazelcast.client.test.TestClientRegistry$MockedClientConnection$4.run(TestClientRegistry.java:348) [test-classes/:?]
at com.hazelcast.client.test.TwoWayBlockableExecutor$BlockableRunnable.run(TwoWayBlockableExecutor.java:97) [test-classes/:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
10:36:46,195 WARN |serverRestartWhenReliableTopicListenerRegistered| - [TwoWayBlockableExecutor] pool-52-thread-1 - Dropping outgoing runnable since other end closed. Client Closed EOF. MockedClientConnection{localAddress=[127.0.0.1]:40026, super=ClientConnection{alive=false, connectionId=1, channel=null, remoteEndpoint=[127.0.0.1]:5005, lastReadTime=2018-05-29 10:36:46.053, lastWriteTime=2018-05-29 10:36:46.062, closedTime=2018-05-29 10:36:46.062, connected server version=3.11-SNAPSHOT}}
10:36:46,195 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] pool-52-thread-1 - hz.client_9 [dev] [3.11-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5005, connection: MockedClientConnection{localAddress=[127.0.0.1]:40026, super=ClientConnection{alive=false, connectionId=1, channel=null, remoteEndpoint=[127.0.0.1]:5005, lastReadTime=2018-05-29 10:36:46.053, lastWriteTime=2018-05-29 10:36:46.062, closedTime=2018-05-29 10:36:46.062, connected server version=3.11-SNAPSHOT}}
10:36:46,196 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] hz.client_9.cluster- - hz.client_9 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is CLIENT_DISCONNECTED
10:36:46,196 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_9.cluster- - hz.client_9 [dev] [3.11-SNAPSHOT] Trying to connect to [127.0.0.1]:5005 as owner member
10:36:46,236 WARN |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_9.cluster- - hz.client_9 [dev] [3.11-SNAPSHOT] Exception during initial connection to [127.0.0.1]:5005, exception com.hazelcast.core.HazelcastException: java.io.IOException: Can not connected to [127.0.0.1]:5005: instance does not exist
10:36:46,236 WARN |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_9.cluster- - hz.client_9 [dev] [3.11-SNAPSHOT] Unable to get alive cluster connection, try in 2960 ms later, attempt 1 of 2147483647.
10:36:46,259 WARN |serverRestartWhenReliableTopicListenerRegistered| - [ClientReliableTopicProxy] hz.client_9.user-1 - hz.client_9 [dev] [3.11-SNAPSHOT] Terminating MessageListener com.hazelcast.client.topic.Issue9766Test$1@61a9a9ff on topic: foo. Reason: Unhandled exception, message: Mocked Remote socket closed
com.hazelcast.spi.exception.TargetDisconnectedException: Mocked Remote socket closed
at com.hazelcast.client.spi.impl.AbstractClientInvocationService$CleanResourcesTask.notifyException(AbstractClientInvocationService.java:230) ~[classes/:?]
at com.hazelcast.client.spi.impl.AbstractClientInvocationService$CleanResourcesTask.run(AbstractClientInvocationService.java:225) ~[classes/:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_171]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[?:1.8.0_171]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_171]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) ~[?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:64) [hazelcast-3.11-SNAPSHOT.jar:3.11-SNAPSHOT]
at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:80) [hazelcast-3.11-SNAPSHOT.jar:3.11-SNAPSHOT]
Caused by: com.hazelcast.spi.exception.TargetDisconnectedException: Mocked Remote socket closed
at com.hazelcast.client.test.TestClientRegistry$MockedClientConnection$4.run(TestClientRegistry.java:348) ~[test-classes/:?]
at com.hazelcast.client.test.TwoWayBlockableExecutor$BlockableRunnable.run(TwoWayBlockableExecutor.java:97) ~[test-classes/:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_171]
10:36:46,669 INFO |serverRestartWhenReliableTopicListenerRegistered| - [system] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Hazelcast 3.11-SNAPSHOT (20180529 - fd1375e) starting at [127.0.0.1]:5006
10:36:46,669 INFO |serverRestartWhenReliableTopicListenerRegistered| - [system] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Copyright (c) 2008-2018, Hazelcast, Inc. All Rights Reserved.
10:36:46,669 INFO |serverRestartWhenReliableTopicListenerRegistered| - [system] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Configured Hazelcast Serialization version: 1
10:36:46,669 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Node] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] A non-empty group password is configured for the Hazelcast member. Starting with Hazelcast version 3.8.2, members with the same group name, but with different group passwords (that do not use authentication) form a cluster. The group password configuration will be removed completely in a future release.
10:36:46,696 INFO |serverRestartWhenReliableTopicListenerRegistered| - [BackpressureRegulator] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Backpressure is disabled
10:36:46,696 INFO |serverRestartWhenReliableTopicListenerRegistered| - [InboundResponseHandlerSupplier] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Running with 2 response threads
10:36:46,749 INFO |serverRestartWhenReliableTopicListenerRegistered| - [OperationExecutorImpl] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Starting 72 partition threads and 37 generic threads (1 dedicated for priority tasks)
10:36:46,940 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Diagnostics] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
10:36:46,941 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5006 is STARTING
10:36:46,941 INFO |serverRestartWhenReliableTopicListenerRegistered| - [system] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Cluster version set to 3.11
10:36:46,942 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClusterService] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT]
Members {size:1, ver:1} [
Member [127.0.0.1]:5006 - 577059d8-1e86-4c6b-be25-c35b17519f00 this
]
10:36:46,942 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] serverRestartWhenReliableTopicListenerRegistered - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5006 is STARTED
10:36:46,953 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HealthMonitor] hz._hzInstance_6_dev.HealthMonitor - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] processors=72, physical.memory.total=377.6G, physical.memory.free=105.4G, swap.space.total=4.0G, swap.space.free=3.7G, heap.memory.used=46.5M, heap.memory.free=956.0M, heap.memory.total=1002.5M, heap.memory.max=1.8G, heap.memory.used/total=4.63%, heap.memory.used/max=2.55%, minor.gc.count=8, minor.gc.time=1108ms, major.gc.count=2, major.gc.time=1002ms, load.process=0.00%, load.system=93.75%, load.systemAverage=655.02, thread.count=165, thread.peakCount=847, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=0, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
10:36:49,165 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_10.cluster- - hz.client_10 [dev] [3.11-SNAPSHOT] Trying to connect to [127.0.0.1]:5006 as owner member
10:36:49,172 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HazelcastClient] hz.client_10.internal-3 - Created connection to endpoint: [127.0.0.1]:5006, connection: MockedClientConnection{localAddress=[127.0.0.1]:40028, super=ClientConnection{alive=true, connectionId=2, channel=null, remoteEndpoint=null, lastReadTime=never, lastWriteTime=never, closedTime=never, connected server version=null}}
10:36:49,174 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_6_dev.generic-operation.thread-0 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Processing owner authentication with principal ClientPrincipal{uuid='a6ae5613-e218-4a14-a0fb-9492a0aee4f8', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}
10:36:49,182 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [ClientReAuthOperation] hz._hzInstance_6_dev.cached.thread-1 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Client authenticated a6ae5613-e218-4a14-a0fb-9492a0aee4f8, owner 577059d8-1e86-4c6b-be25-c35b17519f00
10:36:49,183 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_6_dev.async.thread-2 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Processed owner authentication with principal ClientPrincipal{uuid='a6ae5613-e218-4a14-a0fb-9492a0aee4f8', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}
10:36:49,183 INFO |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_6_dev.async.thread-2 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Received auth from MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40028, localEndpoint = [127.0.0.1]:5006, connectionId = 2}, successfully authenticated, principal: ClientPrincipal{uuid='a6ae5613-e218-4a14-a0fb-9492a0aee4f8', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}, owner connection: true, client version: 3.11-SNAPSHOT
10:36:49,186 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_10.internal-1 - hz.client_10 [dev] [3.11-SNAPSHOT] Setting MockedClientConnection{localAddress=[127.0.0.1]:40028, super=ClientConnection{alive=true, connectionId=2, channel=null, remoteEndpoint=[127.0.0.1]:5006, lastReadTime=2018-05-29 10:36:49.184, lastWriteTime=2018-05-29 10:36:49.173, closedTime=never, connected server version=3.11-SNAPSHOT}} as owner with principal ClientPrincipal{uuid='a6ae5613-e218-4a14-a0fb-9492a0aee4f8', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}
10:36:49,186 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_10.internal-1 - hz.client_10 [dev] [3.11-SNAPSHOT] Authenticated with server [127.0.0.1]:5006, server version:3.11-SNAPSHOT Local address: /127.0.0.1:40028
10:36:49,198 INFO |serverRestartWhenReliableTopicListenerRegistered| - [PartitionStateManager] hz._hzInstance_6_dev.client.thread-1 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Initializing cluster partition table arrangement...
10:36:49,200 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_9.cluster- - hz.client_9 [dev] [3.11-SNAPSHOT] Trying to connect to [127.0.0.1]:5006 as owner member
10:36:49,200 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HazelcastClient] hz.client_9.internal-1 - Created connection to endpoint: [127.0.0.1]:5006, connection: MockedClientConnection{localAddress=[127.0.0.1]:40029, super=ClientConnection{alive=true, connectionId=2, channel=null, remoteEndpoint=null, lastReadTime=never, lastWriteTime=never, closedTime=never, connected server version=null}}
10:36:49,206 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_6_dev.priority-generic-operation.thread-0 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Processing owner authentication with principal ClientPrincipal{uuid='4084fb8c-8a27-4daf-8786-eb2e1d18329d', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}
10:36:49,207 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [ClientReAuthOperation] hz._hzInstance_6_dev.cached.thread-1 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Client authenticated 4084fb8c-8a27-4daf-8786-eb2e1d18329d, owner 577059d8-1e86-4c6b-be25-c35b17519f00
10:36:49,216 DEBUG |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_6_dev.async.thread-4 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Processed owner authentication with principal ClientPrincipal{uuid='4084fb8c-8a27-4daf-8786-eb2e1d18329d', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}
10:36:49,216 INFO |serverRestartWhenReliableTopicListenerRegistered| - [AuthenticationMessageTask] hz._hzInstance_6_dev.async.thread-4 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Received auth from MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40029, localEndpoint = [127.0.0.1]:5006, connectionId = 2}, successfully authenticated, principal: ClientPrincipal{uuid='4084fb8c-8a27-4daf-8786-eb2e1d18329d', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}, owner connection: true, client version: 3.11-SNAPSHOT
10:36:49,216 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientMembershipListener] hz.client_10.event-76 - hz.client_10 [dev] [3.11-SNAPSHOT]
Members [1] {
Member [127.0.0.1]:5006 - 577059d8-1e86-4c6b-be25-c35b17519f00
}
10:36:49,217 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] hz.client_10.cluster- - hz.client_10 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is CLIENT_CONNECTED
10:36:49,217 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_9.internal-2 - hz.client_9 [dev] [3.11-SNAPSHOT] Setting MockedClientConnection{localAddress=[127.0.0.1]:40029, super=ClientConnection{alive=true, connectionId=2, channel=null, remoteEndpoint=[127.0.0.1]:5006, lastReadTime=2018-05-29 10:36:49.216, lastWriteTime=2018-05-29 10:36:49.206, closedTime=never, connected server version=3.11-SNAPSHOT}} as owner with principal ClientPrincipal{uuid='4084fb8c-8a27-4daf-8786-eb2e1d18329d', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}
10:36:49,217 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] hz.client_9.internal-2 - hz.client_9 [dev] [3.11-SNAPSHOT] Authenticated with server [127.0.0.1]:5006, server version:3.11-SNAPSHOT Local address: /127.0.0.1:40029
10:36:49,260 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientMembershipListener] hz.client_9.event-73 - hz.client_9 [dev] [3.11-SNAPSHOT]
Members [1] {
Member [127.0.0.1]:5006 - 577059d8-1e86-4c6b-be25-c35b17519f00
}
10:36:49,260 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] hz.client_9.cluster- - hz.client_9 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is CLIENT_CONNECTED
10:37:06,960 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HealthMonitor] hz._hzInstance_6_dev.HealthMonitor - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] processors=72, physical.memory.total=377.6G, physical.memory.free=113.5G, swap.space.total=4.0G, swap.space.free=3.7G, heap.memory.used=57.4M, heap.memory.free=945.1M, heap.memory.total=1002.5M, heap.memory.max=1.8G, heap.memory.used/total=5.73%, heap.memory.used/max=3.16%, minor.gc.count=8, minor.gc.time=1108ms, major.gc.count=2, major.gc.time=1002ms, load.process=0.00%, load.system=92.86%, load.systemAverage=637.14, thread.count=182, thread.peakCount=847, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=5, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=2, connection.active.count=0, client.connection.count=0, connection.count=0
10:37:26,965 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HealthMonitor] hz._hzInstance_6_dev.HealthMonitor - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] processors=72, physical.memory.total=377.6G, physical.memory.free=115.1G, swap.space.total=4.0G, swap.space.free=3.7G, heap.memory.used=62.3M, heap.memory.free=940.2M, heap.memory.total=1002.5M, heap.memory.max=1.8G, heap.memory.used/total=6.21%, heap.memory.used/max=3.42%, minor.gc.count=8, minor.gc.time=1108ms, major.gc.count=2, major.gc.time=1002ms, load.process=0.00%, load.system=76.92%, load.systemAverage=582.09, thread.count=185, thread.peakCount=847, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=5, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=2, connection.active.count=0, client.connection.count=0, connection.count=0
10:37:46,969 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HealthMonitor] hz._hzInstance_6_dev.HealthMonitor - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] processors=72, physical.memory.total=377.6G, physical.memory.free=115.3G, swap.space.total=4.0G, swap.space.free=3.7G, heap.memory.used=67.2M, heap.memory.free=935.3M, heap.memory.total=1002.5M, heap.memory.max=1.8G, heap.memory.used/total=6.70%, heap.memory.used/max=3.69%, minor.gc.count=8, minor.gc.time=1108ms, major.gc.count=2, major.gc.time=1002ms, load.process=7.14%, load.system=93.33%, load.systemAverage=478.82, thread.count=185, thread.peakCount=847, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=5, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=2, connection.active.count=0, client.connection.count=0, connection.count=0
10:38:06,974 INFO |serverRestartWhenReliableTopicListenerRegistered| - [HealthMonitor] hz._hzInstance_6_dev.HealthMonitor - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] processors=72, physical.memory.total=377.6G, physical.memory.free=117.7G, swap.space.total=4.0G, swap.space.free=3.7G, heap.memory.used=72.0M, heap.memory.free=930.5M, heap.memory.total=1002.5M, heap.memory.max=1.8G, heap.memory.used/total=7.19%, heap.memory.used/max=3.96%, minor.gc.count=8, minor.gc.time=1108ms, major.gc.count=2, major.gc.time=1002ms, load.process=0.00%, load.system=92.86%, load.systemAverage=462.19, thread.count=173, thread.peakCount=847, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=5, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=2, connection.active.count=0, client.connection.count=0, connection.count=0
10:38:50,126 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] Thread-13 - hz.client_9 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is SHUTTING_DOWN
10:38:50,127 WARN |serverRestartWhenReliableTopicListenerRegistered| - [TestClientRegistry$MockedNodeConnection] pool-59-thread-1 - Server connection closed: null
10:38:50,127 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] Thread-13 - hz.client_9 [dev] [3.11-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5006, connection: MockedClientConnection{localAddress=[127.0.0.1]:40029, super=ClientConnection{alive=false, connectionId=2, channel=null, remoteEndpoint=[127.0.0.1]:5006, lastReadTime=2018-05-29 10:38:45.831, lastWriteTime=2018-05-29 10:38:45.829, closedTime=2018-05-29 10:38:50.127, connected server version=3.11-SNAPSHOT}}
10:38:50,127 INFO |serverRestartWhenReliableTopicListenerRegistered| - [MockConnectionManager] pool-59-thread-1 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:40029, connection: MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40029, localEndpoint = [127.0.0.1]:5006, connectionId = 2}
10:38:50,128 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientEndpointManager] hz._hzInstance_6_dev.event-85 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Destroying ClientEndpoint{connection=MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40029, localEndpoint = [127.0.0.1]:5006, connectionId = 2}, principal='ClientPrincipal{uuid='4084fb8c-8a27-4daf-8786-eb2e1d18329d', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}, ownerConnection=true, authenticated=true, clientVersion=3.11-SNAPSHOT, creationTime=1527590209206, latest statistics=null}
10:38:50,129 WARN |serverRestartWhenReliableTopicListenerRegistered| - [TwoWayBlockableExecutor] pool-59-thread-1 - Dropping incoming runnable since other end closed. Server Closed EOF. MockedClientConnection{localAddress=[127.0.0.1]:40029, super=ClientConnection{alive=false, connectionId=2, channel=null, remoteEndpoint=[127.0.0.1]:5006, lastReadTime=2018-05-29 10:38:45.831, lastWriteTime=2018-05-29 10:38:45.829, closedTime=2018-05-29 10:38:50.127, connected server version=3.11-SNAPSHOT}}
10:38:50,132 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] Thread-13 - hz.client_9 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is SHUTDOWN
10:38:50,132 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] Thread-13 - hz.client_10 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is SHUTTING_DOWN
10:38:50,133 WARN |serverRestartWhenReliableTopicListenerRegistered| - [TestClientRegistry$MockedNodeConnection] pool-57-thread-1 - Server connection closed: null
10:38:50,133 INFO |serverRestartWhenReliableTopicListenerRegistered| - [MockConnectionManager] pool-57-thread-1 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:40028, connection: MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40028, localEndpoint = [127.0.0.1]:5006, connectionId = 2}
10:38:50,133 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientConnectionManager] Thread-13 - hz.client_10 [dev] [3.11-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5006, connection: MockedClientConnection{localAddress=[127.0.0.1]:40028, super=ClientConnection{alive=false, connectionId=2, channel=null, remoteEndpoint=[127.0.0.1]:5006, lastReadTime=2018-05-29 10:38:45.977, lastWriteTime=2018-05-29 10:38:45.975, closedTime=2018-05-29 10:38:50.132, connected server version=3.11-SNAPSHOT}}
10:38:50,133 INFO |serverRestartWhenReliableTopicListenerRegistered| - [ClientEndpointManager] hz._hzInstance_6_dev.event-84 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Destroying ClientEndpoint{connection=MockedNodeConnection{ remoteEndpoint = [127.0.0.1]:40028, localEndpoint = [127.0.0.1]:5006, connectionId = 2}, principal='ClientPrincipal{uuid='a6ae5613-e218-4a14-a0fb-9492a0aee4f8', ownerUuid='577059d8-1e86-4c6b-be25-c35b17519f00'}, ownerConnection=true, authenticated=true, clientVersion=3.11-SNAPSHOT, creationTime=1527590209173, latest statistics=null}
10:38:50,133 WARN |serverRestartWhenReliableTopicListenerRegistered| - [TwoWayBlockableExecutor] pool-57-thread-1 - Dropping incoming runnable since other end closed. Server Closed EOF. MockedClientConnection{localAddress=[127.0.0.1]:40028, super=ClientConnection{alive=false, connectionId=2, channel=null, remoteEndpoint=[127.0.0.1]:5006, lastReadTime=2018-05-29 10:38:45.977, lastWriteTime=2018-05-29 10:38:45.975, closedTime=2018-05-29 10:38:50.132, connected server version=3.11-SNAPSHOT}}
10:38:50,136 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] Thread-13 - hz.client_10 [dev] [3.11-SNAPSHOT] HazelcastClient 3.11-SNAPSHOT (20180529 - fd1375e) is SHUTDOWN
10:38:50,136 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] Thread-13 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5005 is SHUTTING_DOWN
10:38:50,136 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Node] Thread-13 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] Node is already shutting down... Waiting for shutdown process to complete...
10:38:50,136 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] Thread-13 - [127.0.0.1]:5005 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5005 is SHUTDOWN
10:38:50,136 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] Thread-13 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5006 is SHUTTING_DOWN
10:38:50,136 WARN |serverRestartWhenReliableTopicListenerRegistered| - [Node] Thread-13 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Terminating forcefully...
10:38:50,136 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Node] Thread-13 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Shutting down connection manager...
10:38:50,136 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Node] Thread-13 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Shutting down node engine...
10:38:50,143 INFO |serverRestartWhenReliableTopicListenerRegistered| - [NodeExtension] Thread-13 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Destroying node NodeExtension.
10:38:50,144 INFO |serverRestartWhenReliableTopicListenerRegistered| - [Node] Thread-13 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] Hazelcast Shutdown is completed in 8 ms.
10:38:50,144 INFO |serverRestartWhenReliableTopicListenerRegistered| - [LifecycleService] Thread-13 - [127.0.0.1]:5006 [dev] [3.11-SNAPSHOT] [127.0.0.1]:5006 is SHUTDOWN
Standard Error
THREAD DUMP FOR TEST FAILURE: "CountDownLatch failed to complete within 120 seconds, count left: 1" at "serverRestartWhenReliableTopicListenerRegistered"
```
is configured for the hazelcast member starting with hazelcast version members with the same group name but with different group passwords that do not use authentication form a cluster the group password configuration will be removed completely in a future release info serverrestartwhenreliabletopiclistenerregistered serverrestartwhenreliabletopiclistenerregistered backpressure is disabled info serverrestartwhenreliabletopiclistenerregistered serverrestartwhenreliabletopiclistenerregistered running with response threads info serverrestartwhenreliabletopiclistenerregistered serverrestartwhenreliabletopiclistenerregistered starting partition threads and generic threads dedicated for priority tasks info serverrestartwhenreliabletopiclistenerregistered serverrestartwhenreliabletopiclistenerregistered diagnostics disabled to enable add dhazelcast diagnostics enabled true to the jvm arguments info serverrestartwhenreliabletopiclistenerregistered serverrestartwhenreliabletopiclistenerregistered is starting info serverrestartwhenreliabletopiclistenerregistered serverrestartwhenreliabletopiclistenerregistered cluster version set to info serverrestartwhenreliabletopiclistenerregistered serverrestartwhenreliabletopiclistenerregistered members size ver member this info serverrestartwhenreliabletopiclistenerregistered serverrestartwhenreliabletopiclistenerregistered is started info serverrestartwhenreliabletopiclistenerregistered hz hzinstance dev healthmonitor processors physical memory total physical memory free swap space total swap space free heap memory used heap memory free heap memory total heap memory max heap memory used total heap memory used max minor gc count minor gc time major gc count major gc time load process load system load systemaverage thread count thread peakcount cluster timediff event q size executor q async size executor q client size executor q query size executor q scheduled size executor q io size executor q system size executor q operations size 
executor q priorityoperation size operations completed count executor q mapload size executor q maploadallkeys size executor q cluster size executor q response size operations running count operations pending invocations percentage operations pending invocations count proxy count clientendpoint count connection active count client connection count connection count info serverrestartwhenreliabletopiclistenerregistered hz client cluster hz client trying to connect to as owner member info serverrestartwhenreliabletopiclistenerregistered hz client internal created connection to endpoint connection mockedclientconnection localaddress super clientconnection alive true connectionid channel null remoteendpoint null lastreadtime never lastwritetime never closedtime never connected server version null debug serverrestartwhenreliabletopiclistenerregistered hz hzinstance dev generic operation thread processing owner authentication with principal clientprincipal uuid owneruuid debug serverrestartwhenreliabletopiclistenerregistered hz hzinstance dev cached thread client authenticated owner debug serverrestartwhenreliabletopiclistenerregistered hz hzinstance dev async thread processed owner authentication with principal clientprincipal uuid owneruuid info serverrestartwhenreliabletopiclistenerregistered hz hzinstance dev async thread received auth from mockednodeconnection remoteendpoint localendpoint connectionid successfully authenticated principal clientprincipal uuid owneruuid owner connection true client version snapshot info serverrestartwhenreliabletopiclistenerregistered hz client internal hz client setting mockedclientconnection localaddress super clientconnection alive true connectionid channel null remoteendpoint lastreadtime lastwritetime closedtime never connected server version snapshot as owner with principal clientprincipal uuid owneruuid info serverrestartwhenreliabletopiclistenerregistered hz client internal hz client authenticated with server server version 
snapshot local address info serverrestartwhenreliabletopiclistenerregistered hz hzinstance dev client thread initializing cluster partition table arrangement info serverrestartwhenreliabletopiclistenerregistered hz client cluster hz client trying to connect to as owner member info serverrestartwhenreliabletopiclistenerregistered hz client internal created connection to endpoint connection mockedclientconnection localaddress super clientconnection alive true connectionid channel null remoteendpoint null lastreadtime never lastwritetime never closedtime never connected server version null debug serverrestartwhenreliabletopiclistenerregistered hz hzinstance dev priority generic operation thread processing owner authentication with principal clientprincipal uuid owneruuid debug serverrestartwhenreliabletopiclistenerregistered hz hzinstance dev cached thread client authenticated owner debug serverrestartwhenreliabletopiclistenerregistered hz hzinstance dev async thread processed owner authentication with principal clientprincipal uuid owneruuid info serverrestartwhenreliabletopiclistenerregistered hz hzinstance dev async thread received auth from mockednodeconnection remoteendpoint localendpoint connectionid successfully authenticated principal clientprincipal uuid owneruuid owner connection true client version snapshot info serverrestartwhenreliabletopiclistenerregistered hz client event hz client members member info serverrestartwhenreliabletopiclistenerregistered hz client cluster hz client hazelcastclient snapshot is client connected info serverrestartwhenreliabletopiclistenerregistered hz client internal hz client setting mockedclientconnection localaddress super clientconnection alive true connectionid channel null remoteendpoint lastreadtime lastwritetime closedtime never connected server version snapshot as owner with principal clientprincipal uuid owneruuid info serverrestartwhenreliabletopiclistenerregistered hz client internal hz client authenticated with 
server server version snapshot local address info serverrestartwhenreliabletopiclistenerregistered hz client event hz client members member info serverrestartwhenreliabletopiclistenerregistered hz client cluster hz client hazelcastclient snapshot is client connected info serverrestartwhenreliabletopiclistenerregistered hz hzinstance dev healthmonitor processors physical memory total physical memory free swap space total swap space free heap memory used heap memory free heap memory total heap memory max heap memory used total heap memory used max minor gc count minor gc time major gc count major gc time load process load system load systemaverage thread count thread peakcount cluster timediff event q size executor q async size executor q client size executor q query size executor q scheduled size executor q io size executor q system size executor q operations size executor q priorityoperation size operations completed count executor q mapload size executor q maploadallkeys size executor q cluster size executor q response size operations running count operations pending invocations percentage operations pending invocations count proxy count clientendpoint count connection active count client connection count connection count info serverrestartwhenreliabletopiclistenerregistered hz hzinstance dev healthmonitor processors physical memory total physical memory free swap space total swap space free heap memory used heap memory free heap memory total heap memory max heap memory used total heap memory used max minor gc count minor gc time major gc count major gc time load process load system load systemaverage thread count thread peakcount cluster timediff event q size executor q async size executor q client size executor q query size executor q scheduled size executor q io size executor q system size executor q operations size executor q priorityoperation size operations completed count executor q mapload size executor q maploadallkeys size executor q cluster size 
executor q response size operations running count operations pending invocations percentage operations pending invocations count proxy count clientendpoint count connection active count client connection count connection count info serverrestartwhenreliabletopiclistenerregistered hz hzinstance dev healthmonitor processors physical memory total physical memory free swap space total swap space free heap memory used heap memory free heap memory total heap memory max heap memory used total heap memory used max minor gc count minor gc time major gc count major gc time load process load system load systemaverage thread count thread peakcount cluster timediff event q size executor q async size executor q client size executor q query size executor q scheduled size executor q io size executor q system size executor q operations size executor q priorityoperation size operations completed count executor q mapload size executor q maploadallkeys size executor q cluster size executor q response size operations running count operations pending invocations percentage operations pending invocations count proxy count clientendpoint count connection active count client connection count connection count info serverrestartwhenreliabletopiclistenerregistered hz hzinstance dev healthmonitor processors physical memory total physical memory free swap space total swap space free heap memory used heap memory free heap memory total heap memory max heap memory used total heap memory used max minor gc count minor gc time major gc count major gc time load process load system load systemaverage thread count thread peakcount cluster timediff event q size executor q async size executor q client size executor q query size executor q scheduled size executor q io size executor q system size executor q operations size executor q priorityoperation size operations completed count executor q mapload size executor q maploadallkeys size executor q cluster size executor q response size operations running 
count operations pending invocations percentage operations pending invocations count proxy count clientendpoint count connection active count client connection count connection count info serverrestartwhenreliabletopiclistenerregistered thread hz client hazelcastclient snapshot is shutting down warn serverrestartwhenreliabletopiclistenerregistered pool thread server connection closed null info serverrestartwhenreliabletopiclistenerregistered thread hz client removed connection to endpoint connection mockedclientconnection localaddress super clientconnection alive false connectionid channel null remoteendpoint lastreadtime lastwritetime closedtime connected server version snapshot info serverrestartwhenreliabletopiclistenerregistered pool thread removed connection to endpoint connection mockednodeconnection remoteendpoint localendpoint connectionid info serverrestartwhenreliabletopiclistenerregistered hz hzinstance dev event destroying clientendpoint connection mockednodeconnection remoteendpoint localendpoint connectionid principal clientprincipal uuid owneruuid ownerconnection true authenticated true clientversion snapshot creationtime latest statistics null warn serverrestartwhenreliabletopiclistenerregistered pool thread dropping incoming runnable since other end closed server closed eof mockedclientconnection localaddress super clientconnection alive false connectionid channel null remoteendpoint lastreadtime lastwritetime closedtime connected server version snapshot info serverrestartwhenreliabletopiclistenerregistered thread hz client hazelcastclient snapshot is shutdown info serverrestartwhenreliabletopiclistenerregistered thread hz client hazelcastclient snapshot is shutting down warn serverrestartwhenreliabletopiclistenerregistered pool thread server connection closed null info serverrestartwhenreliabletopiclistenerregistered pool thread removed connection to endpoint connection mockednodeconnection remoteendpoint localendpoint connectionid info 
serverrestartwhenreliabletopiclistenerregistered thread hz client removed connection to endpoint connection mockedclientconnection localaddress super clientconnection alive false connectionid channel null remoteendpoint lastreadtime lastwritetime closedtime connected server version snapshot info serverrestartwhenreliabletopiclistenerregistered hz hzinstance dev event destroying clientendpoint connection mockednodeconnection remoteendpoint localendpoint connectionid principal clientprincipal uuid owneruuid ownerconnection true authenticated true clientversion snapshot creationtime latest statistics null warn serverrestartwhenreliabletopiclistenerregistered pool thread dropping incoming runnable since other end closed server closed eof mockedclientconnection localaddress super clientconnection alive false connectionid channel null remoteendpoint lastreadtime lastwritetime closedtime connected server version snapshot info serverrestartwhenreliabletopiclistenerregistered thread hz client hazelcastclient snapshot is shutdown info serverrestartwhenreliabletopiclistenerregistered thread is shutting down info serverrestartwhenreliabletopiclistenerregistered thread node is already shutting down waiting for shutdown process to complete info serverrestartwhenreliabletopiclistenerregistered thread is shutdown info serverrestartwhenreliabletopiclistenerregistered thread is shutting down warn serverrestartwhenreliabletopiclistenerregistered thread terminating forcefully info serverrestartwhenreliabletopiclistenerregistered thread shutting down connection manager info serverrestartwhenreliabletopiclistenerregistered thread shutting down node engine info serverrestartwhenreliabletopiclistenerregistered thread destroying node nodeextension info serverrestartwhenreliabletopiclistenerregistered thread hazelcast shutdown is completed in ms info serverrestartwhenreliabletopiclistenerregistered thread is shutdown standard error thread dump for test failure countdownlatch failed to 
complete within seconds count left at serverrestartwhenreliabletopiclistenerregistered | 0 |
9,633 | 7,770,041,681 | IssuesEvent | 2018-06-04 07:26:54 | pluck-cms/pluck | https://api.github.com/repos/pluck-cms/pluck | closed | File upload vuln pluck4.7.7 | Security bug | An issue was discovered in Pluck before 4.7.7. Remote PHP code execution is possible.
Do you have an email? I'll send details to it. | True | File upload vuln pluck4.7.7 - An issue was discovered in Pluck before 4.7.7. Remote PHP code execution is possible.
Do you have an email? I'll send details to it. | non_process | file upload vuln an issue was discovered in pluck before remote php code execution is possible do you hava a email i send details to it | 0 |
22,234 | 30,784,648,204 | IssuesEvent | 2023-07-31 12:29:04 | keras-team/keras-cv | https://api.github.com/repos/keras-team/keras-cv | closed | Add augment_bounding_boxes support to RandomTranslation layer | contribution-welcome preprocessing | The augment_bounding_boxes method should be implemented for the RandomTranslation layer in keras_cv. The PR should contain the implementation, test scripts, and a demo script to verify the implementation.
Example code for implementing augment_bounding_boxes() can be found here
- https://github.com/keras-team/keras-cv/blob/master/keras_cv/layers/preprocessing/random_flip.py#:~:text=def%20augment_bounding_boxes(,)%3A
- https://github.com/keras-team/keras-cv/blob/master/keras_cv/layers/preprocessing/random_rotation.py#:~:text=def%20augment_image(self%2C%20image%2C%20transformation%2C%20**kwargs)%3A
- The implementations can be verified using the demo utils in keras_cv.bounding_box - an example demo script can be found here: https://github.com/keras-team/keras-cv/blob/master/examples/layers/preprocessing/bounding_box/random_rotation_demo.py | 1.0 | Add augment_bounding_boxes support to RandomTranslation layer - The augment_bounding_boxes method should be implemented for the RandomTranslation layer in keras_cv. The PR should contain the implementation, test scripts, and a demo script to verify the implementation.
Example code for implementing augment_bounding_boxes() can be found here
- https://github.com/keras-team/keras-cv/blob/master/keras_cv/layers/preprocessing/random_flip.py#:~:text=def%20augment_bounding_boxes(,)%3A
- https://github.com/keras-team/keras-cv/blob/master/keras_cv/layers/preprocessing/random_rotation.py#:~:text=def%20augment_image(self%2C%20image%2C%20transformation%2C%20**kwargs)%3A
- The implementations can be verified using the demo utils in keras_cv.bounding_box - an example demo script can be found here: https://github.com/keras-team/keras-cv/blob/master/examples/layers/preprocessing/bounding_box/random_rotation_demo.py | process | add augment bounding boxes support to randomtranslation layer the augment bounding boxes should be implemented for randomtranslation layer in keras cv the pr should contain implementation test scripts and a demo script to verify implementation example code for implementing augment bounding boxes can be found here the implementations can be verified using demo utils in keras cv bounding box example of demo script can be found here | 1 |
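The core of what the issue above asks for can be sketched in plain Python. This is a hypothetical illustration of the idea only, not keras_cv's actual implementation or API: boxes (assumed here to be in pixel-coordinate xyxy format) are shifted by the same (dx, dy) offset the layer applies to the image, then clipped to the image bounds.

```python
def translate_boxes(boxes, dx, dy, width, height):
    """Shift xyxy pixel-format boxes by (dx, dy), clipping to the image.

    Hypothetical sketch of the geometry an augment_bounding_boxes() for a
    translation layer must implement; keras_cv's real signature differs.
    """
    out = []
    for x1, y1, x2, y2 in boxes:
        out.append((
            min(max(x1 + dx, 0), width),   # clip left edge into [0, width]
            min(max(y1 + dy, 0), height),  # clip top edge into [0, height]
            min(max(x2 + dx, 0), width),   # clip right edge
            min(max(y2 + dy, 0), height),  # clip bottom edge
        ))
    return out

# A box shifted right by 5 and up by 5 inside a 100x100 image:
print(translate_boxes([(10, 10, 20, 20)], 5, -5, 100, 100))  # → [(15, 5, 25, 15)]
```

A box translated fully outside the image collapses to a zero-area box at the border under this clipping rule; a real layer would additionally filter such degenerate boxes.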
13,204 | 15,649,103,891 | IssuesEvent | 2021-03-23 07:01:29 | kubernetes/minikube | https://api.github.com/repos/kubernetes/minikube | closed | Upgrade csi-hostpath-driver addon to v1.6.0 | area/addons kind/process priority/backlog | VolumeSnapshot is upgraded to GA by https://github.com/kubernetes/minikube/pull/10654.
But the current csi-hostpath-driver addon is not the latest version (it's using an rc image version).
I'll send a PR to upgrade the csi-hostpath-driver addon to v1.6.0 (latest).
/area addons | 1.0 | Upgrade csi-hostpath-driver addon to v1.6.0 - VolumeSnapshot is upgraded to GA by https://github.com/kubernetes/minikube/pull/10654.
But the current csi-hostpath-driver addon is not the latest version (it's using an rc image version).
I'll send a PR to upgrade the csi-hostpath-driver addon to v1.6.0 (latest).
/area addons | process | upgrade csi hostpath driver addon to volumesnapshot is upgraded to ga by but current csi hostpath driver addon is not latest version it s using rc image version i ll send pr to upgrade csi hostpath driver addon to latest area addons | 1 |
158,929 | 20,035,850,502 | IssuesEvent | 2022-02-02 11:48:22 | kapseliboi/watch-rtp-play | https://api.github.com/repos/kapseliboi/watch-rtp-play | opened | CVE-2021-3795 (High) detected in semver-regex-3.1.2.tgz, semver-regex-1.0.0.tgz | security vulnerability | ## CVE-2021-3795 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>semver-regex-3.1.2.tgz</b>, <b>semver-regex-1.0.0.tgz</b></p></summary>
<p>
<details><summary><b>semver-regex-3.1.2.tgz</b></p></summary>
<p>Regular expression for matching semver versions</p>
<p>Library home page: <a href="https://registry.npmjs.org/semver-regex/-/semver-regex-3.1.2.tgz">https://registry.npmjs.org/semver-regex/-/semver-regex-3.1.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/find-versions/node_modules/semver-regex/package.json</p>
<p>
Dependency Hierarchy:
- semantic-release-17.4.7.tgz (Root Library)
- find-versions-4.0.0.tgz
- :x: **semver-regex-3.1.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>semver-regex-1.0.0.tgz</b></p></summary>
<p>Regular expression for matching semver versions</p>
<p>Library home page: <a href="https://registry.npmjs.org/semver-regex/-/semver-regex-1.0.0.tgz">https://registry.npmjs.org/semver-regex/-/semver-regex-1.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/semver-regex/package.json</p>
<p>
Dependency Hierarchy:
- pre-git-3.17.1.tgz (Root Library)
- validate-commit-msg-2.14.0.tgz
- :x: **semver-regex-1.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/watch-rtp-play/commit/48d53e8d914b530419c83024f41813f20c2f0636">48d53e8d914b530419c83024f41813f20c2f0636</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
semver-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3795>CVE-2021-3795</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1">https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1</a></p>
<p>Release Date: 2021-09-15</p>
<p>Fix Resolution (semver-regex): 3.1.3</p>
<p>Direct dependency fix Resolution (semantic-release): 18.0.0-beta.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-3795 (High) detected in semver-regex-3.1.2.tgz, semver-regex-1.0.0.tgz - ## CVE-2021-3795 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>semver-regex-3.1.2.tgz</b>, <b>semver-regex-1.0.0.tgz</b></p></summary>
<p>
<details><summary><b>semver-regex-3.1.2.tgz</b></p></summary>
<p>Regular expression for matching semver versions</p>
<p>Library home page: <a href="https://registry.npmjs.org/semver-regex/-/semver-regex-3.1.2.tgz">https://registry.npmjs.org/semver-regex/-/semver-regex-3.1.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/find-versions/node_modules/semver-regex/package.json</p>
<p>
Dependency Hierarchy:
- semantic-release-17.4.7.tgz (Root Library)
- find-versions-4.0.0.tgz
- :x: **semver-regex-3.1.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>semver-regex-1.0.0.tgz</b></p></summary>
<p>Regular expression for matching semver versions</p>
<p>Library home page: <a href="https://registry.npmjs.org/semver-regex/-/semver-regex-1.0.0.tgz">https://registry.npmjs.org/semver-regex/-/semver-regex-1.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/semver-regex/package.json</p>
<p>
Dependency Hierarchy:
- pre-git-3.17.1.tgz (Root Library)
- validate-commit-msg-2.14.0.tgz
- :x: **semver-regex-1.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/watch-rtp-play/commit/48d53e8d914b530419c83024f41813f20c2f0636">48d53e8d914b530419c83024f41813f20c2f0636</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
semver-regex is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3795>CVE-2021-3795</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1">https://github.com/sindresorhus/semver-regex/releases/tag/v4.0.1</a></p>
<p>Release Date: 2021-09-15</p>
<p>Fix Resolution (semver-regex): 3.1.3</p>
<p>Direct dependency fix Resolution (semantic-release): 18.0.0-beta.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in semver regex tgz semver regex tgz cve high severity vulnerability vulnerable libraries semver regex tgz semver regex tgz semver regex tgz regular expression for matching semver versions library home page a href path to dependency file package json path to vulnerable library node modules find versions node modules semver regex package json dependency hierarchy semantic release tgz root library find versions tgz x semver regex tgz vulnerable library semver regex tgz regular expression for matching semver versions library home page a href path to dependency file package json path to vulnerable library node modules semver regex package json dependency hierarchy pre git tgz root library validate commit msg tgz x semver regex tgz vulnerable library found in head commit a href found in base branch master vulnerability details semver regex is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution semver regex direct dependency fix resolution semantic release beta step up your open source security game with whitesource | 0 |
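The advisory above reduces to a version comparison: any semver-regex release earlier than the fix resolution 3.1.3 (including the whole 1.x line pulled in via pre-git) is affected. A minimal sketch of the kind of check an audit script might perform, using only the versions named in the advisory; the helper name is invented for illustration and there is no pre-release handling.

```python
def is_vulnerable(installed: str, fixed: str = "3.1.3") -> bool:
    """Compare dotted x.y.z versions numerically against the fixed release."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(installed) < parse(fixed)

print(is_vulnerable("3.1.2"))  # → True  (the version flagged above)
print(is_vulnerable("1.0.0"))  # → True  (the pre-git transitive dependency)
print(is_vulnerable("3.1.3"))  # → False (the fix resolution)
```

Tuple comparison works here because each segment is compared numerically, so "3.1.10" correctly sorts after "3.1.3", which naive string comparison would get wrong.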
18,489 | 24,550,905,465 | IssuesEvent | 2022-10-12 12:32:32 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [PM] [Angular Upgrade] My account > Change password > Password criteria > UI of password criteria message should be as per the design document | Bug P1 Participant manager Process: Fixed Process: Tested dev | My account > Change password > Password criteria > UI of password criteria message should be as per the design document
**Note:** This issue needs to be fixed wherever the password criteria message appears
**AR:**

**ER:**

| 2.0 | [PM] [Angular Upgrade] My account > Change password > Password criteria > UI of password criteria message should be as per the design document - My account > Change password > Password criteria > UI of password criteria message should be as per the design document
**Note:** This issue needs to be fixed wherever the password criteria message appears
**AR:**

**ER:**

| process | my account change password password criteria ui of password criteria message should be as per the design document my account change password password criteria ui of password criteria message should be as per the design document note issue needs to be fixed wherever password criteria message is available ar er | 1 |
160,379 | 25,155,821,050 | IssuesEvent | 2022-11-10 13:29:11 | hypha-dao/dho-web-client | https://api.github.com/repos/hypha-dao/dho-web-client | closed | Support Page: Link Button is not active (blue) when on page | Bug Design | Reproduce:
1. Go to https://dao.hypha.earth/demoxdaox/support
2. Check button colour
Expected behaviour:
The button should be active colour (blue) because I am on the page
Experienced behaviour:
The button is white (not active)
<img width="1089" alt="image" src="https://user-images.githubusercontent.com/75991832/192295654-2f9be6b1-57c5-44e1-b295-d2f3ffec39f3.png">
| 1.0 | Support Page: Link Button is not active (blue) when on page - Reproduce:
1. Go to https://dao.hypha.earth/demoxdaox/support
2. Check button colour
Expected behaviour:
The button should be the active colour (blue) because I am on the page
Experienced behaviour:
The button is white (not active)
<img width="1089" alt="image" src="https://user-images.githubusercontent.com/75991832/192295654-2f9be6b1-57c5-44e1-b295-d2f3ffec39f3.png">
| non_process | support page link button is not active blue when on page reproduce go to check button colour expected behaviour the button should be active colour blue because i am on the page experienced behaviour the button is white not active img width alt image src | 0 |
12,408 | 14,916,969,026 | IssuesEvent | 2021-01-22 19:04:19 | yuta252/startlens_frontend_user | https://api.github.com/repos/yuta252/startlens_frontend_user | closed | Introducing i18n support (react-intl) | dev process | ## Introduction
Introduce react-intl on the frontend to support internationalization.
## Changes
- Place translation files under the locales folder (en.ts, ja.ts)
- In App.tsx, create a chooseLocaleData function that reads the language setting from the Redux state and switches the language file accordingly
- Change the design of constant.ts so that it also references the language files
- On each page, use the <FormattedMessage /> component and the intl.formatMessage function to reference the message ids in the language files.
## References
- [Internationalizing a React app with react-intl](https://blog.mitsuruog.info/2016/10/using-react-intl-make-react-app-as-i18n.html)
- [Study notes on i18n (internationalization) support in React apps](https://qiita.com/shinshin86/items/39c924fbb7583948b0f0)
- [Referencing data through the intl.formatMessage function](https://stackoverflow.com/questions/39630620/react-intl-how-to-use-formattedmessage-in-input-placeholder)
- [React i18n-next](https://qiita.com/suzukalight/items/54860fdda35e6ce983d9)
## Notes
- The target languages for internationalization are Japanese, English, Korean, Simplified Chinese, and Traditional Chinese, i.e. the languages most likely to be used by foreign visitors to Japan.
- The initial plan is to implement Japanese and English only.
- When using react-intl v3, note that the addLocaleData([...ja]) function has been removed and the way intl.formatMessage is used has changed. | 1.0 | Introducing i18n support (react-intl) - ## Introduction
Introduce react-intl on the frontend to support internationalization.
## Changes
- Place translation files under the locales folder (en.ts, js.ts)
- In App.tsx, create a chooseLocaleData function that reads the language information from the redux state and switches the language file accordingly
- Change the design so that constant.ts also references the language files
- Configure each page to reference ids in the language files via the <FormattedMessage /> component and the intl.formatMessage function.
## References
- [Internationalizing a React app with react-intl](https://blog.mitsuruog.info/2016/10/using-react-intl-make-react-app-as-i18n.html)
- [Study notes on i18n (internationalization) support in React apps](https://qiita.com/shinshin86/items/39c924fbb7583948b0f0)
- [Referencing data through intl.formatMessage](https://stackoverflow.com/questions/39630620/react-intl-how-to-use-formattedmessage-in-input-placeholder)
- [React i18n-next](https://qiita.com/suzukalight/items/54860fdda35e6ce983d9)
## Notes
- The target languages are those a foreign visitor to Japan is likely to use: Japanese, English, Korean, Simplified Chinese, and Traditional Chinese.
- The plan is to implement Japanese and English first.
- When using react-intl version 3, note that the addLocaleData([...ja]) function has been removed and the way intl.formatMessage is used has changed. | process | (react intl)の導入 導入 フロントエンドにおいて、国際化対応するためにreact intlを導入する。 変更点 localesフォルダ下に翻訳ファイルを設置する(en ts js ts) app tsxにreduxのstateを参照し言語情報を取得した上で言語ファイルを切り替えるchooselocaledata関数を作成 constant tsも言語ファイルを参照するように設計を変更 各ページにて コンポーネント及びintl formatmessage関数を利用し言語ファイルのidを参照するように設定する。 参照 備考 国際化の対象としては、日本に来日する外国人が利用しうる言語である日本語、英語、韓国語、中国語(簡体字)、中国語(繁体字)を想定。 まずは、日本語と英語のみ実装する方針。 react intl 、addlocaledata の関数の廃止やintl formatmessageの利用方法に変更がある。 | 1
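The chooseLocaleData idea described in the record above (pick a message catalog based on the locale stored in state, with a fallback) reduces to a small lookup. The catalog contents and locale codes below are illustrative stand-ins, not the project's actual en.ts/js.ts files:

```python
# Message catalogs keyed by locale code. The ids and strings here are
# inline stand-ins for the issue's locale files, not the real ones.
CATALOGS = {
    "en": {"greeting": "Hello"},
    "ja": {"greeting": "こんにちは"},
}

DEFAULT_LOCALE = "en"

def choose_locale_data(locale):
    """Return the message catalog for `locale`, falling back to the default."""
    return CATALOGS.get(locale, CATALOGS[DEFAULT_LOCALE])

def format_message(locale, message_id):
    """Resolve a message id against the chosen catalog, in the spirit of
    intl.formatMessage resolving ids from the active language file."""
    catalog = choose_locale_data(locale)
    # Ids missing from a translation fall back to the default catalog,
    # and finally to the id itself so the UI never renders nothing.
    return catalog.get(message_id, CATALOGS[DEFAULT_LOCALE].get(message_id, message_id))
```

The same fallback chain is why the issue plans Japanese and English first: unsupported locales degrade to the default catalog rather than failing.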
214,112 | 16,547,352,024 | IssuesEvent | 2021-05-28 02:46:10 | octokit/webhooks.js | https://api.github.com/repos/octokit/webhooks.js | closed | docs: add reference to `@octokit/webhooks-definitions` for event payload types | documentation maintenance | I'm trying to use this package only for the `verify` and `sign` functions and want to handle the webhook events in my own web server.
Is there currently a way to use the types for the plain payload as I would be receiving it from GitHub? As I understand it, the only directly exported type is `EmitterEventMap` which mirrors the format of how the `webhooks` `handler` function receives the events, i.e. as a nested object with the event type name as the key.
I'd like to use the different payload types directly with a bit of type narrowing/guarding in my custom code but not sure if that's at all possible. Thankful for any pointers 🙂 | 1.0 | docs: add reference to `@octokit/webhooks-definitions` for event payload types - I'm trying to use this package only for the `verify` and `sign` functions and want to handle the webhook events in my own web server.
Is there currently a way to use the types for the plain payload as I would be receiving it from GitHub? As I understand it, the only directly exported type is `EmitterEventMap` which mirrors the format of how the `webhooks` `handler` function receives the events, i.e. as a nested object with the event type name as the key.
I'd like to use the different payload types directly with a bit of type narrowing/guarding in my custom code but not sure if that's at all possible. Thankful for any pointers 🙂 | non_process | docs add reference to octokit webhooks definitions for event payload types i m trying to use this package only for the verify and sign functions and want to handle the webhook events in my own web server is there currently a way to use the types for the plain payload as i would be receiving it from github as i understand it the only directly exported type is emittereventmap which mirrors the format of how the webhooks handler function receives the events i e as a nested object with the event type name as the key i d like to the different payload types directly with a bit of type narrowing guarding in my custom code but not sure if that s at all possible thankful for any pointers 🙂 | 0
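The type narrowing asked about above is, mechanically, a dispatch on the event name, with each branch entitled to assume one payload shape. This is a hedged sketch in plain Python; the event names and payload fields are illustrative, not octokit's generated types:

```python
# Minimal dispatch on a webhook event name, standing in for narrowing a
# payload union by its event type. Field names are made up for the sketch.
def handle_webhook(event_name, payload):
    if event_name == "push":
        # Inside this branch the payload is treated as a push payload.
        return f"push to {payload['ref']}"
    if event_name == "issues":
        # Here it is treated as an issues payload instead.
        return f"issues {payload['action']}"
    # Unknown events fall through rather than raising.
    return f"ignored {event_name}"
```

In a typed language the same structure lets the compiler narrow the union per branch, which is what the docs reference was meant to point readers toward.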
3,224 | 6,283,245,061 | IssuesEvent | 2017-07-19 02:33:23 | gaocegege/Processing.R | https://api.github.com/repos/gaocegege/Processing.R | closed | libraryImport video example: can't define movieEvent hook | community/processing priority/p1 size/no-idea status/claimed type/bug | I've tried using the `importLibrary()` function to create a second library example using the Processing Video library ("video"), and specifically its Loop.pde demo sketch.
I was successful -- video plays in a loop in Processing.R -- however I ran into a problem redefining the Video library's movieEvent function hook. My demo sketch works around this problem by dropping the framerate and reading the video each frame no matter what. This creates rough, choppy, low-framerate video. I wonder if there is a way to do this right.
To set up this test sketch, install the Video library in PDE, save the test sketch in Processing.R mode, then copy transit.mov from the video library example into the sketch /data folder.
```
settings <- function() {
# Please install the video before you run the example.
importLibrary("video")
size(640, 360)
}
setup <- function() {
frameRate(10) # hack -- drop the framerate to give video more time to load
# copy transit.mov from video library example into sketch /data folder
movie = Movie$new(processing, "transit.mov");
movie$loop()
}
draw <- function() {
background(0)
movie$read() # hack -- reads regardless of whether the next frame is ready or not
image(movie, 0, 0, width, height)
}
## The video library uses the movieEvent() function
## to manage when the movie object reads the next frame.
## However I'm not sure how to redefine this hook
## in R mode. For the original Java video library example, see:
## /libraries/video/examples/Movie/Loop/Loop.pde
## doesn't work
# movieEvent <- function(m) {
# m$read()
# }
## also doesn't work
# movieEvent <- function() {
# movie$read()
# }
## also doesn't work
# Movie$movieEvent <- function(m) {
# m$read()
# }
## also doesn't work
# Movie$movieEvent <- function() {
# movie$read()
# }
``` | 1.0 | libraryImport video example: can't define movieEvent hook - I've tried using the `importLibrary()` function to create a second library example using the Processing Video library ("video"), and specifically its Loop.pde demo sketch.
I was successful -- video plays in a loop in Processing.R -- however I ran into a problem redefining the Video library's movieEvent function hook. My demo sketch works around this problem by dropping the framerate and reading the video each frame no matter what. This creates rough, choppy, low-framerate video. I wonder if there is a way to do this right.
To set up this test sketch, install the Video library in PDE, save the test sketch in Processing.R mode, then copy transit.mov from the video library example into the sketch /data folder.
```
settings <- function() {
# Please install the video before you run the example.
importLibrary("video")
size(640, 360)
}
setup <- function() {
frameRate(10) # hack -- drop the framerate to give video more time to load
# copy transit.mov from video library example into sketch /data folder
movie = Movie$new(processing, "transit.mov");
movie$loop()
}
draw <- function() {
background(0)
movie$read() # hack -- reads regardless of whether the next frame is ready or not
image(movie, 0, 0, width, height)
}
## The video library uses the movieEvent() function
## to manage when the movie object reads the next frame.
## However I'm not sure how to redefine this hook
## in R mode. For the original Java video library example, see:
## /libraries/video/examples/Movie/Loop/Loop.pde
## doesn't work
# movieEvent <- function(m) {
# m$read()
# }
## also doesn't work
# movieEvent <- function() {
# movie$read()
# }
## also doesn't work
# Movie$movieEvent <- function(m) {
# m$read()
# }
## also doesn't work
# Movie$movieEvent <- function() {
# movie$read()
# }
``` | process | libraryimport video example can t define movieevent hook i ve tried using the importlibrary function to create a second library example using the processing video library video and specifically its loop pde demo sketch i was successful video plays in a loop in processing r however i ran into a problem redefining the video library s movieevent function hook my demo sketch works around this problem by dropping the framerate and reading the video each frame no matter what this creates rough choppy low framerate video i wonder if there is a way to do this right to set up this test sketch install the video library in pde save the test sketch in processing r mode then copy transit mov from the video library example into the sketch data folder settings function please install the video before you run the example importlibrary video size setup function framerate hack drop the framerate to give video more time to load copy transit mov from video library example into sketch data folder movie movie new processing transit mov movie loop draw function background movie read hack reads regardless of whether the next frame is ready or not image movie width height the video library uses the movieevent function to manage when the movie object reads the next frame however i m not sure how to redefine this hook in r mode for the original java video library example see libraries video examples movie loop loop pde doesn t work movieevent function m m read also doesn t work movieevent function movie read also doesn t work movie movieevent function m m read also doesn t work movie movieevent function movie read | 1 |
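The failure mode in the record above, a hook function that is defined but never invoked, usually comes down to whether the host runtime looks hooks up by name in the sketch's namespace. A generic sketch of that discovery pattern follows; the Runtime class and hook names are hypothetical, not Processing.R's actual dispatcher:

```python
# A toy event dispatcher that, like Processing's movieEvent(), only calls
# a user-defined hook if the hosting runtime knows to look it up by name.
class Runtime:
    def __init__(self, sketch_namespace):
        self.hooks = sketch_namespace

    def fire(self, hook_name, *args):
        hook = self.hooks.get(hook_name)
        if callable(hook):
            hook(*args)
            return True
        # No hook registered under that name: the event is silently
        # dropped, which is what an unrecognized definition looks like
        # from the sketch author's side.
        return False

frames_read = []

def movie_event(movie):
    frames_read.append(movie)

runtime = Runtime({"movie_event": movie_event})
```

Under this model, all four variants tried in the issue fail the same way: the definitions exist, but nothing in the runtime ever resolves the name and calls them.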
3,867 | 6,808,645,832 | IssuesEvent | 2017-11-04 06:08:06 | Great-Hill-Corporation/quickBlocks | https://api.github.com/repos/Great-Hill-Corporation/quickBlocks | reopened | opening the miniTransaction file takes more than a minute. Needless to say, this is not good for demos | apps-miniBlocks status-inprocess type-enhancement | From https://github.com/Great-Hill-Corporation/ethslurp/issues/117 | 1.0 | opening the miniTransaction file takes more than a minute. Needless to say, this is not good for demos - From https://github.com/Great-Hill-Corporation/ethslurp/issues/117 | process | opening the minitransaction file takes more than a minute needless to say this is not good for demos from | 1 |
244,514 | 18,760,346,626 | IssuesEvent | 2021-11-05 15:45:20 | lanl/scico | https://api.github.com/repos/lanl/scico | closed | Todo in docs Style Guide | documentation | Todo note (with reference to coding conventions) removed from Overview subsection of Style Guide section of docs in branch `brendt/docs-edits`:
Briefly explain which components are taken from each convention (see above) to avoid ambiguity in cases in which they differ. | 1.0 | Todo in docs Style Guide - Todo note (with reference to coding conventions) removed from Overview subsection of Style Guide section of docs in branch `brendt/docs-edits`:
Briefly explain which components are taken from each convention (see above) to avoid ambiguity in cases in which they differ. | non_process | todo in docs style guide todo note with reference to coding conventions removed from overview subsection of style guide section of docs in branch brendt docs edits briefly explain which components are taken from each convention see above to avoid ambiguity in cases in which they differ | 0 |
236,904 | 26,072,299,432 | IssuesEvent | 2022-12-24 01:15:15 | nexmo-community/node-passwordless-login | https://api.github.com/repos/nexmo-community/node-passwordless-login | closed | body-parser-1.18.3.tgz: 1 vulnerabilities (highest severity is: 7.5) - autoclosed | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>body-parser-1.18.3.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/qs/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/nexmo-community/node-passwordless-login/commit/0b2f3b3be174bbf9facd61f644e49627d59aea49">0b2f3b3be174bbf9facd61f644e49627d59aea49</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (body-parser version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-24999](https://www.mend.io/vulnerability-database/CVE-2022-24999) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | qs-6.5.2.tgz | Transitive | 1.19.0 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-24999</summary>
### Vulnerable Library - <b>qs-6.5.2.tgz</b></p>
<p>A querystring parser that supports nesting and arrays, with a depth limit</p>
<p>Library home page: <a href="https://registry.npmjs.org/qs/-/qs-6.5.2.tgz">https://registry.npmjs.org/qs/-/qs-6.5.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/qs/package.json</p>
<p>
Dependency Hierarchy:
- body-parser-1.18.3.tgz (Root Library)
- :x: **qs-6.5.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/nexmo-community/node-passwordless-login/commit/0b2f3b3be174bbf9facd61f644e49627d59aea49">0b2f3b3be174bbf9facd61f644e49627d59aea49</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
qs before 6.10.3, as used in Express before 4.17.3 and other products, allows attackers to cause a Node process hang for an Express application because an __ proto__ key can be used. In many typical Express use cases, an unauthenticated remote attacker can place the attack payload in the query string of the URL that is used to visit the application, such as a[__proto__]=b&a[__proto__]&a[length]=100000000. The fix was backported to qs 6.9.7, 6.8.3, 6.7.3, 6.6.1, 6.5.3, 6.4.1, 6.3.3, and 6.2.4 (and therefore Express 4.17.3, which has "deps: qs@6.9.7" in its release description, is not vulnerable).
<p>Publish Date: 2022-11-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-24999>CVE-2022-24999</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-24999">https://www.cve.org/CVERecord?id=CVE-2022-24999</a></p>
<p>Release Date: 2022-11-26</p>
<p>Fix Resolution (qs): 6.5.3</p>
<p>Direct dependency fix Resolution (body-parser): 1.19.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | True | body-parser-1.18.3.tgz: 1 vulnerabilities (highest severity is: 7.5) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>body-parser-1.18.3.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/qs/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/nexmo-community/node-passwordless-login/commit/0b2f3b3be174bbf9facd61f644e49627d59aea49">0b2f3b3be174bbf9facd61f644e49627d59aea49</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (body-parser version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-24999](https://www.mend.io/vulnerability-database/CVE-2022-24999) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | qs-6.5.2.tgz | Transitive | 1.19.0 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-24999</summary>
### Vulnerable Library - <b>qs-6.5.2.tgz</b></p>
<p>A querystring parser that supports nesting and arrays, with a depth limit</p>
<p>Library home page: <a href="https://registry.npmjs.org/qs/-/qs-6.5.2.tgz">https://registry.npmjs.org/qs/-/qs-6.5.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/qs/package.json</p>
<p>
Dependency Hierarchy:
- body-parser-1.18.3.tgz (Root Library)
- :x: **qs-6.5.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/nexmo-community/node-passwordless-login/commit/0b2f3b3be174bbf9facd61f644e49627d59aea49">0b2f3b3be174bbf9facd61f644e49627d59aea49</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
qs before 6.10.3, as used in Express before 4.17.3 and other products, allows attackers to cause a Node process hang for an Express application because an __ proto__ key can be used. In many typical Express use cases, an unauthenticated remote attacker can place the attack payload in the query string of the URL that is used to visit the application, such as a[__proto__]=b&a[__proto__]&a[length]=100000000. The fix was backported to qs 6.9.7, 6.8.3, 6.7.3, 6.6.1, 6.5.3, 6.4.1, 6.3.3, and 6.2.4 (and therefore Express 4.17.3, which has "deps: qs@6.9.7" in its release description, is not vulnerable).
<p>Publish Date: 2022-11-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-24999>CVE-2022-24999</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-24999">https://www.cve.org/CVERecord?id=CVE-2022-24999</a></p>
<p>Release Date: 2022-11-26</p>
<p>Fix Resolution (qs): 6.5.3</p>
<p>Direct dependency fix Resolution (body-parser): 1.19.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | non_process | body parser tgz vulnerabilities highest severity is autoclosed vulnerable library body parser tgz path to dependency file package json path to vulnerable library node modules qs package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in body parser version remediation available high qs tgz transitive details cve vulnerable library qs tgz a querystring parser that supports nesting and arrays with a depth limit library home page a href path to dependency file package json path to vulnerable library node modules qs package json dependency hierarchy body parser tgz root library x qs tgz vulnerable library found in head commit a href found in base branch main vulnerability details qs before as used in express before and other products allows attackers to cause a node process hang for an express application because an proto key can be used in many typical express use cases an unauthenticated remote attacker can place the attack payload in the query string of the url that is used to visit the application such as a b a a the fix was backported to qs and and therefore express which has deps qs in its release description is not vulnerable publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution qs direct dependency fix resolution body parser rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue | 0 |
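The fix the advisory above describes amounts to refusing to treat `__proto__` and friends as ordinary nesting keys when a query string is expanded into objects. A minimal sketch of that guard, assuming a much smaller grammar than qs supports, and noting that Python dicts are not actually pollutable this way; the point is the key filtering:

```python
# Keys that must never become nesting keys when expanding query strings.
FORBIDDEN_KEYS = {"__proto__", "constructor", "prototype"}

def parse_query(query):
    """Expand 'a[b]=1&c=2' into nested dicts, dropping dangerous keys."""
    result = {}
    for pair in query.split("&"):
        key_part, _, value = pair.partition("=")
        # 'a[b][c]' -> ['a', 'b', 'c']
        keys = key_part.replace("]", "").split("[")
        if any(k in FORBIDDEN_KEYS for k in keys):
            continue  # poisoned pairs are ignored, as in the patched parser
        node = result
        for key in keys[:-1]:
            node = node.setdefault(key, {})
        node[keys[-1]] = value
    return result
```

A payload shaped like the one in the advisory is simply dropped, so it can no longer influence the resulting object graph.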
351,584 | 25,032,979,480 | IssuesEvent | 2022-11-04 13:54:27 | unoplatform/uno | https://api.github.com/repos/unoplatform/uno | closed | [Documentation] Using Adaptive triggers | hacktoberfest difficulty/starter kind/documentation | ## What would you like clarification on:
Using Adaptive Triggers in Uno Platform apps.
Most documentation should match UWP, but in the case of Uno the order of Adaptive Trigger states matters (e.g. in Uno Platform the state triggers are evaluated in order and the first one matching will be applied), which should be part of the documentation.
## Concern?
- [ ] Usage in industry
- [x] Clarification of capabilities
- [ ] Getting started with Uno
- [x] Developing with Uno
- [ ] Contributing to the Uno project
- [ ] Publishing your application
- [ ] Support
- [ ] Other (please specify):
## For which Platform:
All | 1.0 | [Documentation] Using Adaptive triggers - ## What would you like clarification on:
Using Adaptive Triggers in Uno Platform apps.
Most documentation should match UWP, but in the case of Uno the order of Adaptive Trigger states matters (e.g. in Uno Platform the state triggers are evaluated in order and the first one matching will be applied), which should be part of the documentation.
## Concern?
- [ ] Usage in industry
- [x] Clarification of capabilities
- [ ] Getting started with Uno
- [x] Developing with Uno
- [ ] Contributing to the Uno project
- [ ] Publishing your application
- [ ] Support
- [ ] Other (please specify):
## For which Platform:
All | non_process | using adaptive triggers what would you like clarification on using adaptive triggers in uno platform apps most documentation should match uwp but in case of uno the order of adaptive trigger states matters e g in uno platform the state triggers are evaluated in order and the first one matching will be applied which should be part of the documentation concern usage in industry clarification of capabilities getting started with uno developing with uno contributing to the uno project publishing your application support other please specify for which platform all | 0 |
13,322 | 15,786,597,617 | IssuesEvent | 2021-04-01 17:58:52 | hasura/ask-me-anything | https://api.github.com/repos/hasura/ask-me-anything | closed | When using `hasura console` and making metadata changes, how will it determine which file to modify? | processing-for-shortvid question | Per @scriptonist "CLI is aware of the expected metadata structure and parses the raw metadata (from the server) to determine the files to edit." | 1.0 | When using `hasura console` and making metadata changes, how will it determine which file to modify? - Per @scriptonist "CLI is aware of the expected metadata structure and parses the raw metadata (from the server) to determine the files to edit." | process | when using hasura console and making metadata changes how will it determine which file to modify per scriptonist cli is aware of the expected metadata structure and parses the raw metadata from the server to determine the files to edit | 1 |
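The quoted answer above, that the CLI derives file paths from the structure of the parsed metadata rather than from a stored mapping, can be pictured as key-to-file routing. The keys and filenames here are illustrative, not Hasura's actual metadata layout:

```python
# Route each top-level metadata key to the file that owns it, the way a
# CLI that "knows the expected metadata structure" would. Hypothetical paths.
METADATA_LAYOUT = {
    "tables": "metadata/tables.yaml",
    "actions": "metadata/actions.yaml",
    "version": "metadata/version.yaml",
}

def files_to_edit(raw_metadata):
    """Return the files affected by the keys present in the raw metadata."""
    return sorted(METADATA_LAYOUT[key] for key in raw_metadata if key in METADATA_LAYOUT)
```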
13,534 | 16,066,954,985 | IssuesEvent | 2021-04-23 20:49:43 | googleapis/python-bigquery | https://api.github.com/repos/googleapis/python-bigquery | closed | add test session to nox without installing any "extras" | api: bigquery type: process | https://github.com/googleapis/python-bigquery/pull/613 is making me a bit nervous that we might accidentally introduce a required dependency that we thought was optional. It wouldn't be the first time this has happened (https://github.com/googleapis/python-bigquery/issues/549), so I'd like at least a unit test session that runs without any extras. | 1.0 | add test session to nox without installing any "extras" - https://github.com/googleapis/python-bigquery/pull/613 is making me a bit nervous that we might accidentally introduce a required dependency that we thought was optional. It wouldn't be the first time this has happened (https://github.com/googleapis/python-bigquery/issues/549), so I'd like at least a unit test session that runs without any extras. | process | add test session to nox without installing any extras is making me a bit nervous that we might accidentally introduce a required dependency that we thought was optional it wouldn t be the first time this has happened so i d like at least a unit test session that runs without any extras | 1 |
21,522 | 29,805,473,076 | IssuesEvent | 2023-06-16 11:18:54 | metabase/metabase | https://api.github.com/repos/metabase/metabase | closed | [CI] BigQuery intermittent (but frequent) test failures | Type:Bug Priority:P1 Database/BigQuery .CI & Tests .Backend flaky-test-fix .Team/QueryProcessor :hammer_and_wrench: | Example of a failed run:
https://github.com/metabase/metabase/actions/runs/4304721614/jobs/7506170453#step:3:471
This failure happens **a lot** and the key thing seems to be: `Failed to create :bigquery-cloud-sdk 'test-data' test database`
```
ERROR in metabase-enterprise.sandbox.query-processor.middleware.row-level-restrictions-test/pivot-query-test (impl.clj:140)
Uncaught exception, not in assertion.
clojure.lang.ExceptionInfo: Failed to create :bigquery-cloud-sdk 'test-data' test database: Failed to create test database: Error in sync step Sync bigquery-cloud-sdk Database 66 'test-data': Error in sync step Analyze data for bigquery-cloud-sdk Database 66 'test-data': Error in sync step fingerprint-fields: Error fingerprinting Table 229 'v3_test_data.view_WXBVWOIRAWZVTGRPPCNZ': Output of get-table does not match schema:
(not (instance? com.google.cloud.bigquery.Table nil))
database-name: "test-data"
driver: :bigquery-cloud-sdk
clojure.lang.ExceptionInfo: Failed to create test database: Error in sync step Sync bigquery-cloud-sdk Database 66 'test-data': Error in sync step Analyze data for bigquery-cloud-sdk Database 66 'test-data': Error in sync step fingerprint-fields: Error fingerprinting Table 229 'v3_test_data.view_WXBVWOIRAWZVTGRPPCNZ': Output of get-table does not match schema:
(not (instance? com.google.cloud.bigquery.Table nil))
connection-details: {:project-id nil,
:service-account-json
"***",
:dataset-id "v3_test_data",
:include-user-id-and-hash true}
database-name: "test-data"
driver: :bigquery-cloud-sdk
clojure.lang.ExceptionInfo: Error in sync step Sync bigquery-cloud-sdk Database 66 'test-data': Error in sync step Analyze data for bigquery-cloud-sdk Database 66 'test-data': Error in sync step fingerprint-fields: Error fingerprinting Table 229 'v3_test_data.view_WXBVWOIRAWZVTGRPPCNZ': Output of get-table does not match schema:
(not (instance? com.google.cloud.bigquery.Table nil))
clojure.lang.ExceptionInfo: Error in sync step Analyze data for bigquery-cloud-sdk Database 66 'test-data': Error in sync step fingerprint-fields: Error fingerprinting Table 229 'v3_test_data.view_WXBVWOIRAWZVTGRPPCNZ': Output of get-table does not match schema:
(not (instance? com.google.cloud.bigquery.Table nil))
clojure.lang.ExceptionInfo: Error in sync step fingerprint-fields: Error fingerprinting Table 229 'v3_test_data.view_WXBVWOIRAWZVTGRPPCNZ': Output of get-table does not match schema:
(not (instance? com.google.cloud.bigquery.Table nil))
clojure.lang.ExceptionInfo: Error fingerprinting Table 229 'v3_test_data.view_WXBVWOIRAWZVTGRPPCNZ': Output of get-table does not match schema:
(not (instance? com.google.cloud.bigquery.Table nil))
clojure.lang.ExceptionInfo: Output of get-table does not match schema:
(not (instance? com.google.cloud.bigquery.Table nil))
error: (not (instance? com.google.cloud.bigquery.Table nil))
schema: com.google.cloud.bigquery.Table
type: :schema.core/error
value: nil
``` | 1.0 | [CI] BigQuery intermittent (but frequent) test failures - Example of a failed run:
https://github.com/metabase/metabase/actions/runs/4304721614/jobs/7506170453#step:3:471
This failure happens **a lot** and the key thing seems to be: `Failed to create :bigquery-cloud-sdk 'test-data' test database`
```
ERROR in metabase-enterprise.sandbox.query-processor.middleware.row-level-restrictions-test/pivot-query-test (impl.clj:140)
Uncaught exception, not in assertion.
clojure.lang.ExceptionInfo: Failed to create :bigquery-cloud-sdk 'test-data' test database: Failed to create test database: Error in sync step Sync bigquery-cloud-sdk Database 66 'test-data': Error in sync step Analyze data for bigquery-cloud-sdk Database 66 'test-data': Error in sync step fingerprint-fields: Error fingerprinting Table 229 'v3_test_data.view_WXBVWOIRAWZVTGRPPCNZ': Output of get-table does not match schema:
(not (instance? com.google.cloud.bigquery.Table nil))
database-name: "test-data"
driver: :bigquery-cloud-sdk
clojure.lang.ExceptionInfo: Failed to create test database: Error in sync step Sync bigquery-cloud-sdk Database 66 'test-data': Error in sync step Analyze data for bigquery-cloud-sdk Database 66 'test-data': Error in sync step fingerprint-fields: Error fingerprinting Table 229 'v3_test_data.view_WXBVWOIRAWZVTGRPPCNZ': Output of get-table does not match schema:
(not (instance? com.google.cloud.bigquery.Table nil))
connection-details: {:project-id nil,
:service-account-json
"***",
:dataset-id "v3_test_data",
:include-user-id-and-hash true}
database-name: "test-data"
driver: :bigquery-cloud-sdk
clojure.lang.ExceptionInfo: Error in sync step Sync bigquery-cloud-sdk Database 66 'test-data': Error in sync step Analyze data for bigquery-cloud-sdk Database 66 'test-data': Error in sync step fingerprint-fields: Error fingerprinting Table 229 'v3_test_data.view_WXBVWOIRAWZVTGRPPCNZ': Output of get-table does not match schema:
(not (instance? com.google.cloud.bigquery.Table nil))
clojure.lang.ExceptionInfo: Error in sync step Analyze data for bigquery-cloud-sdk Database 66 'test-data': Error in sync step fingerprint-fields: Error fingerprinting Table 229 'v3_test_data.view_WXBVWOIRAWZVTGRPPCNZ': Output of get-table does not match schema:
(not (instance? com.google.cloud.bigquery.Table nil))
clojure.lang.ExceptionInfo: Error in sync step fingerprint-fields: Error fingerprinting Table 229 'v3_test_data.view_WXBVWOIRAWZVTGRPPCNZ': Output of get-table does not match schema:
(not (instance? com.google.cloud.bigquery.Table nil))
clojure.lang.ExceptionInfo: Error fingerprinting Table 229 'v3_test_data.view_WXBVWOIRAWZVTGRPPCNZ': Output of get-table does not match schema:
(not (instance? com.google.cloud.bigquery.Table nil))
clojure.lang.ExceptionInfo: Output of get-table does not match schema:
(not (instance? com.google.cloud.bigquery.Table nil))
error: (not (instance? com.google.cloud.bigquery.Table nil))
schema: com.google.cloud.bigquery.Table
type: :schema.core/error
value: nil
``` | process | bigquery intermittent but frequent test failures example of a failed run this failure happens a lot and the key thing seems to be failed to create bigquery cloud sdk test data test database error in metabase enterprise sandbox query processor middleware row level restrictions test pivot query test impl clj uncaught exception not in assertion clojure lang exceptioninfo failed to create bigquery cloud sdk test data test database failed to create test database error in sync step sync bigquery cloud sdk database test data error in sync step analyze data for bigquery cloud sdk database test data error in sync step fingerprint fields error fingerprinting table test data view wxbvwoirawzvtgrppcnz output of get table does not match schema not instance com google cloud bigquery table nil database name test data driver bigquery cloud sdk clojure lang exceptioninfo failed to create test database error in sync step sync bigquery cloud sdk database test data error in sync step analyze data for bigquery cloud sdk database test data error in sync step fingerprint fields error fingerprinting table test data view wxbvwoirawzvtgrppcnz output of get table does not match schema not instance com google cloud bigquery table nil connection details project id nil service account json dataset id test data include user id and hash true database name test data driver bigquery cloud sdk clojure lang exceptioninfo error in sync step sync bigquery cloud sdk database test data error in sync step analyze data for bigquery cloud sdk database test data error in sync step fingerprint fields error fingerprinting table test data view wxbvwoirawzvtgrppcnz output of get table does not match schema not instance com google cloud bigquery table nil clojure lang exceptioninfo error in sync step analyze data for bigquery cloud sdk database test data error in sync step fingerprint fields error fingerprinting table test data view wxbvwoirawzvtgrppcnz output of get table does not match schema not instance com google cloud bigquery table nil clojure lang exceptioninfo error in sync step fingerprint fields error fingerprinting table test data view wxbvwoirawzvtgrppcnz output of get table does not match schema not instance com google cloud bigquery table nil clojure lang exceptioninfo error fingerprinting table test data view wxbvwoirawzvtgrppcnz output of get table does not match schema not instance com google cloud bigquery table nil clojure lang exceptioninfo output of get table does not match schema not instance com google cloud bigquery table nil error not instance com google cloud bigquery table nil schema com google cloud bigquery table type schema core error value nil | 1
10,439 | 13,220,669,107 | IssuesEvent | 2020-08-17 12:50:17 | timberio/vector | https://api.github.com/repos/timberio/vector | closed | Implement remap arithmetic | domain: processing type: enhancement | The remap mapping syntax needs support for basic numerical arithmetic (+, -, *, /, %) as well as boolean comparison operators (>, >=, ==, !=, <, <=). This would allow expressions such as `.foo = .foo + .bar` as well as conditional expressions `.foo = .bar > 10`, which can later be used as `if` statement arguments `.foo = if .foo > 10 { .foo } else { .bar }`. | 1.0 | Implement remap arithmetic - The remap mapping syntax needs support for basic numerical arithmetic (+, -, *, /, %) as well as boolean comparison operators (>, >=, ==, !=, <, <=). This would allow expressions such as `.foo = .foo + .bar` as well as conditional expressions `.foo = .bar > 10`, which can later be used as `if` statement arguments `.foo = if .foo > 10 { .foo } else { .bar }`. | process | implement remap arithmetic the remap mapping syntax needs support for basic numerical arithmetic as well as boolean comparison operators which can later be used as if statement arguments foo if foo foo else bar | 1 |
18,326 | 24,445,469,261 | IssuesEvent | 2022-10-06 17:34:09 | bondaleksey/credit-card-fraud-detection | https://api.github.com/repos/bondaleksey/credit-card-fraud-detection | opened | Generate data | work plan data preprocessing | - Setup a Spark cluster with 3 data nodes in Yandex Cloud (YC)
- Generate a 100GB sample of simulated data
- Upload all the generated data to the cluster in the Hadoop Distributed File System (HDFS). | 1.0 | Generate data - - Setup a Spark cluster with 3 data nodes in Yandex Cloud (YC)
- Generate a 100GB sample of simulated data
- Upload all the generated data to the cluster in the Hadoop Distributed File System (HDFS). | process | generate data setup a spark cluster with data nodes in yandex cloud yc generate a sample of simulated data upload all the generated data to the cluster in the hadoop distributed file system hdfs | 1 |
149,781 | 11,914,943,915 | IssuesEvent | 2020-03-31 14:18:26 | VNG-Realisatie/gemma-zaken | https://api.github.com/repos/VNG-Realisatie/gemma-zaken | closed | Applicatie verwijderen in AC cascade niet door naar ZRC | API Test Platform bug | # Bug
Als ik een `Applicatie` verwijder via de API van het AC, dan zouden de gecachete versies van die applicatie in de andere componenten ook verwijderd moeten worden. Voor het ZRC en DRC is dit echter niet het geval, hier blijft de gecachete applicatie bestaan, terwijl hij niet meer bestaat in het AC.
| 1.0 | Applicatie verwijderen in AC cascade niet door naar ZRC - # Bug
Als ik een `Applicatie` verwijder via de API van het AC, dan zouden de gecachete versies van die applicatie in de andere componenten ook verwijderd moeten worden. Voor het ZRC en DRC is dit echter niet het geval, hier blijft de gecachete applicatie bestaan, terwijl hij niet meer bestaat in het AC.
| non_process | applicatie verwijderen in ac cascade niet door naar zrc bug als ik een applicatie verwijder via de api van het ac dan zouden de gecachete versies van die applicatie in de andere componenten ook verwijderd moeten worden voor het zrc en drc is dit echter niet het geval hier blijft de gecachete applicatie bestaan terwijl hij niet meer bestaat in het ac | 0 |
15,889 | 20,075,036,694 | IssuesEvent | 2022-02-04 11:43:35 | climatepolicyradar/navigator | https://api.github.com/repos/climatepolicyradar/navigator | opened | Order passages into natural reading order | Document processing | Text will need to be extracted in natural reading order of the document. Since the scope of the alpha will be restricted to English language documents, this means that the order will be assumed to be top to bottom / left to right. Where a document contains a multi-column layout, text should be extracted in each column separately, and concatenated in a way that retains the correct reading order.
| 1.0 | Order passages into natural reading order - Text will need to be extracted in natural reading order of the document. Since the scope of the alpha will be restricted to English language documents, this means that the order will be assumed to be top to bottom / left to right. Where a document contains a multi-column layout, text should be extracted in each column separately, and concatenated in a way that retains the correct reading order.
| process | order passages into natural reading order text will need to be extracted in natural reading order of the document since the scope of the alpha will be restricted to english language documents this means that the order will be assumed to be top to bottom left to right where a document contains a multi column layout text should be extracted in each column separately and concatenated in a way that retains the correct reading order | 1 |
10,799 | 13,609,287,116 | IssuesEvent | 2020-09-23 04:50:02 | googleapis/java-logging-logback | https://api.github.com/repos/googleapis/java-logging-logback | closed | Dependency Dashboard | api: logging type: process | This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/logging.version -->deps: update dependency com.google.cloud:google-cloud-logging to v1.102.0
- [ ] <!-- rebase-branch=renovate/com.google.cloud-libraries-bom-10.x -->chore(deps): update dependency com.google.cloud:libraries-bom to v10
- [ ] <!-- rebase-branch=renovate/major-easymock.version -->deps: update dependency org.easymock:easymock to v4
- [ ] <!-- rebase-all-open-prs -->**Check this option to rebase all the above open PRs at once**
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| 1.0 | Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/logging.version -->deps: update dependency com.google.cloud:google-cloud-logging to v1.102.0
- [ ] <!-- rebase-branch=renovate/com.google.cloud-libraries-bom-10.x -->chore(deps): update dependency com.google.cloud:libraries-bom to v10
- [ ] <!-- rebase-branch=renovate/major-easymock.version -->deps: update dependency org.easymock:easymock to v4
- [ ] <!-- rebase-all-open-prs -->**Check this option to rebase all the above open PRs at once**
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| process | dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any deps update dependency com google cloud google cloud logging to chore deps update dependency com google cloud libraries bom to deps update dependency org easymock easymock to check this option to rebase all the above open prs at once check this box to trigger a request for renovate to run again on this repository | 1 |
9,803 | 3,072,559,904 | IssuesEvent | 2015-08-19 17:32:49 | liqd/adhocracy3.mercator | https://api.github.com/repos/liqd/adhocracy3.mercator | closed | htmlhint sometimes tries to check deleted files in commit hook | tests | In the `check_code` commit hook I sometimes get the following error:
```
Warning: Unable to read "src/meinberlin/meinberlin/build/js/Packages/MeinBerlin/Burgerhaushalt/Process/Phase.html" file (Error code: ENOENT). Use --force to continue.
```
The file was created and removed but never added to git (maybe staged and unstaged again, not sure) | 1.0 | htmlhint sometimes tries to check deleted files in commit hook - In the `check_code` commit hook I sometimes get the following error:
```
Warning: Unable to read "src/meinberlin/meinberlin/build/js/Packages/MeinBerlin/Burgerhaushalt/Process/Phase.html" file (Error code: ENOENT). Use --force to continue.
```
The file was created and removed but never added to git (maybe staged and unstaged again, not sure) | non_process | htmlhint sometimes tries to check deleted files in commit hook in the check code commit hook i sometimes get the following error warning unable to read src meinberlin meinberlin build js packages meinberlin burgerhaushalt process phase html file error code enoent use force to continue the file was created and removed but never added to git maybe staged and unstaged again not sure | 0 |
744,989 | 25,964,425,455 | IssuesEvent | 2022-12-19 04:53:03 | Sunbird-cQube/community | https://api.github.com/repos/Sunbird-cQube/community | closed | Code refactoring for Emission Data Storage | Backlog Size-Medium Tech-Priority-P1 Emission Data Storage | - [ ] **01. Code Structure**
The code structure is not consistent with sunbird standards.
I expected a basic organisation of the code into ingestion, processing, storage, metrics, query etc folders.
Please review this in detail with Anand P.
_Update_
Folder sructure is organised. Require inputs and then review.
https://github.com/Sunbird-cQube/cQube_Edu/tree/cqube-4.0-alpha
- [ ] **02. Code Readability**
With respect to code, I see function names becoming the file names themselves. for ex: update_nifi_params.py etc. Readability of the code is poor and this would have a significant impact on maintainability and contribution. Code needs to be refactored as well, please plan for it post review with Anand P
_Additional Info_
- Python script https://github.com/Sunbird-cQube/cQube_Workflow/blob/release-3.7/development/datasource/python/views.py
- The Diskha data extraction mechanism is implemented through the NIFI processor.
_Update_
Code refactor is partially done, Require input and then review
https://github.com/Sunbird-cQube/cQube_Edu/tree/cqube-4.0-alpha/utils/nifi-utils
| 1.0 | Code refactoring for Emission Data Storage - - [ ] **01. Code Structure**
The code structure is not consistent with sunbird standards.
I expected a basic organisation of the code into ingestion, processing, storage, metrics, query etc folders.
Please review this in detail with Anand P.
_Update_
Folder sructure is organised. Require inputs and then review.
https://github.com/Sunbird-cQube/cQube_Edu/tree/cqube-4.0-alpha
- [ ] **02. Code Readability**
With respect to code, I see function names becoming the file names themselves. for ex: update_nifi_params.py etc. Readability of the code is poor and this would have a significant impact on maintainability and contribution. Code needs to be refactored as well, please plan for it post review with Anand P
_Additional Info_
- Python script https://github.com/Sunbird-cQube/cQube_Workflow/blob/release-3.7/development/datasource/python/views.py
- The Diskha data extraction mechanism is implemented through the NIFI processor.
_Update_
Code refactor is partially done, Require input and then review
https://github.com/Sunbird-cQube/cQube_Edu/tree/cqube-4.0-alpha/utils/nifi-utils
| non_process | code refactoring for emission data storage code structure the code structure is not consistent with sunbird standards i expected a basic organisation of the code into ingestion processing storage metrics query etc folders please review this in detail with anand p update folder sructure is organised require inputs and then review code readability with respect to code i see function names becoming the file names themselves for ex update nifi params py etc readability of the code is poor and this would have a significant impact on maintainability and contribution code needs to be refactored as well please plan for it post review with anand p additional info python script the diskha data extraction mechanism is implemented through the nifi processor update code refactor is partially done require input and then review | 0 |
9,615 | 12,553,259,626 | IssuesEvent | 2020-06-06 21:15:17 | metabase/metabase | https://api.github.com/repos/metabase/metabase | closed | Bug: Positional parameters are not supported. | Priority:P3 Querying/Native Querying/Processor Type:Bug | `?` marks in a custom query is somehow interpreted as positional parameters.
Minimal required SQL
```sql
#standardSQL
-- ? Removing this comment will resolve the error
select id
from tupac_sightings.sightings
where timestamp_seconds(timestamp) between {{start_date}} and {{end_date}}
```
Error message: `Positional parameters are not supported at [6:86]`
-------
- Your browser and the version: Chrome 63.0
- Your operating system: OS X 10
- Your databases: BigQuery
- Metabase version: v0.28.0
- Metabase hosting environment: jar
- Metabase internal database: MySQL
------- | 1.0 | Bug: Positional parameters are not supported. - `?` marks in a custom query is somehow interpreted as positional parameters.
Minimal required SQL
```sql
#standardSQL
-- ? Removing this comment will resolve the error
select id
from tupac_sightings.sightings
where timestamp_seconds(timestamp) between {{start_date}} and {{end_date}}
```
Error message: `Positional parameters are not supported at [6:86]`
-------
- Your browser and the version: Chrome 63.0
- Your operating system: OS X 10
- Your databases: BigQuery
- Metabase version: v0.28.0
- Metabase hosting environment: jar
- Metabase internal database: MySQL
------- | process | bug positional parameters are not supported marks in a custom query is somehow interpreted as positional parameters minimal required sql sql standardsql removing this comment will resolve the error select id from tupac sightings sightings where timestamp seconds timestamp between start date and end date error message positional parameters are not supported at your browser and the version chrome your operating system os x your databases bigquery metabase version metabase hosting environment jar metabase internal database mysql | 1 |
24,582 | 6,555,087,396 | IssuesEvent | 2017-09-06 08:57:32 | mozilla/addons-frontend | https://api.github.com/repos/mozilla/addons-frontend | closed | Clean up the server tests | component: code quality triaged | The server tests are starting to get clunky - it would be great if we could ditch them and move as much code to unitests as possible.
- CSP tests, we could look at passing the config to the middleware and check the output of the middleware directly (saving a whole request/response) setup via supertest.
- SRI tests, we should see if we can move them to the HTML component tests.
- The basic request/response tests will be superceded by the UI tests once live.
- It would be nice not to need to update the package.json with a new servertest run just because we added a new app.
| 1.0 | Clean up the server tests - The server tests are starting to get clunky - it would be great if we could ditch them and move as much code to unitests as possible.
- CSP tests, we could look at passing the config to the middleware and check the output of the middleware directly (saving a whole request/response) setup via supertest.
- SRI tests, we should see if we can move them to the HTML component tests.
- The basic request/response tests will be superceded by the UI tests once live.
- It would be nice not to need to update the package.json with a new servertest run just because we added a new app.
| non_process | clean up the server tests the server tests are starting to get clunky it would be great if we could ditch them and move as much code to unitests as possible csp tests we could look at passing the config to the middleware and check the output of the middleware directly saving a whole request response setup via supertest sri tests we should see if we can move them to the html component tests the basic request response tests will be superceded by the ui tests once live it would be nice not to need to update the package json with a new servertest run just because we added a new app | 0 |
18,160 | 24,194,224,191 | IssuesEvent | 2022-09-23 21:10:16 | GSA/EDX | https://api.github.com/repos/GSA/EDX | closed | Web records & DLP | process digital council collaboration DLP | ## Summary
Ensure general practices for web records management are included in DLP.
## Additional context and links
[NARA Guidance on Managing Web Records](https://www.archives.gov/records-mgmt/policy/managing-web-records-index.html)
## Checklist
List below the specific actions to be taken
- [x] Meet with Robert Smudde's team (GSA's Records Officer)
- [x] Update DLP as needed to incorporate record mgmt | 1.0 | Web records & DLP - ## Summary
Ensure general practices for web records management are included in DLP.
## Additional context and links
[NARA Guidance on Managing Web Records](https://www.archives.gov/records-mgmt/policy/managing-web-records-index.html)
## Checklist
List below the specific actions to be taken
- [x] Meet with Robert Smudde's team (GSA's Records Officer)
- [x] Update DLP as needed to incorporate record mgmt | process | web records dlp summary ensure general practices for web records management are included in dlp additional context and links checklist list below the specific actions to be taken meet with robert smudde s team gsa s records officer update dlp as needed to incorporate record mgmt | 1 |
5,345 | 8,177,764,447 | IssuesEvent | 2018-08-28 11:51:33 | allinurl/goaccess | https://api.github.com/repos/allinurl/goaccess | closed | Older data wiped | log-processing on-disk question | Sequence of events:
1. built goaccess
2. run it against current access.log (rotated daily). Static html produced and fine
3. saw it was good, so run it against the old gzipped logs. Again html was fine and older data visible
4. cron execution against current access.log: html generated and older data trashed, I see only today's data
I didn't expected older data to be deleted. Is this normal?
Also, when having two months of data, how can I scroll/zoom to see only one specific period?
Thanks | 1.0 | Older data wiped - Sequence of events:
1. built goaccess
2. run it against current access.log (rotated daily). Static html produced and fine
3. saw it was good, so run it against the old gzipped logs. Again html was fine and older data visible
4. cron execution against current access.log: html generated and older data trashed, I see only today's data
I didn't expected older data to be deleted. Is this normal?
Also, when having two months of data, how can I scroll/zoom to see only one specific period?
Thanks | process | older data wiped sequence of events built goaccess run it against current access log rotated daily static html produced and fine saw it was good so run it against the old gzipped logs again html was fine and older data visible cron execution against current access log html generated and older data trashed i see only today s data i didn t expected older data to be deleted is this normal also when having two months of data how can i scroll zoom to see only one specific period thanks | 1 |
108,235 | 23,584,168,724 | IssuesEvent | 2022-08-23 10:07:46 | Regalis11/Barotrauma | https://api.github.com/repos/Regalis11/Barotrauma | closed | Commands done in submarine test mode carries over to signleplayer without having to enable cheat | Bug Code Unstable | **Steps To Reproduce**
1. Load up sub editor test mode
2. Use commands that aren't game mode specific like debugdraw and lighting
3. Hit Esc and Main Menu
4. Load up singleplayer
5. Notice that the commands are still in use but you can't even disable them as you don't have cheats enabled
**Version**
0.18.6.0
Branch: bugfixes
| 1.0 | Commands done in submarine test mode carries over to signleplayer without having to enable cheat - **Steps To Reproduce**
1. Load up sub editor test mode
2. Use commands that aren't game mode specific like debugdraw and lighting
3. Hit Esc and Main Menu
4. Load up singleplayer
5. Notice that the commands are still in use but you can't even disable them as you don't have cheats enabled
**Version**
0.18.6.0
Branch: bugfixes
| non_process | commands done in submarine test mode carries over to signleplayer without having to enable cheat steps to reproduce load up sub editor test mode use commands that aren t game mode specific like debugdraw and lighting hit esc and main menu load up singleplayer notice that the commands are still in use but you can t even disable them as you don t have cheats enabled version branch bugfixes | 0 |
254,812 | 21,877,903,127 | IssuesEvent | 2022-05-19 11:57:12 | mennaelkashef/eShop | https://api.github.com/repos/mennaelkashef/eShop | opened | Test comment | Hello! RULE-GOT-APPLIED DOES-NOT-CONTAIN-STRING Rule-works-on-convert-to-bug test instabug ARW | # :clipboard: Bug Details
>Test comment
key | value
--|--
Reported At | 2022-05-19 11:41:21 UTC
Email | a@test.com
Categories | Report a bug
Tags | test, Hello!, RULE-GOT-APPLIED, DOES-NOT-CONTAIN-STRING, Rule-works-on-convert-to-bug, instabug, ARW
App Version | 1.1 (1)
Session Duration | 384
Device | Google sdk_gphone_x86, OS Level 30
Display | 1080x1920 (xhdpi)
Location | Cairo, Egypt (en)
## :point_right: [View Full Bug Report on Instabug](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8796?utm_source=github&utm_medium=integrations) :point_left:
___
# :iphone: View Hierarchy
This bug was reported from **com.example.app.apm.APMFragment**
Find its interactive view hierarchy with all its subviews here: :point_right: **[Check View Hierarchy](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8796?show-hierarchy-view=true&utm_source=github&utm_medium=integrations)** :point_left:
___
# :chart_with_downwards_trend: Session Profiler
Here is what the app was doing right before the bug was reported:
Key | Value
--|--
Used Memory | 51.1% - 0.99/1.93 GB
Used Storage | 18.8% - 1.09/5.81 GB
Connectivity | WiFi
Battery | 100% - unplugged
Orientation | portrait
Find all the changes that happened in the parameters mentioned above during the last 60 seconds before the bug was reported here: :point_right: **[View Full Session Profiler](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8796?show-session-profiler=true&utm_source=github&utm_medium=integrations)** :point_left:
___
# :bust_in_silhouette: User Info
### User Data
```
Testing....
```
### User Attributes
```
Address: CA
key_name 700542620: key value bla bla bla la
key_name -1679981589: key value bla bla bla la
key_name -646276569: key value bla bla bla la
key_name -1497341062: key value bla bla bla la
key_name -295404463: key value bla bla bla la
key_name 1333332619: key value bla bla bla la
key_name 1476208203: key value bla bla bla la
Age: 18
key_name -1713391803: key value bla bla bla la
key_name 907844771: key value bla bla bla la
```
___
# :mag_right: Logs
### User Steps
Here are the last 10 steps done by the user right before the bug was reported:
```
11:39:27 Tap in "androidx.appcompat.widget.AppCompatImageButton" in "com.example.app.main.MainActivity"
11:39:30 Tap in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
11:39:32 Tap in "instabug_extra_screenshot_button" of type "android.widget.ImageButton" in "com.example.app.main.MainActivity"
11:39:36 Tap in "HttpUrlConnecti..." of type "androidx.appcompat.widget.AppCompatTextView" in "com.example.app.main.MainActivity"
11:39:50 Tap in "toolbar" of type "androidx.appcompat.widget.Toolbar" in "com.example.app.main.MainActivity"
11:40:07 Tap in "toolbar" of type "androidx.appcompat.widget.Toolbar" in "com.example.app.main.MainActivity"
11:41:17 Tap in "APIs" of type "androidx.appcompat.widget.AppCompatTextView" in "com.example.app.main.MainActivity"
11:41:18 com.example.app.main.MainActivity was paused.
11:41:18 In activity com.example.app.main.MainActivity: fragment com.example.app.apm.APMFragment was paused.
11:41:21 Long press in "Enable Hot App ..." of type "android.widget.Switch" in "com.example.app.main.MainActivity"
```
Find all the user steps done by the user throughout the session here: :point_right: **[View All User Steps](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8796?show-logs=user_steps&utm_source=github&utm_medium=integrations)** :point_left:
### User Events
Here are the last 10 user events logged right before the bug was reported:
```
11:34:59 Testing user event
```
Find all the logged user events throughout the session here: :point_right: **[View All User Events](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8796?show-logs=user_events&utm_source=github&utm_medium=integrations)** :point_left:
### Console Log
Here are the last 10 console logs logged right before the bug was reported:
```
11:41:26 D/IBG-APM ( 6372): Request [POST] https://api.instabug.com/api/sdk/v3/chats/sync has succeeded.
11:41:26 D/IBG-APM ( 6372): Total duration: 286 ms
11:41:26 D/IBG-APM ( 6372): Status code: 200.
11:41:26 D/IBG-APM ( 6372): Attributes: {}
11:41:26 I/chatty ( 6372): uid=10154(com.example.app) Thread-18 identical 2 lines
11:41:26 D/IBG-APM ( 6372): Request [POST] https://api.instabug.com/api/sdk/v3/chats/sync has succeeded.
11:41:26 D/IBG-APM ( 6372): Total duration: 286 ms
11:41:26 D/IBG-APM ( 6372): Status code: 200.
11:41:26 D/IBG-APM ( 6372): Attributes: {}
11:41:27 V/FA ( 6372): Inactivity, disconnecting from the service
```
Find all the logged console logs throughout the session here: :point_right: **[View All Console Log](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8796?show-logs=console_log&utm_source=github&utm_medium=integrations)** :point_left:
___
# :warning: Looking for More Details?
1. **Network Log**: we are unable to capture your network requests automatically. If you are using HttpUrlConnection or Okhttp requests, [**check the details mentioned here**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations#section-network-logs).
2. **Instabug Log**: start adding Instabug logs to see them right inside each report you receive. [**Find all the details in the docs**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations). | 1.0 | Test comment - # :clipboard: Bug Details
>Test comment
key | value
--|--
Reported At | 2022-05-19 11:41:21 UTC
Email | a@test.com
Categories | Report a bug
Tags | test, Hello!, RULE-GOT-APPLIED, DOES-NOT-CONTAIN-STRING, Rule-works-on-convert-to-bug, instabug, ARW
App Version | 1.1 (1)
Session Duration | 384
Device | Google sdk_gphone_x86, OS Level 30
Display | 1080x1920 (xhdpi)
Location | Cairo, Egypt (en)
## :point_right: [View Full Bug Report on Instabug](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8796?utm_source=github&utm_medium=integrations) :point_left:
___
# :iphone: View Hierarchy
This bug was reported from **com.example.app.apm.APMFragment**
Find its interactive view hierarchy with all its subviews here: :point_right: **[Check View Hierarchy](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8796?show-hierarchy-view=true&utm_source=github&utm_medium=integrations)** :point_left:
___
# :chart_with_downwards_trend: Session Profiler
Here is what the app was doing right before the bug was reported:
Key | Value
--|--
Used Memory | 51.1% - 0.99/1.93 GB
Used Storage | 18.8% - 1.09/5.81 GB
Connectivity | WiFi
Battery | 100% - unplugged
Orientation | portrait
Find all the changes that happened in the parameters mentioned above during the last 60 seconds before the bug was reported here: :point_right: **[View Full Session Profiler](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8796?show-session-profiler=true&utm_source=github&utm_medium=integrations)** :point_left:
___
# :bust_in_silhouette: User Info
### User Data
```
Testing....
```
### User Attributes
```
Address: CA
key_name 700542620: key value bla bla bla la
key_name -1679981589: key value bla bla bla la
key_name -646276569: key value bla bla bla la
key_name -1497341062: key value bla bla bla la
key_name -295404463: key value bla bla bla la
key_name 1333332619: key value bla bla bla la
key_name 1476208203: key value bla bla bla la
Age: 18
key_name -1713391803: key value bla bla bla la
key_name 907844771: key value bla bla bla la
```
___
# :mag_right: Logs
### User Steps
Here are the last 10 steps done by the user right before the bug was reported:
```
11:39:27 Tap in "androidx.appcompat.widget.AppCompatImageButton" in "com.example.app.main.MainActivity"
11:39:30 Tap in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
11:39:32 Tap in "instabug_extra_screenshot_button" of type "android.widget.ImageButton" in "com.example.app.main.MainActivity"
11:39:36 Tap in "HttpUrlConnecti..." of type "androidx.appcompat.widget.AppCompatTextView" in "com.example.app.main.MainActivity"
11:39:50 Tap in "toolbar" of type "androidx.appcompat.widget.Toolbar" in "com.example.app.main.MainActivity"
11:40:07 Tap in "toolbar" of type "androidx.appcompat.widget.Toolbar" in "com.example.app.main.MainActivity"
11:41:17 Tap in "APIs" of type "androidx.appcompat.widget.AppCompatTextView" in "com.example.app.main.MainActivity"
11:41:18 com.example.app.main.MainActivity was paused.
11:41:18 In activity com.example.app.main.MainActivity: fragment com.example.app.apm.APMFragment was paused.
11:41:21 Long press in "Enable Hot App ..." of type "android.widget.Switch" in "com.example.app.main.MainActivity"
```
Find all the user steps done by the user throughout the session here: :point_right: **[View All User Steps](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8796?show-logs=user_steps&utm_source=github&utm_medium=integrations)** :point_left:
### User Events
Here are the last 10 user events logged right before the bug was reported:
```
11:34:59 Testing user event
```
Find all the logged user events throughout the session here: :point_right: **[View All User Events](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8796?show-logs=user_events&utm_source=github&utm_medium=integrations)** :point_left:
### Console Log
Here are the last 10 console logs logged right before the bug was reported:
```
11:41:26 D/IBG-APM ( 6372): Request [POST] https://api.instabug.com/api/sdk/v3/chats/sync has succeeded.
11:41:26 D/IBG-APM ( 6372): Total duration: 286 ms
11:41:26 D/IBG-APM ( 6372): Status code: 200.
11:41:26 D/IBG-APM ( 6372): Attributes: {}
11:41:26 I/chatty ( 6372): uid=10154(com.example.app) Thread-18 identical 2 lines
11:41:26 D/IBG-APM ( 6372): Request [POST] https://api.instabug.com/api/sdk/v3/chats/sync has succeeded.
11:41:26 D/IBG-APM ( 6372): Total duration: 286 ms
11:41:26 D/IBG-APM ( 6372): Status code: 200.
11:41:26 D/IBG-APM ( 6372): Attributes: {}
11:41:27 V/FA ( 6372): Inactivity, disconnecting from the service
```
Find all the logged console logs throughout the session here: :point_right: **[View All Console Log](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8796?show-logs=console_log&utm_source=github&utm_medium=integrations)** :point_left:
___
# :warning: Looking for More Details?
1. **Network Log**: we are unable to capture your network requests automatically. If you are using HttpUrlConnection or Okhttp requests, [**check the details mentioned here**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations#section-network-logs).
2. **Instabug Log**: start adding Instabug logs to see them right inside each report you receive. [**Find all the details in the docs**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations). | non_process | test comment clipboard bug details test comment key value reported at utc email a test com categories report a bug tags test hello rule got applied does not contain string rule works on convert to bug instabug arw app version session duration device google sdk gphone os level display xhdpi location cairo egypt en point right point left iphone view hierarchy this bug was reported from com example app apm apmfragment find its interactive view hierarchy with all its subviews here point right point left chart with downwards trend session profiler here is what the app was doing right before the bug was reported key value used memory gb used storage gb connectivity wifi battery unplugged orientation portrait find all the changes that happened in the parameters mentioned above during the last seconds before the bug was reported here point right point left bust in silhouette user info user data testing user attributes address ca key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la age key name key value bla bla bla la key name key value bla bla bla la mag right logs user steps here are the last steps done by the user right before the bug was reported tap in androidx appcompat widget appcompatimagebutton in com example app main mainactivity tap in androidx constraintlayout widget constraintlayout in com example app main mainactivity tap in instabug extra screenshot button of type android widget imagebutton in com example app main mainactivity tap in httpurlconnecti of type androidx appcompat widget appcompattextview in com example app main mainactivity 
tap in toolbar of type androidx appcompat widget toolbar in com example app main mainactivity tap in toolbar of type androidx appcompat widget toolbar in com example app main mainactivity tap in apis of type androidx appcompat widget appcompattextview in com example app main mainactivity com example app main mainactivity was paused in activity com example app main mainactivity fragment com example app apm apmfragment was paused long press in enable hot app of type android widget switch in com example app main mainactivity find all the user steps done by the user throughout the session here point right point left user events here are the last user events logged right before the bug was reported testing user event find all the logged user events throughout the session here point right point left console log here are the last console logs logged right before the bug was reported d ibg apm request has succeeded d ibg apm total duration ms d ibg apm status code d ibg apm attributes i chatty uid com example app thread identical lines d ibg apm request has succeeded d ibg apm total duration ms d ibg apm status code d ibg apm attributes v fa inactivity disconnecting from the service find all the logged console logs throughout the session here point right point left warning looking for more details network log we are unable to capture your network requests automatically if you are using httpurlconnection or okhttp requests instabug log start adding instabug logs to see them right inside each report you receive | 0 |
184,693 | 32,033,900,702 | IssuesEvent | 2023-09-22 14:06:42 | CDCgov/prime-reportstream | https://api.github.com/repos/CDCgov/prime-reportstream | opened | User testing new website | design experience | ## User story
As a ReportStream designer/researcher, I want to conduct usability testing of the ReportStream website's redesign so that we can incorporate user feedback into the ongoing content/design development of the website.
## Background & context
Audrey is conducting the user test starting 09/22 and I will be supporting by take notes. Each test is scheduled to be for 30 minutes.
## Open questions
_A bullet list format of any unresolved questions that will need answers in order to complete this
task_
- ...
- ...
## Working links
_Links to any Figma, gDoc, or other working document_
- ...
- ...
## Acceptance criteria
- [ ] Take notes during the test
| 1.0 | User testing new website - ## User story
As a ReportStream designer/researcher, I want to conduct usability testing of the ReportStream website's redesign so that we can incorporate user feedback into the ongoing content/design development of the website.
## Background & context
Audrey is conducting the user test starting 09/22 and I will be supporting by take notes. Each test is scheduled to be for 30 minutes.
## Open questions
_A bullet list format of any unresolved questions that will need answers in order to complete this
task_
- ...
- ...
## Working links
_Links to any Figma, gDoc, or other working document_
- ...
- ...
## Acceptance criteria
- [ ] Take notes during the test
| non_process | user testing new website user story as a reportstream designer researcher i want to conduct usability testing of the reportstream website s redesign so that we can incorporate user feedback into the ongoing content design development of the website background context audrey is conducting the user test starting and i will be supporting by take notes each test is scheduled to be for minutes open questions a bullet list format of any unresolved questions that will need answers in order to complete this task working links links to any figma gdoc or other working document acceptance criteria take notes during the test | 0 |
454,499 | 13,102,395,902 | IssuesEvent | 2020-08-04 06:36:51 | kubesphere/kubesphere | https://api.github.com/repos/kubesphere/kubesphere | closed | Kubernetes component validation is inconsistent | area/console kind/bug kind/need-to-verify priority/low | * There are fewer names and different times
* 1、UI show:

* 2、Terminal show:

* k8sv1.17.8+v3.0.0
| 1.0 | Kubernetes component validation is inconsistent - * There are fewer names and different times
* 1、UI show:

* 2、Terminal show:

* k8sv1.17.8+v3.0.0
| non_process | kubernetes component validation is inconsistent there are fewer names and different times 、ui show: 、terminal show | 0 |
5,705 | 8,564,293,638 | IssuesEvent | 2018-11-09 16:20:55 | shirou/gopsutil | https://api.github.com/repos/shirou/gopsutil | closed | It is not meet expectation when I use process.CmdlineSlice() to get redis-server cmdline in centos | os:linux package:process | It is not meet expectation when I use process.CmdlineSlice() to get redis-server cmdline in centos, as followings:
ps -ef|grep redis-server
```
root 4386 1 0 Sep26 ? 04:53:24 redis-server 192.168.0.1:6379
```
ll /proc/4386
```
lrwxrwxrwx 1 root root 0 Nov 9 19:20 exe -> /usr/bin/redis-server
```
cat /proc/4386/cmdline
```
redis-server 192.168.0.1:6379
```
but the index 0 of the slice is redis-server 192.168.0.1:6379 instead of redis-server, is there anything wrong ?
| 1.0 | It is not meet expectation when I use process.CmdlineSlice() to get redis-server cmdline in centos - It is not meet expectation when I use process.CmdlineSlice() to get redis-server cmdline in centos, as followings:
ps -ef|grep redis-server
```
root 4386 1 0 Sep26 ? 04:53:24 redis-server 192.168.0.1:6379
```
ll /proc/4386
```
lrwxrwxrwx 1 root root 0 Nov 9 19:20 exe -> /usr/bin/redis-server
```
cat /proc/4386/cmdline
```
redis-server 192.168.0.1:6379
```
but the index 0 of the slice is redis-server 192.168.0.1:6379 instead of redis-server, is there anything wrong ?
| process | it is not meet expectation when i use process cmdlineslice to get redis server cmdline in centos it is not meet expectation when i use process cmdlineslice to get redis server cmdline in centos as followings ps ef grep redis server root redis server ll proc lrwxrwxrwx root root nov exe usr bin redis server cat proc cmdline redis server but the index of the slice is redis server instead of redis server is there anything wrong | 1 |
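The gopsutil row above has a well-known Linux explanation: `/proc/<pid>/cmdline` separates arguments with NUL bytes, but daemons such as redis-server rewrite their process title into a single space-joined string, so a NUL split yields exactly one element. A minimal Python sketch of that split (the helper name is hypothetical, not gopsutil's actual implementation):

```python
def cmdline_slice(raw: bytes) -> list[str]:
    """Split a /proc/<pid>/cmdline blob into arguments.

    Ordinary processes separate argv entries with NUL bytes; processes
    that rewrite their title (e.g. redis-server) store one space-joined
    string, which is why a plain NUL split returns a single element.
    """
    return [p.decode() for p in raw.split(b"\x00") if p]

# An ordinary process: argv entries separated by NULs.
normal = b"redis-server\x00192.168.0.1:6379\x00"
# redis-server after rewriting its title: spaces, no inner NULs.
rewritten = b"redis-server 192.168.0.1:6379\x00"

print(cmdline_slice(normal))     # ['redis-server', '192.168.0.1:6379']
print(cmdline_slice(rewritten))  # ['redis-server 192.168.0.1:6379']
```

So the behavior reported in the issue is what the kernel file actually contains, not a slicing bug in the library.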
257,362 | 19,515,195,518 | IssuesEvent | 2021-12-29 09:02:03 | blockstack/docs | https://api.github.com/repos/blockstack/docs | closed | Add click to expand image in lightbox to docs site | enhancement documentation stale | Many of our more detailed diagrams display very small on the documentation site. It would be good if a user could click to expand them into a full-browser lightbox for more comfortable viewing. | 1.0 | Add click to expand image in lightbox to docs site - Many of our more detailed diagrams display very small on the documentation site. It would be good if a user could click to expand them into a full-browser lightbox for more comfortable viewing. | non_process | add click to expand image in lightbox to docs site many of our more detailed diagrams display very small on the documentation site it would be good if a user could click to expand them into a full browser lightbox for more comfortable viewing | 0 |
319,623 | 23,782,055,802 | IssuesEvent | 2022-09-02 06:24:49 | SenseNet/sensenet | https://api.github.com/repos/SenseNet/sensenet | closed | Docs site technology, build and architecture know-how | documentation | Get familiar with the technology and the possibilities, e.g. how the menu works, how do we change the structure. | 1.0 | Docs site technology, build and architecture know-how - Get familiar with the technology and the possibilities, e.g. how the menu works, how do we change the structure. | non_process | docs site technology build and architecture know how get familiar with the technology and the possibilities e g how the menu works how do we change the structure | 0 |
150,404 | 13,346,709,663 | IssuesEvent | 2020-08-29 09:31:23 | nbQA-dev/nbQA | https://api.github.com/repos/nbQA-dev/nbQA | closed | DOC note that reading from stdin won't work | bug documentation | E.g. putting a breakpoint in a test and then running `nbqa pytest` won't work | 1.0 | DOC note that reading from stdin won't work - E.g. putting a breakpoint in a test and then running `nbqa pytest` won't work | non_process | doc note that reading from stdin won t work e g putting a breakpoint in a test and then running nbqa pytest won t work | 0 |
779,403 | 27,351,580,226 | IssuesEvent | 2023-02-27 09:56:31 | sebastien-d-me/SebBlog | https://api.github.com/repos/sebastien-d-me/SebBlog | opened | Preview page of an article | Priority: Medium Statut: Not started Type : Front-end | #### Description:
Creating an article preview page.
------------
###### Estimated time: 2 day(s)
###### Difficulty: ⭐⭐
| 1.0 | Preview page of an article - #### Description:
Creating an article preview page.
------------
###### Estimated time: 2 day(s)
###### Difficulty: ⭐⭐
| non_process | preview page of an article description creating an article preview page estimated time day s difficulty ⭐⭐ | 0 |
67,154 | 3,266,812,880 | IssuesEvent | 2015-10-22 22:29:49 | YetiForceCompany/YetiForceCRM | https://api.github.com/repos/YetiForceCompany/YetiForceCRM | closed | Fe.Req:Document Preview | Label::Core Priority::#2 Normal Type::Discussion Type::Enhancement | Document to be previewed without to be download every time.
Even image crop or rotation would be great.
Yetiforce! | 1.0 | Fe.Req:Document Preview - Document to be previewed without to be download every time.
Even image crop or rotation would be great.
Yetiforce! | non_process | fe req document preview document to be previewed without to be download every time even image crop or rotation would be great yetiforce | 0 |
3,085 | 6,101,120,878 | IssuesEvent | 2017-06-20 14:02:32 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | Messages are not received via the process.on('message') event while debugging. | child_process debugger | <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: 6.11.0
Platform: 64-bit (Windows 10)
Subsystem: Process and Cluster
const cluster = require('cluster');
const process = require('process');
if (cluster.isMaster) {
console.log('@Master :', process.pid);
[1, 2].forEach(() => {
cluster.fork();
});
} else {
process.on('message', (msg) => {
console.log('## Process.on -> ', msg);
});
}
cluster.on('online', (worker) => {
console.log('Worker online: ', worker.process.pid);
worker.send('Hey Hi from: ' + worker.process.pid)
});
// Execute the above code using 'debug' ar '--debug-brk=portno' flag.
// You won't see 'Hey Hi from **' console output.
// Execute without 'debug' or '--debug-brk=portno' flag.
// You will see the 'Hey Hi from **' output in the console window.
-->
* **Version**: 6.11.0
* **Platform**: 64-bit (Windows 10)
* **Subsystem**: Process and/or Cluster
<!-- Enter your issue details below this comment. -->
I have landed into a weird problem when debugging my node.js application i.e. when running the node.js application using either of the flags i.e. 'debug' or '--debug-brk=portno'.
**Note: The problem mentioned below would never happen when running the application in normal mode.**
The problem seems to with the process's message event or with cluster's message send operation.
Below is the complete executable code:
```js
const cluster = require('cluster');
const process = require('process');
if (cluster.isMaster) {
console.log('@Master :', process.pid);
[1, 2].forEach(() => {
cluster.fork();
});
} else {
process.on('message', (msg) => {
console.log('## Process.on -> ', msg);
});
}
cluster.on('online', (worker) => {
console.log('Worker online: ', worker.process.pid);
worker.send('Hey Hi from: ' + worker.process.pid)
});
```
Based on the above code one would think that the output similar to:
```
@Master : 17648
Worker online: 17552
Worker online: 8300
## Process.on -> Hey Hi from: 17552
## Process.on -> Hey Hi from: 8300
```
should be shown. But, instead the line
```
## Process.on -> Hey Hi from: 17552
## Process.on -> Hey Hi from: 8300
```
is never outputted in the console.
Here is the source file [debugClusterTest.zip](https://github.com/nodejs/node/files/1084785/debugClusterTest.zip)
If I'm doing something wrong in this test code - please let me know.
| 1.0 | Messages are not received via the process.on('message') event while debugging. - <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: 6.11.0
Platform: 64-bit (Windows 10)
Subsystem: Process and Cluster
const cluster = require('cluster');
const process = require('process');
if (cluster.isMaster) {
console.log('@Master :', process.pid);
[1, 2].forEach(() => {
cluster.fork();
});
} else {
process.on('message', (msg) => {
console.log('## Process.on -> ', msg);
});
}
cluster.on('online', (worker) => {
console.log('Worker online: ', worker.process.pid);
worker.send('Hey Hi from: ' + worker.process.pid)
});
// Execute the above code using 'debug' ar '--debug-brk=portno' flag.
// You won't see 'Hey Hi from **' console output.
// Execute without 'debug' or '--debug-brk=portno' flag.
// You will see the 'Hey Hi from **' output in the console window.
-->
* **Version**: 6.11.0
* **Platform**: 64-bit (Windows 10)
* **Subsystem**: Process and/or Cluster
<!-- Enter your issue details below this comment. -->
I have landed into a weird problem when debugging my node.js application i.e. when running the node.js application using either of the flags i.e. 'debug' or '--debug-brk=portno'.
**Note: The problem mentioned below would never happen when running the application in normal mode.**
The problem seems to with the process's message event or with cluster's message send operation.
Below is the complete executable code:
```js
const cluster = require('cluster');
const process = require('process');
if (cluster.isMaster) {
console.log('@Master :', process.pid);
[1, 2].forEach(() => {
cluster.fork();
});
} else {
process.on('message', (msg) => {
console.log('## Process.on -> ', msg);
});
}
cluster.on('online', (worker) => {
console.log('Worker online: ', worker.process.pid);
worker.send('Hey Hi from: ' + worker.process.pid)
});
```
Based on the above code one would think that the output similar to:
```
@Master : 17648
Worker online: 17552
Worker online: 8300
## Process.on -> Hey Hi from: 17552
## Process.on -> Hey Hi from: 8300
```
should be shown. But, instead the line
```
## Process.on -> Hey Hi from: 17552
## Process.on -> Hey Hi from: 8300
```
is never outputted in the console.
Here is the source file [debugClusterTest.zip](https://github.com/nodejs/node/files/1084785/debugClusterTest.zip)
If I'm doing something wrong in this test code - please let me know.
| process | messages are not received via the process on message event while debugging thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version platform bit windows subsystem process and cluster const cluster require cluster const process require process if cluster ismaster console log master process pid foreach cluster fork else process on message msg console log process on msg cluster on online worker console log worker online worker process pid worker send hey hi from worker process pid execute the above code using debug ar debug brk portno flag you won t see hey hi from console output execute without debug or debug brk portno flag you will see the hey hi from output in the console window version platform bit windows subsystem process and or cluster i have landed into a weird problem when debugging my node js application i e when running the node js application using either of the flags i e debug or debug brk portno note the problem mentioned below would never happen when running the application in normal mode the problem seems to with the process s message event or with cluster s message send operation below is the complete executable code js const cluster require cluster const process require process if cluster ismaster console log master process pid foreach cluster fork else process on message msg console log process on msg cluster on online worker console log worker online worker process pid worker send hey hi from worker process pid based on the above code one would think that the output similar to master worker online worker online process on hey hi from process on hey hi from should be shown but instead the line process on hey hi from process on hey hi from is never outputted in the console here is the source file if i m doing something wrong in this test code please let 
me know | 1 |
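The symptom in the Node.js row above — a message sent from the master on `'online'` never reaching a worker that is paused under the debugger — is an instance of a general race: a message fired before the receiver has attached its handler can be lost unless the channel buffers it and the sender waits for an explicit ready signal. A language-neutral sketch of the safe ordering, using Python threads purely for illustration (this is not the Node cluster API):

```python
import queue
import threading

def worker(inbox: queue.Queue, ready: threading.Event, received: list) -> None:
    # Signal readiness only once the consumer side exists. Because the
    # queue buffers, any message sent after `ready` is set is delivered
    # even if get() has not started blocking yet.
    ready.set()
    received.append(inbox.get(timeout=5))

inbox: queue.Queue = queue.Queue()
ready = threading.Event()
received: list = []

t = threading.Thread(target=worker, args=(inbox, ready, received))
t.start()

ready.wait(timeout=5)              # parent waits for the handshake...
inbox.put("Hey Hi from parent")    # ...and only then sends
t.join(timeout=5)

print(received)  # ['Hey Hi from parent']
```

Applied to the issue, the equivalent workaround would be having the worker send its own "ready" message first and replying to that, rather than sending on `'online'`.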
19,509 | 25,824,519,963 | IssuesEvent | 2022-12-12 12:00:09 | digitalmethodsinitiative/4cat | https://api.github.com/repos/digitalmethodsinitiative/4cat | closed | Replace Hatebase lexicons with Davidson et al.'s lexicon | processors dependencies | https://github.com/t-davidson/hate-speech-and-offensive-language/tree/master/lexicons - it's based on Hatebase but refined through snowballing real-world social media data. And the license is MIT, whereas Hatebase is ambiguously licensed and I'm not 100% sure we can really embed it in 4CAT. | 1.0 | Replace Hatebase lexicons with Davidson et al.'s lexicon - https://github.com/t-davidson/hate-speech-and-offensive-language/tree/master/lexicons - it's based on Hatebase but refined through snowballing real-world social media data. And the license is MIT, whereas Hatebase is ambiguously licensed and I'm not 100% sure we can really embed it in 4CAT. | process | replace hatebase lexicons with davidson et al s lexicon it s based on hatebase but refined through snowballing real world social media data and the license is mit whereas hatebase is ambiguously licensed and i m not sure we can really embed it in | 1 |
19,198 | 25,328,765,011 | IssuesEvent | 2022-11-18 11:29:15 | threefoldtech/js-sdk | https://api.github.com/repos/threefoldtech/js-sdk | closed | A 3bot name of a failed deployment is not reusable | process_wontfix type_bug | ### Description
When creating a 3bot and the deployment fails, the name chosen first is not reusable.
I suggest initiating the identity first with a short time-to-live, confirmed by the 3bot itself.
| 1.0 | A 3bot name of a failed deployment is not reusable - ### Description
When creating a 3bot and the deployment fails, the name chosen first is not reusable.
I suggest initiating the identity first with a short time-to-live, confirmed by the 3bot itself.
| process | a name of a failed deployment is not reusable description when creating a and the deployment fails the name chosen first is not reusable i suggest to initiate the identity first with a short time to live and it is confirmed by the itself | 1 |
41,373 | 12,832,000,916 | IssuesEvent | 2020-07-07 06:48:14 | rvvergara/next-js-basic | https://api.github.com/repos/rvvergara/next-js-basic | closed | CVE-2015-9251 (Medium) detected in jquery-1.7.1.min.js | security vulnerability | ## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/next-js-basic/node_modules/vm-browserify/example/run/index.html</p>
<p>Path to vulnerable library: /next-js-basic/node_modules/vm-browserify/example/run/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rvvergara/next-js-basic/commit/84695a914e1eb9c6b29e2f0eb39bfe4960ad47c8">84695a914e1eb9c6b29e2f0eb39bfe4960ad47c8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2015-9251 (Medium) detected in jquery-1.7.1.min.js - ## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/next-js-basic/node_modules/vm-browserify/example/run/index.html</p>
<p>Path to vulnerable library: /next-js-basic/node_modules/vm-browserify/example/run/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rvvergara/next-js-basic/commit/84695a914e1eb9c6b29e2f0eb39bfe4960ad47c8">84695a914e1eb9c6b29e2f0eb39bfe4960ad47c8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm next js basic node modules vm browserify example run index html path to vulnerable library next js basic node modules vm browserify example run index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource | 0 |
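The 6.1 score in the CVE row above follows mechanically from its listed metrics (Attack Vector: Network, Attack Complexity: Low, Privileges Required: None, User Interaction: Required, Scope: Changed, C/I/A impacts Low/Low/None). A sketch of the CVSS v3.0 base-score equations, with numeric weights taken from the FIRST specification:

```python
import math

# CVSS v3.0 weights for the vector AV:N / AC:L / PR:N / UI:R / S:C / C:L / I:L / A:N
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.62   # Network, Low, None, Required
C, I, A = 0.22, 0.22, 0.0                 # Low, Low, None
scope_changed = True

iss = 1 - (1 - C) * (1 - I) * (1 - A)     # Impact Sub-Score
if scope_changed:
    impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
else:
    impact = 6.42 * iss
exploitability = 8.22 * AV * AC * PR * UI

if impact <= 0:
    base = 0.0
else:
    raw = (impact + exploitability) * (1.08 if scope_changed else 1.0)
    base = math.ceil(min(raw, 10) * 10) / 10   # spec: round *up* to one decimal

print(base)  # 6.1, matching the score in the report
```

Note the spec's rounding is a round-up, not round-to-nearest, which is why 6.0067 becomes 6.1 rather than 6.0.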
336,126 | 10,171,772,978 | IssuesEvent | 2019-08-08 09:09:33 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | fs/nvs: nvs_init can hang if no nvs_ate available | area: File System area: Flash bug priority: medium | **Describe the bug**
While running on a simulated flash file system, nvs_init() failed to return. Inspection with debugger shows that it stuck in an infinite loop.
**To Reproduce**
Calling nvs_init() on flash file system with no "free" sectors.
**Expected behavior**
If no free space available, return an error.
**Impact**
Potential malfunction of firmware on startup.
**Screenshots or console output**
The code is stuck in the subsys/fs/nvs/nvs.c file in the while(1) loop around line 580. If the nvs_flash_cmp_const() routine never returns a match, the loop will never break. The address being checked decrements past zero and wraps. The fix is to check that addr is greater than zero or return an error.
**Environment (please complete the following information):**
- OS: Linux - native_posix platform
- Toolchain Zephyr SDK
- Commit d8c5c9dcf12eda7e5edb86e3ce4971e750d0c7af, but code as same as current master on github.
**Additional context**
Add any other context about the problem here.
| 1.0 | fs/nvs: nvs_init can hang if no nvs_ate available - **Describe the bug**
While running on a simulated flash file system, nvs_init() failed to return. Inspection with debugger shows that it stuck in an infinite loop.
**To Reproduce**
Calling nvs_init() on flash file system with no "free" sectors.
**Expected behavior**
If no free space available, return an error.
**Impact**
Potential malfunction of firmware on startup.
**Screenshots or console output**
The code is stuck in the subsys/fs/nvs/nvs.c file in the while(1) loop around line 580. If the nvs_flash_cmp_const() routine never returns a match, the loop will never break. The address being checked decrements past zero and wraps. The fix is to check that addr is greater than zero or return an error.
**Environment (please complete the following information):**
- OS: Linux - native_posix platform
- Toolchain Zephyr SDK
- Commit d8c5c9dcf12eda7e5edb86e3ce4971e750d0c7af, but code as same as current master on github.
**Additional context**
Add any other context about the problem here.
| non_process | fs nvs nvs init can hang if no nvs ate available describe the bug while running on a simulated flash file system nvs init failed to return inspection with debugger shows that it stuck in an infinite loop to reproduce calling nvs init on flash file system with no free sectors expected behavior if no free space available return an error impact potential malfunction of firmware on startup screenshots or console output the code is stuck in the subsys fs nvs nvs c file in the while loop around line if the nvs flash cmp const routine never returns a match the loop will never break the address being checked decrements past zero and wraps the fix is to check that addr is greater than zero or return an error environment please complete the following information os linux native posix platform toolchain zephyr sdk commit but code as same as current master on github additional context add any other context about the problem here | 0 |
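The fix described in the Zephyr NVS row above — breaking a backward flash scan out of its `while(1)` loop before the address decrements past zero and wraps — can be sketched generically. This is a Python stand-in for the C code in `subsys/fs/nvs/nvs.c`; names and semantics are simplified for illustration:

```python
def scan_back(flash: list, start: int, step: int, matches) -> int:
    """Scan backwards from `start` for an entry satisfying `matches`.

    Returns the matching address, or -1 once the next step would go
    below zero -- the lower-bound check the report says the original
    while(1) loop was missing, letting `addr` wrap and spin forever.
    """
    addr = start
    while True:
        if matches(flash[addr]):
            return addr
        if addr < step:          # next decrement would underflow: give up
            return -1
        addr -= step

full = [0x00] * 8                 # fully written sector: nothing matches
print(scan_back(full, 7, 1, lambda b: b == 0xFF))   # -1 instead of hanging
fresh = [0x00] * 4 + [0xFF] * 4   # erased (0xFF) entries at the top
print(scan_back(fresh, 7, 1, lambda b: b == 0xFF))  # 7
```

The key point is that the termination condition is checked on the address itself, so a sector with no match returns an error instead of looping.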
176,963 | 6,570,695,789 | IssuesEvent | 2017-09-10 02:54:40 | Mountainview-WebDesign/lifestonechurch | https://api.github.com/repos/Mountainview-WebDesign/lifestonechurch | closed | lg page to go up ASAP | priority | @agarrharr
Let's get LG info up ASAP. I'd love to be able to link TOMORROW'S email. Sorry you're getting this info so late.
LET'S KEEP SAME FORMAT THAT YOU HAD BEFORE, NOT LIKE I HAVE IT HERE.
Also, Clements are new leaders so i'll get you their bio soon. THEIR PIC IS ATTACHED.

NEW GRAPHIC ATTACHED

Connect to God's people and God's Word in LifeGroups.
SIGN UP BY SHOWING UP!
Sundays Beginning 9/10
Clement Group
6:30pm @ Pepper Residence
5468 W. Sierra Rose Drive, Herriman
Coleman Group
6:30pm @ Coleman Residence
3281 W 12075 S Riverton
Wednesdays Beginning 9/13
Smith Group
6:30pm @ Blevins Residence
5392 Venetia St. Herriman
Helton Group
6:30pm @ Wise Residence
4982 Badger Lane, Riverton 84096
| 1.0 | lg page to go up ASAP - @agarrharr
Let's get LG info up ASAP. I'd love to be able to link TOMORROW'S email. Sorry you're getting this info so late.
LET'S KEEP SAME FORMAT THAT YOU HAD BEFORE, NOT LIKE I HAVE IT HERE.
Also, Clements are new leaders so i'll get you their bio soon. THEIR PIC IS ATTACHED.

NEW GRAPHIC ATTACHED

Connect to God's people and God's Word in LifeGroups.
SIGN UP BY SHOWING UP!
Sundays Beginning 9/10
Clement Group
6:30pm @ Pepper Residence
5468 W. Sierra Rose Drive, Herriman
Coleman Group
6:30pm @ Coleman Residence
3281 W 12075 S Riverton
Wednesdays Beginning 9/13
Smith Group
6:30pm @ Blevins Residence
5392 Venetia St. Herriman
Helton Group
6:30pm @ Wise Residence
4982 Badger Lane, Riverton 84096
| non_process | lg page to go up asap agarrharr let s get lg info up asap i d love to be able to link tomorrow s email sorry you re getting this info so late let s keep same format that you had before not like i have it here also clements are new leaders so i ll get you their bio soon their pic is attached new graphic attached connect to god s people and god s word in lifegroups sign up by showing up sundays beginning clement group pepper residence w sierra rose drive herriman coleman group coleman residence w s riverton wednesdays beginning smith group blevins residence venetia st herriman helton group wise residence badger lane riverton | 0 |
156,605 | 19,901,883,139 | IssuesEvent | 2022-01-25 08:51:30 | kedacore/external-scaler-azure-cosmos-db | https://api.github.com/repos/kedacore/external-scaler-azure-cosmos-db | opened | CVE-2018-8292 (High) detected in system.net.http.4.3.0.nupkg | security vulnerability | ## CVE-2018-8292 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>system.net.http.4.3.0.nupkg</b></p></summary>
<p>Provides a programming interface for modern HTTP applications, including HTTP client components that...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.net.http.4.3.0.nupkg">https://api.nuget.org/packages/system.net.http.4.3.0.nupkg</a></p>
<p>Path to dependency file: /src/Scaler.Tests/Keda.CosmosDb.Scaler.Tests.csproj</p>
<p>Path to vulnerable library: /usr/share/dotnet/sdk/NuGetFallbackFolder/system.net.http/4.3.0/system.net.http.4.3.0.nupkg</p>
<p>
Dependency Hierarchy:
- xunit.2.4.1.nupkg (Root Library)
- xunit.assert.2.4.1.nupkg
- netstandard.library.1.6.1.nupkg
- :x: **system.net.http.4.3.0.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kedacore/external-scaler-azure-cosmos-db/commit/9a3e22a3002c867374c05b64c2992a68bb75f49e">9a3e22a3002c867374c05b64c2992a68bb75f49e</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An information disclosure vulnerability exists in .NET Core when authentication information is inadvertently exposed in a redirect, aka ".NET Core Information Disclosure Vulnerability." This affects .NET Core 2.1, .NET Core 1.0, .NET Core 1.1, PowerShell Core 6.0.
<p>Publish Date: 2018-10-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-8292>CVE-2018-8292</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/dotnet/announcements/issues/88">https://github.com/dotnet/announcements/issues/88</a></p>
<p>Release Date: 2018-10-10</p>
<p>Fix Resolution: System.Net.Http - 4.3.4;Microsoft.PowerShell.Commands.Utility - 6.1.0-rc.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-8292 (High) detected in system.net.http.4.3.0.nupkg - ## CVE-2018-8292 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>system.net.http.4.3.0.nupkg</b></p></summary>
<p>Provides a programming interface for modern HTTP applications, including HTTP client components that...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.net.http.4.3.0.nupkg">https://api.nuget.org/packages/system.net.http.4.3.0.nupkg</a></p>
<p>Path to dependency file: /src/Scaler.Tests/Keda.CosmosDb.Scaler.Tests.csproj</p>
<p>Path to vulnerable library: /usr/share/dotnet/sdk/NuGetFallbackFolder/system.net.http/4.3.0/system.net.http.4.3.0.nupkg</p>
<p>
Dependency Hierarchy:
- xunit.2.4.1.nupkg (Root Library)
- xunit.assert.2.4.1.nupkg
- netstandard.library.1.6.1.nupkg
- :x: **system.net.http.4.3.0.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kedacore/external-scaler-azure-cosmos-db/commit/9a3e22a3002c867374c05b64c2992a68bb75f49e">9a3e22a3002c867374c05b64c2992a68bb75f49e</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An information disclosure vulnerability exists in .NET Core when authentication information is inadvertently exposed in a redirect, aka ".NET Core Information Disclosure Vulnerability." This affects .NET Core 2.1, .NET Core 1.0, .NET Core 1.1, PowerShell Core 6.0.
<p>Publish Date: 2018-10-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-8292>CVE-2018-8292</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/dotnet/announcements/issues/88">https://github.com/dotnet/announcements/issues/88</a></p>
<p>Release Date: 2018-10-10</p>
<p>Fix Resolution: System.Net.Http - 4.3.4;Microsoft.PowerShell.Commands.Utility - 6.1.0-rc.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in system net http nupkg cve high severity vulnerability vulnerable library system net http nupkg provides a programming interface for modern http applications including http client components that library home page a href path to dependency file src scaler tests keda cosmosdb scaler tests csproj path to vulnerable library usr share dotnet sdk nugetfallbackfolder system net http system net http nupkg dependency hierarchy xunit nupkg root library xunit assert nupkg netstandard library nupkg x system net http nupkg vulnerable library found in head commit a href found in base branch main vulnerability details an information disclosure vulnerability exists in net core when authentication information is inadvertently exposed in a redirect aka net core information disclosure vulnerability this affects net core net core net core powershell core publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution system net http microsoft powershell commands utility rc step up your open source security game with whitesource | 0 |
14,203 | 17,101,215,533 | IssuesEvent | 2021-07-09 11:31:47 | Joystream/hydra | https://api.github.com/repos/Joystream/hydra | closed | Escape `0x00` in mappings | hydra-processor medium-prio-feature |
I was also wondering about this 0x00 byte problem (perhaps you noticed it on query-node channel). Is there an easy way for hydra to hook into String fields setters or perhaps preInsert / preUpdate and remove all \u0000 characters? We can also perpare all strings before saving them on the mappings side, but it's easy to overlook it in some cases. And I suspect this may also become a problem for other Hydra users
There is even a related issue on stackoverflow: https://stackoverflow.com/questions/1347646/postgres-error-on-insert-error-invalid-byte-sequence-for-encoding-utf8-0x0
| 1.0 | Escape `0x00` in mappings -
I was also wondering about this 0x00 byte problem (perhaps you noticed it on query-node channel). Is there an easy way for hydra to hook into String fields setters or perhaps preInsert / preUpdate and remove all \u0000 characters? We can also perpare all strings before saving them on the mappings side, but it's easy to overlook it in some cases. And I suspect this may also become a problem for other Hydra users
There is even a related issue on stackoverflow: https://stackoverflow.com/questions/1347646/postgres-error-on-insert-error-invalid-byte-sequence-for-encoding-utf8-0x0
| process | escape in mappings i was also wondering about this byte problem perhaps you noticed it on query node channel is there an easy way for hydra to hook into string fields setters or perhaps preinsert preupdate and remove all characters we can also perpare all strings before saving them on the mappings side but it s easy to overlook it in some cases and i suspect this may also become a problem for other hydra users there is even a related issue on stackoverflow | 1 |
618,080 | 19,424,067,364 | IssuesEvent | 2021-12-21 01:31:26 | justalemon/LemonUI | https://api.github.com/repos/justalemon/LemonUI | closed | RageMP Support | type: feature request status: acknowledged priority: p2 medium | Should be fairly simple to implement; Would fork and implement it myself if I had the time... Mainly recommending this due to RageMP's poor extendibility in the NativeUI Department (Think it actually uses the NativeUI implementation to begin with).
RAGE.Game.Invoker.Invoke<T>( Hash , params[] );
RAGE.Game.Invoker.GetReturn<T>()
RAGE.Game.Alignment
```
public enum Alignment
{
Center,
Left,
Right,
}
```
RAGE.Game.Font
```
public enum Font
{
ChaletLondon = 0,
HouseScript = 1,
Monospace = 2,
ChaletComprimeCologne = 4,
Pricedown = 7,
}
``` | 1.0 | RageMP Support - Should be fairly simple to implement; Would fork and implement it myself if I had the time... Mainly recommending this due to RageMP's poor extendibility in the NativeUI Department (Think it actually uses the NativeUI implementation to begin with).
RAGE.Game.Invoker.Invoke<T>( Hash , params[] );
RAGE.Game.Invoker.GetReturn<T>()
RAGE.Game.Alignment
```
public enum Alignment
{
Center,
Left,
Right,
}
```
RAGE.Game.Font
```
public enum Font
{
ChaletLondon = 0,
HouseScript = 1,
Monospace = 2,
ChaletComprimeCologne = 4,
Pricedown = 7,
}
``` | non_process | ragemp support should be fairly simple to implement would fork and implement it myself if i had the time mainly recommending this due to ragemp s poor extendibility in the nativeui department think it actually uses the nativeui implementation to begin with rage game invoker invoke hash params rage game invoker getreturn rage game alignment public enum alignment center left right rage game font public enum font chaletlondon housescript monospace chaletcomprimecologne pricedown | 0 |
186,649 | 14,402,840,773 | IssuesEvent | 2020-12-03 15:21:53 | rice-solar-physics/pydrad | https://api.github.com/repos/rice-solar-physics/pydrad | closed | Improve test coverage | test | Should add some very basic unit tests that are run on every PR and merge just to make sure we aren't merging broken code. | 1.0 | Improve test coverage - Should add some very basic unit tests that are run on every PR and merge just to make sure we aren't merging broken code. | non_process | improve test coverage should add some very basic unit tests that are run on every pr and merge just to make sure we aren t merging broken code | 0 |
7,419 | 10,542,780,542 | IssuesEvent | 2019-10-02 13:50:14 | elastic/beats | https://api.github.com/repos/elastic/beats | closed | Javascript Processor panic | :Processors bug | * Winlogbeat: 7.3.0
The javascript processors panics with a nil pointer exception. This seems to happen if the data the processor is looking for does not exists or is in the wrong format.
```
runtime error: invalid memory address or nil pointer dereference [recovered]
panic: Panic at 0: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: Panic at 0: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x0 pc=0xfe3c8c]
goroutine 615 [running]:
github.com/elastic/beats/vendor/github.com/dop251/goja.AssertFunction.func1.1(0xc000a39480)
panic(0x1353e60, 0x11fb1d0) /usr/local/go/src/runtime/panic.go:522 +0x1c3
github.com/elastic/beats/vendor/github.com/dop251/goja.(*vm).run(0xc00010b040)
github.com/elastic/beats/vendor/github.com/dop251/goja.(*funcObject).Call(0xc00019e180, 0x16e7f20, 0x21a6b00, 0xc0000e9010, 0x1, 0x1, 0x1464c01, 0xc000087200)
github.com/elastic/beats/vendor/github.com/dop251/goja.AssertFunction.func1.2()
github.com/elastic/beats/vendor/github.com/dop251/goja.(*vm).try(0xc00010b040, 0xc000a393f8, 0x0)
github.com/elastic/beats/vendor/github.com/dop251/goja.AssertFunction.func1(0x16e7f20, 0x21a6b00, 0xc0000e9010, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
github.com/elastic/beats/libbeat/processors/script/javascript.(*session).runProcessFunc(0xc000199090, 0xc0009ad340, 0x0, 0x0, 0x0)
github.com/elastic/beats/libbeat/processors/script/javascript.(*jsProcessor).Run(0xc0001e8fc0, 0xc0009ad340, 0x14e9e46, 0xc0007057d0, 0x0)
github.com/elastic/beats/libbeat/publisher/processing.(*group).Run(0xc000233530, 0xc0009ad340, 0xc0009ad340, 0x0, 0x0)
github.com/elastic/beats/libbeat/publisher/processing.(*group).Run(0xc000730030, 0xc0009ad340, 0x18, 0xc000b9e640, 0x8)
github.com/elastic/beats/libbeat/publisher/pipeline.(*client).publish(0xc0005821e0, 0xebf2f44, 0xed50bf3d8, 0x0, 0x0, 0xc000705440, 0x141d260, 0xc0009ad300, 0x0)
github.com/elastic/beats/libbeat/publisher/pipeline.(*client).Publish(0xc0005821e0, 0xebf2f44, 0xed50bf3d8, 0x0, 0x0, 0xc000705440, 0x141d260, 0xc0009ad300, 0x0)
github.com/elastic/beats/winlogbeat/beater.(*eventLogger).run(0xc000568190, 0xc000110120, 0x16beac0, 0xc0001da280, 0xc00042e210, 0xf, 0x7267ade, 0x36fdc240, 0xed502a632, 0x0, ...)
github.com/elastic/beats/winlogbeat/beater.(*Winlogbeat).processEventLog(0xc0001e0000, 0xc00042e5f0, 0xc000568190, 0xc00042e210, 0xf, 0x7267ade, 0x36fdc240, 0xed502a632, 0x0, 0xc0001ca2a0, ... )
created by github.com/elastic/beats/winlogbeat/beater.(*Winlogbeat).Run
``` | 1.0 | Javascript Processor panic - * Winlogbeat: 7.3.0
The javascript processors panics with a nil pointer exception. This seems to happen if the data the processor is looking for does not exists or is in the wrong format.
```
runtime error: invalid memory address or nil pointer dereference [recovered]
panic: Panic at 0: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: Panic at 0: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x0 pc=0xfe3c8c]
goroutine 615 [running]:
github.com/elastic/beats/vendor/github.com/dop251/goja.AssertFunction.func1.1(0xc000a39480)
panic(0x1353e60, 0x11fb1d0) /usr/local/go/src/runtime/panic.go:522 +0x1c3
github.com/elastic/beats/vendor/github.com/dop251/goja.(*vm).run(0xc00010b040)
github.com/elastic/beats/vendor/github.com/dop251/goja.(*funcObject).Call(0xc00019e180, 0x16e7f20, 0x21a6b00, 0xc0000e9010, 0x1, 0x1, 0x1464c01, 0xc000087200)
github.com/elastic/beats/vendor/github.com/dop251/goja.AssertFunction.func1.2()
github.com/elastic/beats/vendor/github.com/dop251/goja.(*vm).try(0xc00010b040, 0xc000a393f8, 0x0)
github.com/elastic/beats/vendor/github.com/dop251/goja.AssertFunction.func1(0x16e7f20, 0x21a6b00, 0xc0000e9010, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
github.com/elastic/beats/libbeat/processors/script/javascript.(*session).runProcessFunc(0xc000199090, 0xc0009ad340, 0x0, 0x0, 0x0)
github.com/elastic/beats/libbeat/processors/script/javascript.(*jsProcessor).Run(0xc0001e8fc0, 0xc0009ad340, 0x14e9e46, 0xc0007057d0, 0x0)
github.com/elastic/beats/libbeat/publisher/processing.(*group).Run(0xc000233530, 0xc0009ad340, 0xc0009ad340, 0x0, 0x0)
github.com/elastic/beats/libbeat/publisher/processing.(*group).Run(0xc000730030, 0xc0009ad340, 0x18, 0xc000b9e640, 0x8)
github.com/elastic/beats/libbeat/publisher/pipeline.(*client).publish(0xc0005821e0, 0xebf2f44, 0xed50bf3d8, 0x0, 0x0, 0xc000705440, 0x141d260, 0xc0009ad300, 0x0)
github.com/elastic/beats/libbeat/publisher/pipeline.(*client).Publish(0xc0005821e0, 0xebf2f44, 0xed50bf3d8, 0x0, 0x0, 0xc000705440, 0x141d260, 0xc0009ad300, 0x0)
github.com/elastic/beats/winlogbeat/beater.(*eventLogger).run(0xc000568190, 0xc000110120, 0x16beac0, 0xc0001da280, 0xc00042e210, 0xf, 0x7267ade, 0x36fdc240, 0xed502a632, 0x0, ...)
github.com/elastic/beats/winlogbeat/beater.(*Winlogbeat).processEventLog(0xc0001e0000, 0xc00042e5f0, 0xc000568190, 0xc00042e210, 0xf, 0x7267ade, 0x36fdc240, 0xed502a632, 0x0, 0xc0001ca2a0, ... )
created by github.com/elastic/beats/winlogbeat/beater.(*Winlogbeat).Run
``` | process | javascript processor panic winlogbeat the javascript processors panics with a nil pointer exception this seems to happen if the data the processor is looking for does not exists or is in the wrong format runtime error invalid memory address or nil pointer dereference panic panic at runtime error invalid memory address or nil pointer dereference panic panic at runtime error invalid memory address or nil pointer dereference goroutine github com elastic beats vendor github com goja assertfunction panic usr local go src runtime panic go github com elastic beats vendor github com goja vm run github com elastic beats vendor github com goja funcobject call github com elastic beats vendor github com goja assertfunction github com elastic beats vendor github com goja vm try github com elastic beats vendor github com goja assertfunction github com elastic beats libbeat processors script javascript session runprocessfunc github com elastic beats libbeat processors script javascript jsprocessor run github com elastic beats libbeat publisher processing group run github com elastic beats libbeat publisher processing group run github com elastic beats libbeat publisher pipeline client publish github com elastic beats libbeat publisher pipeline client publish github com elastic beats winlogbeat beater eventlogger run github com elastic beats winlogbeat beater winlogbeat processeventlog created by github com elastic beats winlogbeat beater winlogbeat run | 1 |
278,875 | 8,651,471,687 | IssuesEvent | 2018-11-27 03:19:01 | majorazero/Agility | https://api.github.com/repos/majorazero/Agility | closed | UX/UI Submitting invalid submissions should prompt the user something instead of being irresponsive | High Priority | Making a task of invalid date seems to trigger this, but current date also does the same thing. | 1.0 | UX/UI Submitting invalid submissions should prompt the user something instead of being irresponsive - Making a task of invalid date seems to trigger this, but current date also does the same thing. | non_process | ux ui submitting invalid submissions should prompt the user something instead of being irresponsive making a task of invalid date seems to trigger this but current date also does the same thing | 0 |
8,533 | 11,705,728,863 | IssuesEvent | 2020-03-07 17:38:29 | Ghost-chu/QuickShop-Reremake | https://api.github.com/repos/Ghost-chu/QuickShop-Reremake | closed | [BUG] Server - Crashing // MYSQL | In Process Performance Issue Waiting For Reply | **Describe the bug**
Server gets hung up attempting to... From what I can assume, pull/place data in the mysql db.
**To Reproduce**
Steps to reproduce the behavior:
1. Have mysql enabled? I'd imagine that's all it takes.
**Expected behavior**
The plugin to access the mysql db, clean without causing the server to crash.
**Paste link:**
Execute command /qs paste, you will get a link contains your server information, paste it under this text.
You must create a paste, except plugin completely won't work.
If you create failed, you should find a paste file under the plugin/QuickShop folder.
- https://paste.enginehub.org/6tkKSrJM
**Additional context**
Here is the pastbin from our crash log - https://pastebin.com/3FeYTdS5
Any help is appreciated. I know that this bug report isn't as organized, and doesn't provide as much information as others have. Looking for any advice on how to navigate this issue.
Thanks!
| 1.0 | [BUG] Server - Crashing // MYSQL - **Describe the bug**
Server gets hung up attempting to... From what I can assume, pull/place data in the mysql db.
**To Reproduce**
Steps to reproduce the behavior:
1. Have mysql enabled? I'd imagine that's all it takes.
**Expected behavior**
The plugin to access the mysql db, clean without causing the server to crash.
**Paste link:**
Execute command /qs paste, you will get a link contains your server information, paste it under this text.
You must create a paste, except plugin completely won't work.
If you create failed, you should find a paste file under the plugin/QuickShop folder.
- https://paste.enginehub.org/6tkKSrJM
**Additional context**
Here is the pastbin from our crash log - https://pastebin.com/3FeYTdS5
Any help is appreciated. I know that this bug report isn't as organized, and doesn't provide as much information as others have. Looking for any advice on how to navigate this issue.
Thanks!
| process | server crashing mysql describe the bug server gets hung up attempting to from what i can assume pull place data in the mysql db to reproduce steps to reproduce the behavior have mysql enabled i d imagine that s all it takes expected behavior the plugin to access the mysql db clean without causing the server to crash paste link execute command qs paste you will get a link contains your server information paste it under this text you must create a paste except plugin completely won t work if you create failed you should find a paste file under the plugin quickshop folder additional context here is the pastbin from our crash log any help is appreciated i know that this bug report isn t as organized and doesn t provide as much information as others have looking for any advice on how to navigate this issue thanks | 1 |
62,094 | 15,162,163,155 | IssuesEvent | 2021-02-12 10:11:24 | spatial-model-editor/spatial-model-editor | https://api.github.com/repos/spatial-model-editor/spatial-model-editor | closed | migrate from QCustomPlot to qwt | GUI build system | QCustomPlot is a nicer library but the GPL license is an issue: should replace with LGPL-licensed qwt library | 1.0 | migrate from QCustomPlot to qwt - QCustomPlot is a nicer library but the GPL license is an issue: should replace with LGPL-licensed qwt library | non_process | migrate from qcustomplot to qwt qcustomplot is a nicer library but the gpl license is an issue should replace with lgpl licensed qwt library | 0 |
11,010 | 13,795,555,297 | IssuesEvent | 2020-10-09 18:13:32 | unicode-org/icu4x | https://api.github.com/repos/unicode-org/icu4x | closed | Multi-layered directory structure | C-process T-task | In #18 we decided to make a top-level `/components` directory. I like this, but I'm also thinking that it might be good to have an extra layer of abstraction. We've come across several types of components so far:
1. Core i18n components: Locale, PluralRules, NumberFormat, etc.
2. Non-i18n utilities: FixedDecimal, Writeable, etc.
3. Data-related code: DataProvider, CldrJsonDataProvider, etc.
4. Data dump (JSON resources)
How should we structure these in the repository?
FYI, ICU4C is split into four categories:
1. common (UnicodeString, UnicodeSet, etc.)
2. i18n (NumberFormat, PluralRules, etc.)
3. io (u_sprintf, etc.)
4. layoutex (deprecated layout engine)
Here's one possible layout, with an emphasis on structure:
- `/`
- `components/`
- `locales/`
- `language-info/` (crate)
- `language-matcher/` (crate)
- `locale/` (crate)
- `numbers/`
- `fixed-decimal/` (crate)
- `number-format/` (crate)
- `unicode/`
- `unicode-set/` (crate)
- `unicode-props/` (crate)
- `udata/`
- `cldr-json-data-provider/` (crate)
- `data-provider/` (crate)
- `fs-data-provider/` (crate)
- `fs-data-exporter/` (crate)
- `strings/`
- `writeable/` (crate)
- `resources/`
- `json/` (root of JSON data directory, latest version)
Here's another possible layout, with an emphasis on flatness:
- `/`
- `components/`
- `language-info/` (crate)
- `language-matcher/` (crate)
- `locale/` (crate)
- `number-format/` (crate)
- `unicode-props/` (crate)
- `udata/`
- `cldr-json-data-provider/` (crate)
- `data-provider/` (crate)
- `fs-data-provider/` (crate)
- `fs-data-exporter/` (crate)
- `json-data/` (root of JSON data directory, latest version)
- `utils/`
- `writeable/` (crate)
- `fixed-decimal/` (crate)
- `unicode-set/` (crate)
Another, simpler layout:
- `/`
- `components/`
- `language-info/` (crate)
- `language-matcher/` (crate)
- `locale/` (crate)
- `number-format/` (crate)
- `unicode-props/` (crate)
- `cldr-json-data-provider/` (crate)
- `data-provider/` (crate)
- `fs-data-provider/` (crate)
- `fs-data-exporter/` (crate)
- `writeable/` (crate)
- `fixed-decimal/` (crate)
- `unicode-set/` (crate)
- `resources/`
- `json-data/` (root of JSON data directory, latest version)
Option 4:
- `/`
- `components/`
- `language-info/` (crate)
- `language-matcher/` (crate)
- `locale/` (crate)
- `number-format/` (crate)
- `unicode-props/` (crate)
- `cldr-json-data-provider/` (crate)
- `data-provider/` (crate)
- `fs-data-provider/` (crate)
- `fs-data-exporter/` (crate)
- `utils/`
- `writeable/` (crate)
- `fixed-decimal/` (crate)
- `unicode-set/` (crate)
- `resources/`
- `json-data/` (root of JSON data directory, latest version)
Option 5:
- `/`
- `components/`
- `language-info/` (crate)
- `language-matcher/` (crate)
- `locale/` (crate)
- `number-format/` (crate)
- `unicode-props/` (crate)
- `udata/`
- `cldr-json-data-provider/` (crate)
- `data-provider/` (crate)
- `fs-data-provider/` (crate)
- `fs-data-exporter/` (crate)
- `utils/`
- `writeable/` (crate)
- `fixed-decimal/` (crate)
- `unicode-set/` (crate)
- `resources/`
- `json-data/` (root of JSON data directory, latest version)
Thoughts?
@Manishearth | 1.0 | Multi-layered directory structure - In #18 we decided to make a top-level `/components` directory. I like this, but I'm also thinking that it might be good to have an extra layer of abstraction. We've come across several types of components so far:
1. Core i18n components: Locale, PluralRules, NumberFormat, etc.
2. Non-i18n utilities: FixedDecimal, Writeable, etc.
3. Data-related code: DataProvider, CldrJsonDataProvider, etc.
4. Data dump (JSON resources)
How should we structure these in the repository?
FYI, ICU4C is split into four categories:
1. common (UnicodeString, UnicodeSet, etc.)
2. i18n (NumberFormat, PluralRules, etc.)
3. io (u_sprintf, etc.)
4. layoutex (deprecated layout engine)
Here's one possible layout, with an emphasis on structure:
- `/`
- `components/`
- `locales/`
- `language-info/` (crate)
- `language-matcher/` (crate)
- `locale/` (crate)
- `numbers/`
- `fixed-decimal/` (crate)
- `number-format/` (crate)
- `unicode/`
- `unicode-set/` (crate)
- `unicode-props/` (crate)
- `udata/`
- `cldr-json-data-provider/` (crate)
- `data-provider/` (crate)
- `fs-data-provider/` (crate)
- `fs-data-exporter/` (crate)
- `strings/`
- `writeable/` (crate)
- `resources/`
- `json/` (root of JSON data directory, latest version)
Here's another possible layout, with an emphasis on flatness:
- `/`
- `components/`
- `language-info/` (crate)
- `language-matcher/` (crate)
- `locale/` (crate)
- `number-format/` (crate)
- `unicode-props/` (crate)
- `udata/`
- `cldr-json-data-provider/` (crate)
- `data-provider/` (crate)
- `fs-data-provider/` (crate)
- `fs-data-exporter/` (crate)
- `json-data/` (root of JSON data directory, latest version)
- `utils/`
- `writeable/` (crate)
- `fixed-decimal/` (crate)
- `unicode-set/` (crate)
Another, simpler layout:
- `/`
- `components/`
- `language-info/` (crate)
- `language-matcher/` (crate)
- `locale/` (crate)
- `number-format/` (crate)
- `unicode-props/` (crate)
- `cldr-json-data-provider/` (crate)
- `data-provider/` (crate)
- `fs-data-provider/` (crate)
- `fs-data-exporter/` (crate)
- `writeable/` (crate)
- `fixed-decimal/` (crate)
- `unicode-set/` (crate)
- `resources/`
- `json-data/` (root of JSON data directory, latest version)
Option 4:
- `/`
- `components/`
- `language-info/` (crate)
- `language-matcher/` (crate)
- `locale/` (crate)
- `number-format/` (crate)
- `unicode-props/` (crate)
- `cldr-json-data-provider/` (crate)
- `data-provider/` (crate)
- `fs-data-provider/` (crate)
- `fs-data-exporter/` (crate)
- `utils/`
- `writeable/` (crate)
- `fixed-decimal/` (crate)
- `unicode-set/` (crate)
- `resources/`
- `json-data/` (root of JSON data directory, latest version)
Option 5:
- `/`
- `components/`
- `language-info/` (crate)
- `language-matcher/` (crate)
- `locale/` (crate)
- `number-format/` (crate)
- `unicode-props/` (crate)
- `udata/`
- `cldr-json-data-provider/` (crate)
- `data-provider/` (crate)
- `fs-data-provider/` (crate)
- `fs-data-exporter/` (crate)
- `utils/`
- `writeable/` (crate)
- `fixed-decimal/` (crate)
- `unicode-set/` (crate)
- `resources/`
- `json-data/` (root of JSON data directory, latest version)
Thoughts?
@Manishearth | process | multi layered directory structure in we decided to make a top level components directory i like this but i m also thinking that it might be good to have an extra layer of abstraction we ve come across several types of components so far core components locale pluralrules numberformat etc non utilities fixeddecimal writeable etc data related code dataprovider cldrjsondataprovider etc data dump json resources how should we structure these in the repository fyi is split into four categories common unicodestring unicodeset etc numberformat pluralrules etc io u sprintf etc layoutex deprecated layout engine here s one possible layout with an emphasis on structure components locales language info crate language matcher crate locale crate numbers fixed decimal crate number format crate unicode unicode set crate unicode props crate udata cldr json data provider crate data provider crate fs data provider crate fs data exporter crate strings writeable crate resources json root of json data directory latest version here s another possible layout with an emphasis on flatness components language info crate language matcher crate locale crate number format crate unicode props crate udata cldr json data provider crate data provider crate fs data provider crate fs data exporter crate json data root of json data directory latest version utils writeable crate fixed decimal crate unicode set crate another simpler layout components language info crate language matcher crate locale crate number format crate unicode props crate cldr json data provider crate data provider crate fs data provider crate fs data exporter crate writeable crate fixed decimal crate unicode set crate resources json data root of json data directory latest version option components language info crate language matcher crate locale crate number format crate unicode props crate cldr json data provider crate data provider crate fs data provider crate fs data exporter crate utils writeable crate fixed decimal crate unicode set crate resources json data root of json data directory latest version option components language info crate language matcher crate locale crate number format crate unicode props crate udata cldr json data provider crate data provider crate fs data provider crate fs data exporter crate utils writeable crate fixed decimal crate unicode set crate resources json data root of json data directory latest version thoughts manishearth | 1
128,562 | 27,285,521,469 | IssuesEvent | 2023-02-23 13:14:54 | jakubiec/event-storming-to-code | https://api.github.com/repos/jakubiec/event-storming-to-code | opened | Make invariant scenarios more realistic | presentation code | The AlreadyRegistered scenario is a naive simplification, and it was correctly spotted that, in the real world, uniqueness checks are done a layer above.
Think of a different way to showcase invariants | 1.0 | Make invariant scenarios more realistic - The AlreadyRegistered scenario is a naive simplification, and it was correctly spotted that, in the real world, uniqueness checks are done a layer above.
Think of a different way to showcase invariants | non_process | make invariant scenarios more realistic the alreadyregistered scenario is a naive simplification and was correctly spotted that in the real word uniqueness checks are done a layer above think of a different way to showcase invariants | 0 |
7,835 | 11,011,712,511 | IssuesEvent | 2019-12-04 16:47:31 | 90301/TextReplace | https://api.github.com/repos/90301/TextReplace | closed | Multi-File, Multi Line Block Replace | Log Processor Pre-Processor | Need a list of files, then we can have something like a Log Processor program that can be run on all those files. Exporting a list of files to a file and loading them in may also be helpful
- [x] Get Files In Folder Request / Output them to text for log parser
- [x] Use Files as input File List (Cyberia Pre Processor)
- [x] Block-Based Find And Replace (?) (Cyberia Pre Processor) | 2.0 | Multi-File, Multi Line Block Replace - Need a list of files, then we can have something like a Log Processor program that can be run on all those files. Exporting a list of files to a file and loading them in may also be helpful
- [x] Get Files In Folder Request / Output them to text for log parser
- [x] Use Files as input File List (Cyberia Pre Processor)
- [x] Block-Based Find And Replace (?) (Cyberia Pre Processor) | process | multi file multi line block replace need a list of files then we can have something like a log processor program that can be run on all those files exporting a list of files to a file and loading them in may also be helpful get files in folder request output them to text for log parser use files as input file list cyberia pre processor block based find and replace cyberia pre processor | 1 |
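The checklist above describes a multi-file workflow: export a file list, load it back in, then run a block-based find-and-replace over every listed file. As a rough illustration of that flow (this is a hypothetical sketch, not the TextReplace project's actual code; the file-list format and replace semantics are assumptions):

```python
from pathlib import Path

def load_file_list(list_path):
    # One path per line; blank lines and '#' comment lines are skipped.
    paths = []
    for raw in Path(list_path).read_text().splitlines():
        line = raw.strip()
        if line and not line.startswith("#"):
            paths.append(line)
    return paths

def block_replace(paths, old_block, new_block):
    # Replace every occurrence of a multi-line block in each listed file;
    # return the paths that were actually modified.
    changed = []
    for p in paths:
        path = Path(p)
        text = path.read_text()
        if old_block in text:
            path.write_text(text.replace(old_block, new_block))
            changed.append(p)
    return changed
```

A real pre-processor would add error handling for missing files and encodings; the point here is only that a saved file list makes the block replacement repeatable across runs.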
83,225 | 15,699,634,715 | IssuesEvent | 2021-03-26 08:46:41 | LalithK90/wisdom-institute | https://api.github.com/repos/LalithK90/wisdom-institute | opened | CVE-2020-13934 (High) detected in tomcat-embed-core-9.0.30.jar | security vulnerability | ## CVE-2020-13934 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.30.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: wisdom-institute/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.30/ad32909314fe2ba02cec036434c0addd19bcc580/tomcat-embed-core-9.0.30.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.4.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.4.RELEASE.jar
- :x: **tomcat-embed-core-9.0.30.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/LalithK90/wisdom-institute/commits/2041e73b5e9e9fc5c2c540bdd4ad42a294d926a3">2041e73b5e9e9fc5c2c540bdd4ad42a294d926a3</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An h2c direct connection to Apache Tomcat 10.0.0-M1 to 10.0.0-M6, 9.0.0.M5 to 9.0.36 and 8.5.1 to 8.5.56 did not release the HTTP/1.1 processor after the upgrade to HTTP/2. If a sufficient number of such requests were made, an OutOfMemoryException could occur leading to a denial of service.
<p>Publish Date: 2020-07-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13934>CVE-2020-13934</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r61f411cf82488d6ec213063fc15feeeb88e31b0ca9c29652ee4f962e%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/r61f411cf82488d6ec213063fc15feeeb88e31b0ca9c29652ee4f962e%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2020-07-14</p>
<p>Fix Resolution: org.apache.tomcat:tomcat-coyote:8.5.57,9.0.37,10.0.0-M7;org.apache.tomcat.embed:tomcat-embed-core:8.5.57,9.0.37,10.0.0-M7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-13934 (High) detected in tomcat-embed-core-9.0.30.jar - ## CVE-2020-13934 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.30.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: wisdom-institute/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.30/ad32909314fe2ba02cec036434c0addd19bcc580/tomcat-embed-core-9.0.30.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.4.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.4.RELEASE.jar
- :x: **tomcat-embed-core-9.0.30.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/LalithK90/wisdom-institute/commits/2041e73b5e9e9fc5c2c540bdd4ad42a294d926a3">2041e73b5e9e9fc5c2c540bdd4ad42a294d926a3</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An h2c direct connection to Apache Tomcat 10.0.0-M1 to 10.0.0-M6, 9.0.0.M5 to 9.0.36 and 8.5.1 to 8.5.56 did not release the HTTP/1.1 processor after the upgrade to HTTP/2. If a sufficient number of such requests were made, an OutOfMemoryException could occur leading to a denial of service.
<p>Publish Date: 2020-07-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13934>CVE-2020-13934</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r61f411cf82488d6ec213063fc15feeeb88e31b0ca9c29652ee4f962e%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/r61f411cf82488d6ec213063fc15feeeb88e31b0ca9c29652ee4f962e%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2020-07-14</p>
<p>Fix Resolution: org.apache.tomcat:tomcat-coyote:8.5.57,9.0.37,10.0.0-M7;org.apache.tomcat.embed:tomcat-embed-core:8.5.57,9.0.37,10.0.0-M7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in tomcat embed core jar cve high severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file wisdom institute build gradle path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch master vulnerability details an direct connection to apache tomcat to to and to did not release the http processor after the upgrade to http if a sufficient number of such requests were made an outofmemoryexception could occur leading to a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat tomcat coyote org apache tomcat embed tomcat embed core step up your open source security game with whitesource | 0 |
12,199 | 14,742,481,914 | IssuesEvent | 2021-01-07 12:22:15 | kdjstudios/SABillingGitlab | https://api.github.com/repos/kdjstudios/SABillingGitlab | opened | FW: Billing Cycle Notification for Toronto, Canada | anc-process anp-2 ant-enhancement ant-parent/primary pl-foran | In GitLab by @kdjstudios on Apr 25, 2019, 08:48
**Submitted by:** "Richard Soltoff" <richard.soltoff@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-04-25-31659
**Server:** Internal
**Client/Site:** All
**Account:** NA
**Issue:**
Can you explain to me what this email is telling me?
I do not think that they have finalized their billing, which is when I thought we would get an email? | 1.0 | FW: Billing Cycle Notification for Toronto, Canada - In GitLab by @kdjstudios on Apr 25, 2019, 08:48
**Submitted by:** "Richard Soltoff" <richard.soltoff@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-04-25-31659
**Server:** Internal
**Client/Site:** All
**Account:** NA
**Issue:**
Can you explain to me what this email is telling me?
I do not think that they have finalized their billing, which is when I thought we would get an email? | process | fw billing cycle notification for toronto canada in gitlab by kdjstudios on apr submitted by richard soltoff helpdesk server internal client site all account na issue can you explain to me what this email is telling me i do not that they have finalized their billing which is when i thought we would get an email | 1
6,775 | 9,914,164,406 | IssuesEvent | 2019-06-28 13:45:47 | material-components/material-components-ios | https://api.github.com/repos/material-components/material-components-ios | closed | [Tabs] Internal issue: b/135608374 | [Tabs] type:Process | This was filed as an internal issue. If you are a Googler, please visit [b/135608374](http://b/135608374) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/135608374](http://b/135608374)
- Blocked by: https://github.com/material-components/material-components-ios/issues/7645
- Blocked by: https://github.com/material-components/material-components-ios/issues/7646
- Blocked by: https://github.com/material-components/material-components-ios/issues/7643 | 1.0 | [Tabs] Internal issue: b/135608374 - This was filed as an internal issue. If you are a Googler, please visit [b/135608374](http://b/135608374) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/135608374](http://b/135608374)
- Blocked by: https://github.com/material-components/material-components-ios/issues/7645
- Blocked by: https://github.com/material-components/material-components-ios/issues/7646
- Blocked by: https://github.com/material-components/material-components-ios/issues/7643 | process | internal issue b this was filed as an internal issue if you are a googler please visit for more details internal data associated internal bug blocked by blocked by blocked by | 1 |
651,088 | 21,465,036,909 | IssuesEvent | 2022-04-26 02:12:12 | weaveworks/eksctl | https://api.github.com/repos/weaveworks/eksctl | closed | [Bug] remoteNodegroups contains failed stacks that are not nodegroups | kind/bug priority/backlog stale | ### What happened?
>$ eksctl nodegroup create failingnodegroup
which fails due to an AWS internal error, then run it again
>$ eksctl nodegroup create ...
and eksctl says that the nodegroup already exists
>1 existing nodegroup(s) (failingnodegroup)
but only the failed (rolled-back) stack exists and the nodegroup does not exist in EKS.
This is probably due to filtering somewhere in https://github.com/weaveworks/eksctl/blob/7a59b2869bd57b6ddf1e7c00c899fd1f5b1cda8c/pkg/ctl/cmdutils/filter/nodegroup_filter.go#L123
| 1.0 | [Bug] remoteNodegroups contains failed stacks that are not nodegroups - ### What happened?
>$ eksctl nodegroup create failingnodegroup
which fails due to an AWS internal error, then run it again
>$ eksctl nodegroup create ...
and eksctl says that the nodegroup already exists
>1 existing nodegroup(s) (failingnodegroup)
but only the failed (rolled-back) stack exists and the nodegroup does not exist in EKS.
This is probably due to filtering somewhere in https://github.com/weaveworks/eksctl/blob/7a59b2869bd57b6ddf1e7c00c899fd1f5b1cda8c/pkg/ctl/cmdutils/filter/nodegroup_filter.go#L123
| non_process | remotenodegroups contains failed stacks that are not nodegroups what happened eksctl nodegroup create failingnodegroup which fails due to aws internal then run it again eksctl nodegroup create and eksctl says that nodegroup is existing existing nodegroup s failingnodegroup but only the failed rollbacked stack exists and the nodegroup does not exist in eks prob due to filtering somewhere in | 0 |
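The eksctl bug above comes down to treating every CloudFormation stack as a live nodegroup, including stacks whose creation failed and rolled back. eksctl itself is written in Go; purely as a language-neutral sketch of the fix idea (shown here in Python, with a hypothetical helper and a status set borrowed from CloudFormation's stack status codes):

```python
# Stack statuses that mean the stack never produced a usable nodegroup.
# These names follow CloudFormation's stack status codes.
FAILED_STATUSES = {
    "CREATE_FAILED",
    "ROLLBACK_IN_PROGRESS",
    "ROLLBACK_COMPLETE",
    "ROLLBACK_FAILED",
    "DELETE_IN_PROGRESS",
    "DELETE_COMPLETE",
}

def live_nodegroups(stacks):
    # `stacks` is a list of {"name": ..., "status": ...} dicts.
    # Keep only stacks whose status indicates a healthy nodegroup,
    # so a rolled-back stack is not reported as "existing".
    return [s["name"] for s in stacks if s["status"] not in FAILED_STATUSES]
```

With a filter like this, the rolled-back `failingnodegroup` stack from the report would be excluded from the "existing nodegroup(s)" list.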
11,587 | 14,445,787,726 | IssuesEvent | 2020-12-07 23:44:02 | googleapis/release-please | https://api.github.com/repos/googleapis/release-please | closed | Improve test coverage for Go releaser | type: process | We should increase the test coverage for our mono-repo go logic, we especially need some additional tests for the submodule releaser class.
See: https://github.com/googleapis/release-please/pull/617 | 1.0 | Improve test coverage for Go releaser - We should increase the test coverage for our mono-repo go logic, we especially need some additional tests for the submodule releaser class.
See: https://github.com/googleapis/release-please/pull/617 | process | improve test coverage for go releaser we should increase the test coverage for our mono repo go logic we especially need some additional tests for the submodule releaser class see | 1 |
87,853 | 15,790,335,224 | IssuesEvent | 2021-04-02 01:10:27 | YauheniPo/Elements_Test_Framework | https://api.github.com/repos/YauheniPo/Elements_Test_Framework | closed | CVE-2019-16943 (High) detected in jackson-databind-2.9.8.jar - autoclosed | security vulnerability | ## CVE-2019-16943 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: Elements_Test_Framework/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- allure-testng-2.10.0.jar (Root Library)
- allure-java-commons-2.10.0.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/YauheniPo/Elements_Test_Framework/commit/b6525683a2173ae823218926b6f8d80d10c5d61f">b6525683a2173ae823218926b6f8d80d10c5d61f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the p6spy (3.8.6) jar in the classpath, and an attacker can find an RMI service endpoint to access, it is possible to make the service execute a malicious payload. This issue exists because of com.p6spy.engine.spy.P6DataSource mishandling.
<p>Publish Date: 2019-10-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16943>CVE-2019-16943</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16943">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16943</a></p>
<p>Release Date: 2019-10-01</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.3,2.7.9.7,2.8.11.5,2.9.10.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-16943 (High) detected in jackson-databind-2.9.8.jar - autoclosed - ## CVE-2019-16943 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: Elements_Test_Framework/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- allure-testng-2.10.0.jar (Root Library)
- allure-java-commons-2.10.0.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/YauheniPo/Elements_Test_Framework/commit/b6525683a2173ae823218926b6f8d80d10c5d61f">b6525683a2173ae823218926b6f8d80d10c5d61f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the p6spy (3.8.6) jar in the classpath, and an attacker can find an RMI service endpoint to access, it is possible to make the service execute a malicious payload. This issue exists because of com.p6spy.engine.spy.P6DataSource mishandling.
<p>Publish Date: 2019-10-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16943>CVE-2019-16943</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16943">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16943</a></p>
<p>Release Date: 2019-10-01</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.3,2.7.9.7,2.8.11.5,2.9.10.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file elements test framework pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy allure testng jar root library allure java commons jar x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has the jar in the classpath and an attacker can find an rmi service endpoint to access it is possible to make the service execute a malicious payload this issue exists because of com engine spy mishandling publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource | 0 |
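Both WhiteSource reports above end with a "Fix Resolution" list of versions. Whether a pinned dependency falls below the fixed release on its line can be checked mechanically; the helper below is a hypothetical sketch (not WhiteSource tooling) that compares purely numeric dotted versions, ignoring the qualifier rules a real Maven version comparison would need:

```python
def version_tuple(v):
    # "2.9.8" -> (2, 9, 8); assumes purely numeric segments.
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(pinned, fixed_in):
    # A version is vulnerable if it sorts below the fixed release on
    # its own major.minor line; versions on an unlisted line are
    # compared against the lowest fixed release as a fallback.
    pv = version_tuple(pinned)
    for fix in fixed_in:
        fv = version_tuple(fix)
        if pv[:2] == fv[:2]:  # same release line, e.g. 2.9.x
            return pv < fv
    return pv < min(version_tuple(f) for f in fixed_in)
```

For the jackson-databind report, `is_vulnerable("2.9.8", ["2.6.7.3", "2.7.9.7", "2.8.11.5", "2.9.10.1"])` comes out true, matching the finding.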
4,469 | 3,869,946,973 | IssuesEvent | 2016-04-10 22:01:47 | lionheart/openradar-mirror | https://api.github.com/repos/lionheart/openradar-mirror | opened | 23141927: Xcode-beta (7B85): When renaming, reformat automatically | classification:ui/usability reproducible:always status:open | #### Description
Xcode-beta (7B85): When renaming, reformat automatically
Summary:
When renaming via the refactoring feature or via “Edit All In Scope”, reformat the affected code automatically
Steps to Reproduce:
1. Change a method signature via Edit > Refactor > Rename…
2. Check the preview
3. If there has been a length change in part of the method signature, observe that some of the colon-aligned multi-line uses of same are now out of alignment
Expected Results:
Optionally, by default, the affected lines are re-indented.
Actual Results:
Some of the colon-aligned multi-line uses of the method signature are now out of alignment.
Regression:
Happens only on changes in length of the method signature.
Notes:
none
-
Product Version: Xcode-beta (7B85)
Created: 2015-10-16T10:00:55.963760
Originated: 2015-10-16T12:00:00
Open Radar Link: http://www.openradar.me/23141927 | True | 23141927: Xcode-beta (7B85): When renaming, reformat automatically - #### Description
Xcode-beta (7B85): When renaming, reformat automatically
Summary:
When renaming via the refactoring feature or via “Edit All In Scope”, reformat the affected code automatically
Steps to Reproduce:
1. Change a method signature via Edit > Refactor > Rename…
2. Check the preview
3. If there has been a length change in part of the method signature, observe that some of the colon-aligned multi-line uses of same are now out of alignment
Expected Results:
Optionally, by default, the affected lines are re-indented.
Actual Results:
Some of the colon-aligned multi-line uses of the method signature are now out of alignment.
Regression:
Happens only on changes in length of the method signature.
Notes:
none
-
Product Version: Xcode-beta (7B85)
Created: 2015-10-16T10:00:55.963760
Originated: 2015-10-16T12:00:00
Open Radar Link: http://www.openradar.me/23141927 | non_process | xcode beta when renaming reformat automatically description xcode beta when renaming reformat automatically summary when renaming via the refactoring feature or via “edit all in scope” reformat the affected code automatically steps to reproduce change a method signature via edit refactor rename… check the preview if there has been a length change in part of the method signature observer that some of the colon aligned multi line uses of same are now out of alignment expected results optionally per default the affected lines are re indented actual results some of the colon aligned multi line uses of the method signature are now out of alignment regression happens only on changes in length of the method signature notes none product version xcode beta created originated open radar link | 0 |
6,910 | 10,060,299,331 | IssuesEvent | 2019-07-22 18:29:00 | googleapis/google-cloud-python | https://api.github.com/repos/googleapis/google-cloud-python | closed | BigQuery: 'test_bigquery_magic_w_maximum_bytes_billed_invalid' test uses real client. | api: bigquery testing type: process | The *unit* test fails if there are no valid credentials in the environment:
```python
______________ test_bigquery_magic_w_maximum_bytes_billed_invalid ______________
@pytest.mark.usefixtures("ipython_interactive")
def test_bigquery_magic_w_maximum_bytes_billed_invalid():
ip = IPython.get_ipython()
ip.extension_manager.load_extension("google.cloud.bigquery")
magics.context._project = None
sql = "SELECT 17 AS num"
with pytest.raises(ValueError):
> ip.run_cell_magic("bigquery", "--maximum_bytes_billed=abc", sql)
tests/unit/test_magics.py:565:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/unit-3-6/lib/python3.6/site-packages/IPython/core/interactiveshell.py:2358: in run_cell_magic
result = fn(*args, **kwargs)
google/cloud/bigquery/magics.py:400: in _cell_magic
default_query_job_config=context.default_query_job_config,
google/cloud/bigquery/client.py:167: in __init__
project=project, credentials=credentials, _http=_http
../core/google/cloud/client.py:226: in __init__
_ClientProjectMixin.__init__(self, project=project)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.cloud.bigquery.client.Client object at 0x7f4bb6739d68>
project = None
def __init__(self, project=None):
project = self._determine_default(project)
if project is None:
raise EnvironmentError(
> "Project was not passed and could not be "
"determined from the environment."
)
E OSError: Project was not passed and could not be determined from the environment.
```
We want unit tests to run in complete isolation from any possibility of triggering real API requests. This test actually expects to run a real query, which means it is a system test. | 1.0 | BigQuery: 'test_bigquery_magic_w_maximum_bytes_billed_invalid' test uses real client. - The *unit* test fails if there are no valid credentials in the environment:
```python
______________ test_bigquery_magic_w_maximum_bytes_billed_invalid ______________
@pytest.mark.usefixtures("ipython_interactive")
def test_bigquery_magic_w_maximum_bytes_billed_invalid():
ip = IPython.get_ipython()
ip.extension_manager.load_extension("google.cloud.bigquery")
magics.context._project = None
sql = "SELECT 17 AS num"
with pytest.raises(ValueError):
> ip.run_cell_magic("bigquery", "--maximum_bytes_billed=abc", sql)
tests/unit/test_magics.py:565:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/unit-3-6/lib/python3.6/site-packages/IPython/core/interactiveshell.py:2358: in run_cell_magic
result = fn(*args, **kwargs)
google/cloud/bigquery/magics.py:400: in _cell_magic
default_query_job_config=context.default_query_job_config,
google/cloud/bigquery/client.py:167: in __init__
project=project, credentials=credentials, _http=_http
../core/google/cloud/client.py:226: in __init__
_ClientProjectMixin.__init__(self, project=project)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.cloud.bigquery.client.Client object at 0x7f4bb6739d68>
project = None
def __init__(self, project=None):
project = self._determine_default(project)
if project is None:
raise EnvironmentError(
> "Project was not passed and could not be "
"determined from the environment."
)
E OSError: Project was not passed and could not be determined from the environment.
```
We want unit tests to run in complete isolation from any possibility of triggering real API requests. This test actually expects to run a real query, which means it is a system test. | process | bigquery test bigquery magic w maximum bytes billed invalid test uses real client the unit test fails if there are no valid credentials in the environment python test bigquery magic w maximum bytes billed invalid pytest mark usefixtures ipython interactive def test bigquery magic w maximum bytes billed invalid ip ipython get ipython ip extension manager load extension google cloud bigquery magics context project none sql select as num with pytest raises valueerror ip run cell magic bigquery maximum bytes billed abc sql tests unit test magics py nox unit lib site packages ipython core interactiveshell py in run cell magic result fn args kwargs google cloud bigquery magics py in cell magic default query job config context default query job config google cloud bigquery client py in init project project credentials credentials http http core google cloud client py in init clientprojectmixin init self project project self project none def init self project none project self determine default project if project is none raise environmenterror project was not passed and could not be determined from the environment e oserror project was not passed and could not be determined from the environment we want unit tests to run in complete isolation from any possibility of triggering real api requests this test actually expects to run a real query which means it is a system test | 1 |
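The isolation the report asks for is usually achieved by putting client construction behind a seam that unit tests can patch, so no real `Client` (and no credential lookup) ever happens. Setting the google-cloud-bigquery specifics aside, the pattern looks like this; the function names here are invented for illustration:

```python
from unittest import mock

def run_query(sql, client_factory):
    # Production code asks a factory for its client instead of
    # constructing one directly, so tests can substitute a fake.
    client = client_factory()
    return client.query(sql)

def test_run_query_uses_no_real_client():
    fake_client = mock.Mock()
    fake_client.query.return_value = "fake-result"
    result = run_query("SELECT 17 AS num", lambda: fake_client)
    assert result == "fake-result"
    fake_client.query.assert_called_once_with("SELECT 17 AS num")
```

With that seam in place, the `--maximum_bytes_billed=abc` validation can be exercised without any project or credentials in the environment.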
137,213 | 5,299,968,981 | IssuesEvent | 2017-02-10 02:20:42 | OperationCode/operationcode | https://api.github.com/repos/OperationCode/operationcode | closed | Donation by text | Priority: Medium Type: Feature | When `opcode` is texted to `x number` (provisioned by Twilio or similar) user makes a donation to scholarship fund.
| 1.0 | Donation by text - When `opcode` is texted to `x number` (provisioned by Twilio or similar) user makes a donation to scholarship fund.
| non_process | donation by text when opcode is texted to x number provisioned by twilio or similar user makes a donation to scholarship fund | 0 |
71,954 | 13,767,221,818 | IssuesEvent | 2020-10-07 15:28:23 | finos/alloy | https://api.github.com/repos/finos/alloy | closed | FINOS Code Scans, Checks, Validation - SDLC Code | Alloy SDLC Code_Readiness Go Live Readiness Checklist | This task can't start until https://github.com/finos/alloy/issues/235 is completed.
## SDLC
- [x] FINOS Security Vulnerability Check
- [x] FINOS Legal/License Scans
- [x] Apply FINOS Project Blueprint | 1.0 | FINOS Code Scans, Checks, Validation - SDLC Code - This task can't start until https://github.com/finos/alloy/issues/235 is completed.
## SDLC
- [x] FINOS Security Vulnerability Check
- [x] FINOS Legal/License Scans
- [x] Apply FINOS Project Blueprint | non_process | finos code scans checks validation sdlc code this task can t start until is completed sdlc finos security vulnerability check finos legal license scans apply finos project blueprint | 0 |
7,278 | 10,431,736,378 | IssuesEvent | 2019-09-17 09:43:16 | ESMValGroup/ESMValCore | https://api.github.com/repos/ESMValGroup/ESMValCore | opened | Give user access to OBS data in different frequencies | enhancement preprocessor | Problem description:
Observational datasets at high frequencies (daily or hourly) are computationally expensive to work with. Not all diagnostics make use of this high time resolution, and the necessary preprocessing in these diagnostics is time- (and energy-) consuming and often leads to memory issues (e.g. #51). Given that the resampled datasets should be an order of magnitude smaller than their high-frequency origins, I think it would be worthwhile to include these in the dataset pool. I think we should by default also provide hourly data as daily means and monthly means, and daily data should also be provided as monthly means. Some considerations:
a) Should we include this in the CMORization scripts?
b) How do we distinguish between the different frequencies? It should be reflected in the file naming convention and be possible to specify which frequency to pick from the recipe.
| 1.0 | Give user access to OBS data in different frequencies - Problem description:
Observational datasets at high frequencies (daily or hourly) are computationally expensive to work with. Not all diagnostics make use of this high time resolution, and the necessary preprocessing in these diagnostics is time- (and energy-) consuming and often leads to memory issues (e.g. #51). Given that the resampled datasets should be an order of magnitude smaller than their high-frequency origins, I think it would be worthwhile to include these in the dataset pool. I think we should by default also provide hourly data as daily means and monthly means, and daily data should also be provided as monthly means. Some considerations:
a) Should we include this in the CMORization scripts?
b) How do we distinguish between the different frequencies? It should be reflected in the file naming convention and be possible to specify which frequency to pick from the recipe.
| process | give user access to obs data in different frequencies problem description observational datasets at high frequencies daily or hourly are computationally expensive to work with not all diagnostics make use of this high time resolution and the necessary preprocessing in these diagnostics is time and energy consuming and often leads to memory issues e g given that the resampled datasets should be an order of magnitude smaller than their high frequency origins i think it would be worth to include these in the dataset pool i think we should by default provide hourly data also as daily means and monthly means and daily data should also be provided as monthly means some considerations a should we include this in the cmorization scripts b how do we distinguish between the different frequencies it should be reflected in the file naming convention and be possible to specify which frequency to pick from the recipe | 1 |
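The resampling step the row above proposes (collapsing hourly observations into daily means) is simple in itself; a minimal stdlib-only sketch, with a made-up `daily_means` helper and synthetic data purely for illustration:

```python
from datetime import datetime, timedelta
from itertools import groupby
from statistics import mean


def daily_means(samples):
    """Collapse (timestamp, value) pairs at any sub-daily frequency
    into one mean value per calendar day."""
    # groupby needs runs of equal keys, so sort by timestamp first.
    samples = sorted(samples, key=lambda s: s[0])
    return {
        day: mean(v for _, v in group)
        for day, group in groupby(samples, key=lambda s: s[0].date())
    }


# 48 hourly samples spanning two days:
t0 = datetime(2019, 1, 1)
hourly = [(t0 + timedelta(hours=h), float(h)) for h in range(48)]
means = daily_means(hourly)  # one entry per day, each averaging 24 values
```

The hard parts raised in (a) and (b) are not the arithmetic but where it runs (CMORization scripts) and how the output frequency is encoded in file names and recipes.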
13,841 | 16,602,367,561 | IssuesEvent | 2021-06-01 21:28:15 | CodeForPhilly/paws-data-pipeline | https://api.github.com/repos/CodeForPhilly/paws-data-pipeline | closed | Handle even longer Execute runs, give better UX | API Async processes UX | When we did #227 , the Execute Match run time was < 60 minutes. As we've added more features, it's now taking just under three hours (on a pretty fast machine). This hits two timeouts:
- 30 minute login refresh
- 60 minute nginx request timer
If the user were to keep the tab in the foreground and hit the refresh button every 30 min, after 60 min nginx will send a 502 (which the JS code does not catch) and the spinner will continue forever as the JS will never see a 200 for the execute request.
If the user then reloads the page (or logs back in after being timed out), she's presented with an Admin page showing uploaded files but no indication of the running job. The normal reaction will be to hit the **Run Analysis** button again, launching a second execute process. This will generally cause an error, killing one or both processes and causing a 500 or 502 to be returned to the client.
**Proposal** ________________________________________
1 - Modify **/api/get_execution_status** so as not to require a job id. Ensure there's no more than one active execute running.
2 - When server starts up, check for remnants of an incomplete execution (_i.e._, non-completed job record in DB). Assuming we can know it's dead[1], delete the in-progress record to allow a new run to be started.
3 - Modify client to check **get_execution_status** every X seconds. If there's a run executing, disable the **Run Analysis** button and show the execution progress. If no run in progress, enable the button. Ensure client handles non-200 responses.
4 - Modify execute code to update status every 100 (?) records so we get more frequent updates. Have client check for progress.[2,3]
5 - Investigate check-pointing the match execution. Could we dump to DB every 1000 records and then restart from there later? (#330)
<hr>
[1] As we're having uwsgi pre-fork two processes, we _shouldn't_ have new server processes except at startup.
[2] What to do if it appears there's no progress?
[3] Later blocks take much longer than earlier blocks.
| 1.0 | Handle even longer Execute runs, give better UX - When we did #227 , the Execute Match run time was < 60 minutes. As we've added more features, it's now taking just under three hours (on a pretty fast machine). This hits two timeouts:
- 30 minute login refresh
- 60 minute nginx request timer
If the user were to keep the tab in the foreground and hit the refresh button every 30 min, after 60 min nginx will send a 502 (which the JS code does not catch) and the spinner will continue forever as the JS will never see a 200 for the execute request.
If the user then reloads the page (or logs back in after being timed out), she's presented with an Admin page showing uploaded files but no indication of the running job. The normal reaction will be to hit the **Run Analysis** button again, launching a second execute process. This will generally cause an error, killing one or both processes and causing a 500 or 502 to be returned to the client.
**Proposal** ________________________________________
1 - Modify **/api/get_execution_status** so as not to require a job id. Ensure there's no more than one active execute running.
2 - When server starts up, check for remnants of an incomplete execution (_i.e._, non-completed job record in DB). Assuming we can know it's dead[1], delete the in-progress record to allow a new run to be started.
3 - Modify client to check **get_execution_status** every X seconds. If there's a run executing, disable the **Run Analysis** button and show the execution progress. If no run in progress, enable the button. Ensure client handles non-200 responses.
4 - Modify execute code to update status every 100 (?) records so we get more frequent updates. Have client check for progress.[2,3]
5 - Investigate check-pointing the match execution. Could we dump to DB every 1000 records and then restart from there later? (#330)
<hr>
[1] As we're having uwsgi pre-fork two processes, we _shouldn't_ have new server processes except at startup.
[2] What to do if it appears there's no progress?
[3] Later blocks take much longer than earlier blocks.
| process | handle even longer execute runs give better ux when we did the execute match run time was minutes as we ve added more features it s now taking just under three hours on a pretty fast machine this hits two timeouts minute login refresh minute nginx request timer if the user were to keep the tab in the foreground and hit the refresh button every min after min nginx will send a which the js code does not catch and the spinner will continue forever as the js will never see a for the execute request if the user then reloads the page or logs back in after being timed out she s presented with an admin page showing uploaded files but no indication of the running job the normal reaction will be to hit the run analysis button again launching a second execute process this will generally cause an error killing one or both processes causing a or to be returned to the client proposal modify api get execution status so as not to require a job id ensure there s no more than one active execute running when server starts up check for remnants of an incomplete execution i e non completed job record in db assuming we can know it s dead delete the in progress record to allow a new run to be started modify client to check get execution status every x seconds if there s a run executing disable the run analysis button and show the execution progress if no run in progress enable the button ensure client handles non responses modify execute code to update status every records so we get more frequent updates have client check for progress investigate check pointing the match execution could we dump to db every records and then restart from there later as we re having uwsgi pre fork two processes we shouldn t have new server processes except at startup what to do if it appears there s no progress later blocks take much longer than earlier blocks | 1 |
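Proposal items 1-3 in the row above amount to a single-slot job registry that can be polled without a job id, refuses a second concurrent run, and exposes progress for the client to display. A minimal sketch follows; the class and method names are illustrative, not the actual PAWS API.

```python
import threading


class JobRegistry:
    """At most one active execute run, queryable without a job id."""

    def __init__(self):
        self._lock = threading.Lock()
        self._active = None  # dict with progress info, or None

    def start(self):
        with self._lock:
            if self._active is not None:
                # Item 1: never allow a second concurrent execute.
                raise RuntimeError("an execute run is already in progress")
            self._active = {"records_done": 0}
            return self._active

    def update(self, records_done):
        # Item 4: the worker calls this periodically so clients see progress.
        with self._lock:
            if self._active is not None:
                self._active["records_done"] = records_done

    def status(self):
        # Item 3: no job id needed -- there is at most one run to report on.
        with self._lock:
            return dict(self._active) if self._active else None

    def finish(self):
        with self._lock:
            self._active = None


registry = JobRegistry()
registry.start()
registry.update(100)
print(registry.status())  # {'records_done': 100}
```

A polling client would call `status()` every few seconds, disable the **Run Analysis** button while it returns a dict, and re-enable it once it returns `None`; item 2 (clearing stale in-progress records at server startup) would reset this registry before accepting requests.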
16,672 | 21,776,241,278 | IssuesEvent | 2022-05-13 14:04:05 | camunda/zeebe | https://api.github.com/repos/camunda/zeebe | closed | Deprecate Cancel Job command | kind/toil scope/broker team/process-automation area/maintainability | **Description**
The cancel job command was used to cancel any existing jobs belonging to a scope that is being terminated. This was changed with #9219 at which point the command was no longer written by the engine. It now only writes the canceled event.
For backward compatibility, the engine must still support the cancel command and the cancel job processor. We should however deprecate the command for future removal.
| 1.0 | Deprecate Cancel Job command - **Description**
The cancel job command was used to cancel any existing jobs belonging to a scope that is being terminated. This was changed with #9219 at which point the command was no longer written by the engine. It now only writes the canceled event.
For backward compatibility, the engine must still support the cancel command and the cancel job processor. We should however deprecate the command for future removal.
| process | deprecate cancel job command description the cancel job command was used to cancel any existing jobs belonging to a scope that is being terminated this was changed with at which point the command was no longer written by the engine it now only writes the canceled event for backward compatibility the engine must still support the cancel command and the cancel job processor we should however deprecate the command for future removal | 1 |
131,655 | 27,998,682,478 | IssuesEvent | 2023-03-27 10:11:32 | JuliaLang/julia | https://api.github.com/repos/JuliaLang/julia | closed | Segmentation fault on Julia v1.9.0-rc1 | bug codegen | I'm using `ApproxFun` with https://github.com/jishnub/ApproxFunBase.jl/tree/pad, and I obtain an intermittent segfault with bounds-checking enabled while starting Julia
```
julia --project --check-bounds=yes
```
The code that I'm running is
```julia
julia> using ApproxFun, LinearAlgebra
[ Info: Precompiling ApproxFun [28f2ccd6-bb30-5033-b560-165f7b14dc2f]
julia> x = Fun(identity, -1..1);
julia> f = cos(x-0.1)*sqrt(1-x^2) + exp(x);
[62277] signal (11.128): Segmentation fault
in expression starting at REPL[3]:1
jl_gc_pool_alloc_inner at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gc.c:1332 [inlined]
jl_gc_pool_alloc_noinline at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gc.c:1385 [inlined]
jl_gc_alloc_ at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/julia_internal.h:459 [inlined]
jl_gc_alloc at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gc.c:3619
_new_array_ at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/array.c:134 [inlined]
_new_array at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/array.c:198 [inlined]
ijl_alloc_array_1d at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/array.c:436
Array at ./boot.jl:477 [inlined]
compute_basic_blocks at ./compiler/ssair/ir.jl:98
inflate_ir! at ./compiler/ssair/legacy.jl:22
inflate_ir! at ./compiler/ssair/legacy.jl:18
retrieve_ir_for_inlining at ./compiler/ssair/inlining.jl:987 [inlined]
#resolve_todo#463 at ./compiler/ssair/inlining.jl:890
resolve_todo at ./compiler/ssair/inlining.jl:840 [inlined]
#analyze_method!#464 at ./compiler/ssair/inlining.jl:982
analyze_method! at ./compiler/ssair/inlining.jl:943 [inlined]
#handle_match!#468 at ./compiler/ssair/inlining.jl:1460
handle_match! at ./compiler/ssair/inlining.jl:1450 [inlined]
#handle_any_const_result!#465 at ./compiler/ssair/inlining.jl:1322
handle_any_const_result! at ./compiler/ssair/inlining.jl:1304 [inlined]
compute_inlining_cases at ./compiler/ssair/inlining.jl:1397
handle_call! at ./compiler/ssair/inlining.jl:1443 [inlined]
assemble_inline_todo! at ./compiler/ssair/inlining.jl:1669
ssa_inlining_pass! at ./compiler/ssair/inlining.jl:79 [inlined]
run_passes at ./compiler/optimize.jl:539
run_passes at ./compiler/optimize.jl:554 [inlined]
optimize at ./compiler/optimize.jl:503 [inlined]
_typeinf at ./compiler/typeinfer.jl:271
typeinf at ./compiler/typeinfer.jl:215
typeinf_edge at ./compiler/typeinfer.jl:931
abstract_call_method at ./compiler/abstractinterpretation.jl:609
abstract_call_gf_by_type at ./compiler/abstractinterpretation.jl:152
abstract_call_known at ./compiler/abstractinterpretation.jl:1930
jfptr_abstract_call_known_19252.clone_1 at /home/jishnu/packages/julias/julia-1.9/lib/julia/sys.so (unknown line)
tojlinvoke21206.clone_1 at /home/jishnu/packages/julias/julia-1.9/lib/julia/sys.so (unknown line)
j_abstract_call_known_17183.clone_1 at /home/jishnu/packages/julias/julia-1.9/lib/julia/sys.so (unknown line)
abstract_call at ./compiler/abstractinterpretation.jl:2001
abstract_call at ./compiler/abstractinterpretation.jl:1980
abstract_eval_statement_expr at ./compiler/abstractinterpretation.jl:2164
abstract_eval_statement at ./compiler/abstractinterpretation.jl:2377
abstract_eval_basic_statement at ./compiler/abstractinterpretation.jl:2641
typeinf_local at ./compiler/abstractinterpretation.jl:2850
typeinf_nocycle at ./compiler/abstractinterpretation.jl:2938
_typeinf at ./compiler/typeinfer.jl:244
typeinf at ./compiler/typeinfer.jl:215
typeinf_ext at ./compiler/typeinfer.jl:1056
typeinf_ext_toplevel at ./compiler/typeinfer.jl:1089
typeinf_ext_toplevel at ./compiler/typeinfer.jl:1085
jfptr_typeinf_ext_toplevel_19458.clone_1 at /home/jishnu/packages/julias/julia-1.9/lib/julia/sys.so (unknown line)
_jl_invoke at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2731 [inlined]
ijl_apply_generic at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2913
jl_apply at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/julia.h:1878 [inlined]
jl_type_infer at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:317
jl_generate_fptr_impl at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/jitlayers.cpp:444
jl_compile_method_internal at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2321 [inlined]
jl_compile_method_internal at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2210
_jl_invoke at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2723 [inlined]
ijl_apply_generic at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2913
jl_apply at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/julia.h:1878 [inlined]
do_call at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/interpreter.c:126
eval_value at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/interpreter.c:226
eval_stmt_value at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/interpreter.c:177 [inlined]
eval_body at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/interpreter.c:624
jl_interpret_toplevel_thunk at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/interpreter.c:762
jl_toplevel_eval_flex at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/toplevel.c:912
jl_toplevel_eval_flex at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/toplevel.c:856
jl_toplevel_eval_flex at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/toplevel.c:856
jl_toplevel_eval_flex at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/toplevel.c:856
jl_toplevel_eval_flex at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/toplevel.c:856
ijl_toplevel_eval_in at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/toplevel.c:971
eval at ./boot.jl:370 [inlined]
eval_user_input at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/usr/share/julia/stdlib/v1.9/REPL/src/REPL.jl:153
repl_backend_loop at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/usr/share/julia/stdlib/v1.9/REPL/src/REPL.jl:249
#start_repl_backend#46 at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/usr/share/julia/stdlib/v1.9/REPL/src/REPL.jl:234
start_repl_backend at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/usr/share/julia/stdlib/v1.9/REPL/src/REPL.jl:231
_jl_invoke at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2731 [inlined]
ijl_apply_generic at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2913
#run_repl#59 at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/usr/share/julia/stdlib/v1.9/REPL/src/REPL.jl:377
run_repl at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/usr/share/julia/stdlib/v1.9/REPL/src/REPL.jl:363
jfptr_run_repl_60577.clone_1 at /home/jishnu/packages/julias/julia-1.9/lib/julia/sys.so (unknown line)
_jl_invoke at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2731 [inlined]
ijl_apply_generic at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2913
#1017 at ./client.jl:421
jfptr_YY.1017_27809.clone_1 at /home/jishnu/packages/julias/julia-1.9/lib/julia/sys.so (unknown line)
_jl_invoke at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2731 [inlined]
ijl_apply_generic at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2913
jl_apply at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/julia.h:1878 [inlined]
jl_f__call_latest at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/builtins.c:774
#invokelatest#2 at ./essentials.jl:816 [inlined]
invokelatest at ./essentials.jl:813 [inlined]
run_main_repl at ./client.jl:405
exec_options at ./client.jl:322
_start at ./client.jl:522
jfptr__start_33350.clone_1 at /home/jishnu/packages/julias/julia-1.9/lib/julia/sys.so (unknown line)
_jl_invoke at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2731 [inlined]
ijl_apply_generic at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2913
jl_apply at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/julia.h:1878 [inlined]
true_main at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/jlapi.c:573
jl_repl_entrypoint at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/jlapi.c:717
main at julia (unknown line)
unknown function (ip: 0x7f1eef44a50f)
__libc_start_main at /lib/x86_64-linux-gnu/libc.so.6 (unknown line)
unknown function (ip: 0x401098)
Allocations: 81577167 (Pool: 81552516; Big: 24651); GC: 121
[1] 62277 segmentation fault (core dumped) julia --project --check-bounds=yes
```
rr trace: https://julialang-dumps.s3.amazonaws.com/reports/2023-03-24T14-53-27-jishnub.tar.zst
I have no idea what's leading to this issue, and some help would be greatly appreciated. In case this has already been fixed, my apologies.
My versioninfo:
```julia
julia> versioninfo()
Julia Version 1.9.0-rc1
Commit 3b2e0d8fbc1 (2023-03-07 07:51 UTC)
Platform Info:
OS: Linux (x86_64-linux-gnu)
CPU: 8 × 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-14.0.6 (ORCJIT, tigerlake)
Threads: 1 on 8 virtual cores
Environment:
LD_LIBRARY_PATH = :/usr/lib/x86_64-linux-gnu/gtk-3.0/modules
JULIA_EDITOR = subl
``` | 1.0 | Segmentation fault on Julia v1.9.0-rc1 - I'm using `ApproxFun` with https://github.com/jishnub/ApproxFunBase.jl/tree/pad, and I obtain an intermittent segfault with bounds-checking enabled while starting Julia
```
julia --project --check-bounds=yes
```
The code that I'm running is
```julia
julia> using ApproxFun, LinearAlgebra
[ Info: Precompiling ApproxFun [28f2ccd6-bb30-5033-b560-165f7b14dc2f]
julia> x = Fun(identity, -1..1);
julia> f = cos(x-0.1)*sqrt(1-x^2) + exp(x);
[62277] signal (11.128): Segmentation fault
in expression starting at REPL[3]:1
jl_gc_pool_alloc_inner at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gc.c:1332 [inlined]
jl_gc_pool_alloc_noinline at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gc.c:1385 [inlined]
jl_gc_alloc_ at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/julia_internal.h:459 [inlined]
jl_gc_alloc at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gc.c:3619
_new_array_ at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/array.c:134 [inlined]
_new_array at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/array.c:198 [inlined]
ijl_alloc_array_1d at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/array.c:436
Array at ./boot.jl:477 [inlined]
compute_basic_blocks at ./compiler/ssair/ir.jl:98
inflate_ir! at ./compiler/ssair/legacy.jl:22
inflate_ir! at ./compiler/ssair/legacy.jl:18
retrieve_ir_for_inlining at ./compiler/ssair/inlining.jl:987 [inlined]
#resolve_todo#463 at ./compiler/ssair/inlining.jl:890
resolve_todo at ./compiler/ssair/inlining.jl:840 [inlined]
#analyze_method!#464 at ./compiler/ssair/inlining.jl:982
analyze_method! at ./compiler/ssair/inlining.jl:943 [inlined]
#handle_match!#468 at ./compiler/ssair/inlining.jl:1460
handle_match! at ./compiler/ssair/inlining.jl:1450 [inlined]
#handle_any_const_result!#465 at ./compiler/ssair/inlining.jl:1322
handle_any_const_result! at ./compiler/ssair/inlining.jl:1304 [inlined]
compute_inlining_cases at ./compiler/ssair/inlining.jl:1397
handle_call! at ./compiler/ssair/inlining.jl:1443 [inlined]
assemble_inline_todo! at ./compiler/ssair/inlining.jl:1669
ssa_inlining_pass! at ./compiler/ssair/inlining.jl:79 [inlined]
run_passes at ./compiler/optimize.jl:539
run_passes at ./compiler/optimize.jl:554 [inlined]
optimize at ./compiler/optimize.jl:503 [inlined]
_typeinf at ./compiler/typeinfer.jl:271
typeinf at ./compiler/typeinfer.jl:215
typeinf_edge at ./compiler/typeinfer.jl:931
abstract_call_method at ./compiler/abstractinterpretation.jl:609
abstract_call_gf_by_type at ./compiler/abstractinterpretation.jl:152
abstract_call_known at ./compiler/abstractinterpretation.jl:1930
jfptr_abstract_call_known_19252.clone_1 at /home/jishnu/packages/julias/julia-1.9/lib/julia/sys.so (unknown line)
tojlinvoke21206.clone_1 at /home/jishnu/packages/julias/julia-1.9/lib/julia/sys.so (unknown line)
j_abstract_call_known_17183.clone_1 at /home/jishnu/packages/julias/julia-1.9/lib/julia/sys.so (unknown line)
abstract_call at ./compiler/abstractinterpretation.jl:2001
abstract_call at ./compiler/abstractinterpretation.jl:1980
abstract_eval_statement_expr at ./compiler/abstractinterpretation.jl:2164
abstract_eval_statement at ./compiler/abstractinterpretation.jl:2377
abstract_eval_basic_statement at ./compiler/abstractinterpretation.jl:2641
typeinf_local at ./compiler/abstractinterpretation.jl:2850
typeinf_nocycle at ./compiler/abstractinterpretation.jl:2938
_typeinf at ./compiler/typeinfer.jl:244
typeinf at ./compiler/typeinfer.jl:215
typeinf_ext at ./compiler/typeinfer.jl:1056
typeinf_ext_toplevel at ./compiler/typeinfer.jl:1089
typeinf_ext_toplevel at ./compiler/typeinfer.jl:1085
jfptr_typeinf_ext_toplevel_19458.clone_1 at /home/jishnu/packages/julias/julia-1.9/lib/julia/sys.so (unknown line)
_jl_invoke at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2731 [inlined]
ijl_apply_generic at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2913
jl_apply at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/julia.h:1878 [inlined]
jl_type_infer at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:317
jl_generate_fptr_impl at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/jitlayers.cpp:444
jl_compile_method_internal at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2321 [inlined]
jl_compile_method_internal at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2210
_jl_invoke at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2723 [inlined]
ijl_apply_generic at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2913
jl_apply at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/julia.h:1878 [inlined]
do_call at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/interpreter.c:126
eval_value at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/interpreter.c:226
eval_stmt_value at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/interpreter.c:177 [inlined]
eval_body at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/interpreter.c:624
jl_interpret_toplevel_thunk at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/interpreter.c:762
jl_toplevel_eval_flex at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/toplevel.c:912
jl_toplevel_eval_flex at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/toplevel.c:856
jl_toplevel_eval_flex at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/toplevel.c:856
jl_toplevel_eval_flex at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/toplevel.c:856
jl_toplevel_eval_flex at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/toplevel.c:856
ijl_toplevel_eval_in at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/toplevel.c:971
eval at ./boot.jl:370 [inlined]
eval_user_input at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/usr/share/julia/stdlib/v1.9/REPL/src/REPL.jl:153
repl_backend_loop at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/usr/share/julia/stdlib/v1.9/REPL/src/REPL.jl:249
#start_repl_backend#46 at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/usr/share/julia/stdlib/v1.9/REPL/src/REPL.jl:234
start_repl_backend at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/usr/share/julia/stdlib/v1.9/REPL/src/REPL.jl:231
_jl_invoke at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2731 [inlined]
ijl_apply_generic at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2913
#run_repl#59 at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/usr/share/julia/stdlib/v1.9/REPL/src/REPL.jl:377
run_repl at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/usr/share/julia/stdlib/v1.9/REPL/src/REPL.jl:363
jfptr_run_repl_60577.clone_1 at /home/jishnu/packages/julias/julia-1.9/lib/julia/sys.so (unknown line)
_jl_invoke at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2731 [inlined]
ijl_apply_generic at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2913
#1017 at ./client.jl:421
jfptr_YY.1017_27809.clone_1 at /home/jishnu/packages/julias/julia-1.9/lib/julia/sys.so (unknown line)
_jl_invoke at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2731 [inlined]
ijl_apply_generic at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2913
jl_apply at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/julia.h:1878 [inlined]
jl_f__call_latest at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/builtins.c:774
#invokelatest#2 at ./essentials.jl:816 [inlined]
invokelatest at ./essentials.jl:813 [inlined]
run_main_repl at ./client.jl:405
exec_options at ./client.jl:322
_start at ./client.jl:522
jfptr__start_33350.clone_1 at /home/jishnu/packages/julias/julia-1.9/lib/julia/sys.so (unknown line)
_jl_invoke at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2731 [inlined]
ijl_apply_generic at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/gf.c:2913
jl_apply at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/julia.h:1878 [inlined]
true_main at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/jlapi.c:573
jl_repl_entrypoint at /cache/build/default-amdci5-5/julialang/julia-release-1-dot-9/src/jlapi.c:717
main at julia (unknown line)
unknown function (ip: 0x7f1eef44a50f)
__libc_start_main at /lib/x86_64-linux-gnu/libc.so.6 (unknown line)
unknown function (ip: 0x401098)
Allocations: 81577167 (Pool: 81552516; Big: 24651); GC: 121
[1] 62277 segmentation fault (core dumped) julia --project --check-bounds=yes
```
rr trace: https://julialang-dumps.s3.amazonaws.com/reports/2023-03-24T14-53-27-jishnub.tar.zst
I have no idea what's leading to this issue, and some help would be greatly appreciated. In case this has already been fixed, my apologies.
My versioninfo:
```julia
julia> versioninfo()
Julia Version 1.9.0-rc1
Commit 3b2e0d8fbc1 (2023-03-07 07:51 UTC)
Platform Info:
OS: Linux (x86_64-linux-gnu)
CPU: 8 × 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-14.0.6 (ORCJIT, tigerlake)
Threads: 1 on 8 virtual cores
Environment:
LD_LIBRARY_PATH = :/usr/lib/x86_64-linux-gnu/gtk-3.0/modules
JULIA_EDITOR = subl
``` | non_process | segmentation fault on julia i m using approxfun with and i obtain an intermittent segfault with bounds checking enabled while starting julia julia project check bounds yes the code that i m running is julia julia using approxfun linearalgebra julia x fun identity julia f cos x sqrt x exp x signal segmentation fault in expression starting at repl jl gc pool alloc inner at cache build default julialang julia release dot src gc c jl gc pool alloc noinline at cache build default julialang julia release dot src gc c jl gc alloc at cache build default julialang julia release dot src julia internal h jl gc alloc at cache build default julialang julia release dot src gc c new array at cache build default julialang julia release dot src array c new array at cache build default julialang julia release dot src array c ijl alloc array at cache build default julialang julia release dot src array c array at boot jl compute basic blocks at compiler ssair ir jl inflate ir at compiler ssair legacy jl inflate ir at compiler ssair legacy jl retrieve ir for inlining at compiler ssair inlining jl resolve todo at compiler ssair inlining jl resolve todo at compiler ssair inlining jl analyze method at compiler ssair inlining jl analyze method at compiler ssair inlining jl handle match at compiler ssair inlining jl handle match at compiler ssair inlining jl handle any const result at compiler ssair inlining jl handle any const result at compiler ssair inlining jl compute inlining cases at compiler ssair inlining jl handle call at compiler ssair inlining jl assemble inline todo at compiler ssair inlining jl ssa inlining pass at compiler ssair inlining jl run passes at compiler optimize jl run passes at compiler optimize jl optimize at compiler optimize jl typeinf at compiler typeinfer jl typeinf at compiler typeinfer jl typeinf edge at compiler typeinfer jl abstract call method at compiler abstractinterpretation jl abstract call gf by type at compiler 
abstractinterpretation jl abstract call known at compiler abstractinterpretation jl jfptr abstract call known clone at home jishnu packages julias julia lib julia sys so unknown line clone at home jishnu packages julias julia lib julia sys so unknown line j abstract call known clone at home jishnu packages julias julia lib julia sys so unknown line abstract call at compiler abstractinterpretation jl abstract call at compiler abstractinterpretation jl abstract eval statement expr at compiler abstractinterpretation jl abstract eval statement at compiler abstractinterpretation jl abstract eval basic statement at compiler abstractinterpretation jl typeinf local at compiler abstractinterpretation jl typeinf nocycle at compiler abstractinterpretation jl typeinf at compiler typeinfer jl typeinf at compiler typeinfer jl typeinf ext at compiler typeinfer jl typeinf ext toplevel at compiler typeinfer jl typeinf ext toplevel at compiler typeinfer jl jfptr typeinf ext toplevel clone at home jishnu packages julias julia lib julia sys so unknown line jl invoke at cache build default julialang julia release dot src gf c ijl apply generic at cache build default julialang julia release dot src gf c jl apply at cache build default julialang julia release dot src julia h jl type infer at cache build default julialang julia release dot src gf c jl generate fptr impl at cache build default julialang julia release dot src jitlayers cpp jl compile method internal at cache build default julialang julia release dot src gf c jl compile method internal at cache build default julialang julia release dot src gf c jl invoke at cache build default julialang julia release dot src gf c ijl apply generic at cache build default julialang julia release dot src gf c jl apply at cache build default julialang julia release dot src julia h do call at cache build default julialang julia release dot src interpreter c eval value at cache build default julialang julia release dot src interpreter c eval stmt 
value at cache build default julialang julia release dot src interpreter c eval body at cache build default julialang julia release dot src interpreter c jl interpret toplevel thunk at cache build default julialang julia release dot src interpreter c jl toplevel eval flex at cache build default julialang julia release dot src toplevel c jl toplevel eval flex at cache build default julialang julia release dot src toplevel c jl toplevel eval flex at cache build default julialang julia release dot src toplevel c jl toplevel eval flex at cache build default julialang julia release dot src toplevel c jl toplevel eval flex at cache build default julialang julia release dot src toplevel c ijl toplevel eval in at cache build default julialang julia release dot src toplevel c eval at boot jl eval user input at cache build default julialang julia release dot usr share julia stdlib repl src repl jl repl backend loop at cache build default julialang julia release dot usr share julia stdlib repl src repl jl start repl backend at cache build default julialang julia release dot usr share julia stdlib repl src repl jl start repl backend at cache build default julialang julia release dot usr share julia stdlib repl src repl jl jl invoke at cache build default julialang julia release dot src gf c ijl apply generic at cache build default julialang julia release dot src gf c run repl at cache build default julialang julia release dot usr share julia stdlib repl src repl jl run repl at cache build default julialang julia release dot usr share julia stdlib repl src repl jl jfptr run repl clone at home jishnu packages julias julia lib julia sys so unknown line jl invoke at cache build default julialang julia release dot src gf c ijl apply generic at cache build default julialang julia release dot src gf c at client jl jfptr yy clone at home jishnu packages julias julia lib julia sys so unknown line jl invoke at cache build default julialang julia release dot src gf c ijl apply generic at 
cache build default julialang julia release dot src gf c jl apply at cache build default julialang julia release dot src julia h jl f call latest at cache build default julialang julia release dot src builtins c invokelatest at essentials jl invokelatest at essentials jl run main repl at client jl exec options at client jl start at client jl jfptr start clone at home jishnu packages julias julia lib julia sys so unknown line jl invoke at cache build default julialang julia release dot src gf c ijl apply generic at cache build default julialang julia release dot src gf c jl apply at cache build default julialang julia release dot src julia h true main at cache build default julialang julia release dot src jlapi c jl repl entrypoint at cache build default julialang julia release dot src jlapi c main at julia unknown line unknown function ip libc start main at lib linux gnu libc so unknown line unknown function ip allocations pool big gc segmentation fault core dumped julia project check bounds yes rr trace i have no idea what s leading to this issue and some help would be greatly appreciated in case this has already been fixed my apologies my versioninfo julia julia versioninfo julia version commit utc platform info os linux linux gnu cpu × gen intel r core tm word size libm libopenlibm llvm libllvm orcjit tigerlake threads on virtual cores environment ld library path usr lib linux gnu gtk modules julia editor subl | 0 |
14,045 | 16,850,122,484 | IssuesEvent | 2021-06-20 10:34:19 | log2timeline/plaso | https://api.github.com/repos/log2timeline/plaso | closed | Change knowledge base to handle MUI form time zone names on Windows | enhancement preprocessing | As a follow up of https://github.com/log2timeline/plaso/issues/2673
The `StandardName` Windows Registry value in the key `HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\TimeZoneInformation` can contain a MUI form.
Similar to the localized names, map the MUI form name to the "normalized" name using the `MUI_Std` value of the available time zones: https://winreg-kb.readthedocs.io/en/latest/sources/system-keys/Time-zones.html#time-zones-timezonename-sub-key
| 1.0 | Change knowledge base to handle MUI form time zone names on Windows - As a follow up of https://github.com/log2timeline/plaso/issues/2673
The `StandardName` Windows Registry value in the key `HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\TimeZoneInformation` can contain a MUI form.
Similar to the localized names, map the MUI form name to the "normalized" name using the `MUI_Std` value of the available time zones: https://winreg-kb.readthedocs.io/en/latest/sources/system-keys/Time-zones.html#time-zones-timezonename-sub-key
| process | change knowledge base to handle mui form time zone names on windows as a follow up of the standardname windows registry value in the key hkey local machine system currentcontrolset control timezoneinformation can contain a mui form similar to the localized names map the mui form name to the normalized name using the mui std value of the available time zones | 1
157,151 | 13,674,602,980 | IssuesEvent | 2020-09-29 11:30:37 | AY2021S1-CS2103T-W15-1/tp | https://api.github.com/repos/AY2021S1-CS2103T-W15-1/tp | closed | Add Target user profile, value proposition, and user stories to Developer's Guide | Developer's Guide documentation | Update the target user profile and value proposition to match the project direction you have selected. Give a list of the user stories (and update/delete existing ones, if applicable), including priorities. This can include user stories considered but will not be included in the final product. | 1.0 | Add Target user profile, value proposition, and user stories to Developer's Guide - Update the target user profile and value proposition to match the project direction you have selected. Give a list of the user stories (and update/delete existing ones, if applicable), including priorities. This can include user stories considered but will not be included in the final product. | non_process | add target user profile value proposition and user stories to developer s guide update the target user profile and value proposition to match the project direction you have selected give a list of the user stories and update delete existing ones if applicable including priorities this can include user stories considered but will not be included in the final product | 0 |
13,763 | 23,680,885,486 | IssuesEvent | 2022-08-28 19:34:53 | kysect/Shreks | https://api.github.com/repos/kysect/Shreks | closed | Integrate the Application API into the hook handling module | requirement | The hook handling needs to be integrated with the **application layer** once the **application layer** is ready.
## Tasks
- [x] #101
- [x] #133
| 1.0 | Integrate the Application API into the hook handling module - The hook handling needs to be integrated with the **application layer** once the **application layer** is ready.
## Tasks
- [x] #101
- [x] #133
| non_process | integrate the application api into the hook handling module the hook handling needs to be integrated with the application layer once the application layer is ready tasks | 0
5,149 | 7,929,662,743 | IssuesEvent | 2018-07-06 15:48:43 | symfony/symfony | https://api.github.com/repos/symfony/symfony | closed | [Process] Bug with the requirement to prepend `exec` to the command line | Bug Process Status: Needs Review Status: Waiting feedback Unconfirmed | **Symfony version(s) affected**: Possibly all, but was tested on 4.1
**Description**
When creating a new `Process` object, it will prepend `exec` to the command line to be able to communicate with the process and send signals, but only if provided command line is an array.
However, on Linux Alpine distributions, `exec` does not seem to be available, implying the failure of any `Process` object created, whatever the command.
The problem is that this `exec` prefix is added only if the provided command line is an array, and is used as-is when it is provided as a string.
**How to reproduce**
Take these two examples:
```php
$processArray = new Process(['ls', '-l']);
$processString = new Process('ls -l');
```
Add this `dump()` statement after these lines:
https://github.com/symfony/symfony/blob/7135aa43380c26a38524a045b42414efb45cf48f/src/Symfony/Component/Process/Process.php#L261-L268
```php
dump($commandline);
```
You should see something like this:
```
"exec ls -l" <-- command line provided as array
"ls -l" <-- command line provided as string
```
**Questions**
Why is this `exec` prepended only when command line is an array?
Does it have to be prepended if it is a string too?
**Possible Solution**
For now, the workaround is to provide a command line as a string instead of an array. For the rest, it depends on the questions above.
| 1.0 | [Process] Bug with the requirement to prepend `exec` to the command line - **Symfony version(s) affected**: Possibly all, but was tested on 4.1
**Description**
When creating a new `Process` object, it will prepend `exec` to the command line to be able to communicate with the process and send signals, but only if provided command line is an array.
However, on Linux Alpine distributions, `exec` does not seem to be available, implying the failure of any `Process` object created, whatever the command.
The problem is that this `exec` prefix is added only if the provided command line is an array, and is used as-is when it is provided as a string.
**How to reproduce**
Take these two examples:
```php
$processArray = new Process(['ls', '-l']);
$processString = new Process('ls -l');
```
Add this `dump()` statement after these lines:
https://github.com/symfony/symfony/blob/7135aa43380c26a38524a045b42414efb45cf48f/src/Symfony/Component/Process/Process.php#L261-L268
```php
dump($commandline);
```
You should see something like this:
```
"exec ls -l" <-- command line provided as array
"ls -l" <-- command line provided as string
```
**Questions**
Why is this `exec` prepended only when command line is an array?
Does it have to be prepended if it is a string too?
**Possible Solution**
For now, the workaround is to provide a command line as a string instead of an array. For the rest, it depends on the questions above.
| process | bug with the requirement to prepend exec to the command line symfony version s affected possibly all but was tested on description when creating a new process object it will prepend exec to the command line to be able to communicate with the process and send signals but only if provided command line is an array however on linux alpine distributions exec does not seem to be available implying the failure of any process object created whatever the command the problem is that this exec prefix is added only if the provided command line is an array and is used as is when it is provided as a string how to reproduce take these two examples php processarray new process processstring new process ls l add this dump statement after these lines php dump commandline you should see something like this exec ls l command line provided as array ls l command line provided as string questions why is this exec prepended only when command line is an array does it have to be prepended if it is a string too possible solution for now workaround is to provide a command line as a string instead of an array but for the rest it depends on the questions above | 1 |
21,634 | 30,051,113,267 | IssuesEvent | 2023-06-28 00:34:08 | jandevel/kaliningradka | https://api.github.com/repos/jandevel/kaliningradka | opened | Define train, val and test data | Data Processing | - Consider 46 and 47 years as one year
- Take 2 issues per year for train
- Take 1 issue per year for validation
- Take 1 issue per year for test | 1.0 | Define train, val and test data - - Consider 46 and 47 years as one year
- Take 2 issues per year for train
- Take 1 issue per year for validation
- Take 1 issue per year for test | process | define train val and test data consider and years as one year take issues per year for train take issue per year for validation take issue per year for test | 1 |
16,314 | 20,968,985,293 | IssuesEvent | 2022-03-28 09:33:32 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | Changes to movement in host | multi-species process | From multiorg call:
Some terms in the 'movement in host' branch are redundant. We propose to merge:
* GO:0035890 'exit from host' -> 0 annotations -> merge into GO:0035891 'exit from host cell'
* GO:0052126 movement in host environment -> merge into GO:0044000 movement in host | 1.0 | Changes to movement in host - From multiorg call:
Some terms in the 'movement in host' branch are redundant. We propose to merge:
* GO:0035890 'exit from host' -> 0 annotations -> merge into GO:0035891 'exit from host cell'
* GO:0052126 movement in host environment -> merge into GO:0044000 movement in host | process | changes to movement in host from multiorg call some terms in the movement in host branch are redundant we propose to merge go exit from host annotations merge into go exit from host cell go movement in host environment merge into go movement in host | 1
3,289 | 6,384,510,295 | IssuesEvent | 2017-08-03 05:21:19 | rubberduck-vba/Rubberduck | https://api.github.com/repos/rubberduck-vba/Rubberduck | closed | Parse Errors seem to be associated with line numbers on IF blocks | antlr bug parse-tree-processing | Not sure if this is related to issue #3012
This code Fails to parse:
```vb
Sub TestRubberduckParsing()
100 If True Then
102 Debug.Print "True"
104 Else
106 Debug.Print "False"
108 End If
End Sub
```
Also fails to parse:
```vb
Sub TestRubberduckParsing()
100 If True Then
Debug.Print "True"
102 Else
Debug.Print "False"
106 End If
End Sub
```
This code Does parse:
```vb
Sub TestRubberduckParsing()
100 If True Then
Debug.Print "True"
Else
Debug.Print "False"
End If
End Sub
```
This code will also parse:
```vb
Sub TestRubberduckParsing()
If True Then
102 Debug.Print "True"
Else
106 Debug.Print "False"
End If
End Sub
```
This code will also parse:
```vb
Sub TestRubberduckParsing()
100 If True Then
102 Debug.Print "True"
Else
106 Debug.Print "False"
End If
End Sub
```
I think this also impacts Do While ... and some other blocks but have not had the chance to check properly. | 1.0 | Parse Errors seem to be associated with line numbers on IF blocks - Not sure if this is related to issue #3012
This code Fails to parse:
```vb
Sub TestRubberduckParsing()
100 If True Then
102 Debug.Print "True"
104 Else
106 Debug.Print "False"
108 End If
End Sub
```
Also fails to parse:
```vb
Sub TestRubberduckParsing()
100 If True Then
Debug.Print "True"
102 Else
Debug.Print "False"
106 End If
End Sub
```
This code Does parse:
```vb
Sub TestRubberduckParsing()
100 If True Then
Debug.Print "True"
Else
Debug.Print "False"
End If
End Sub
```
This code will also parse:
```vb
Sub TestRubberduckParsing()
If True Then
102 Debug.Print "True"
Else
106 Debug.Print "False"
End If
End Sub
```
This code will also parse:
```vb
Sub TestRubberduckParsing()
100 If True Then
102 Debug.Print "True"
Else
106 Debug.Print "False"
End If
End Sub
```
I think this also impacts Do While ... and some other blocks but have not had the chance to check properly. | process | parse errors seem to be associated with line numbers on if blocks not sure if this is related to issue this code fails to parse vb sub testrubberduckparsing if true then debug print true else debug print false end if end sub also fails to parse vb sub testrubberduckparsing if true then debug print true else debug print false end if end sub this code does parse vb sub testrubberduckparsing if true then debug print true else debug print false end if end sub this code will also parse vb sub testrubberduckparsing if true then debug print true else debug print false end if end sub this code will also parse vb sub testrubberduckparsing if true then debug print true else debug print false end if end sub i think this also impacts do while and some other blocks but have not had the chance to check properly | 1 |
17,521 | 23,329,297,742 | IssuesEvent | 2022-08-09 02:19:47 | streamnative/flink | https://api.github.com/repos/streamnative/flink | closed | [Enhancement][FLINK-28085] PulsarUnorderedSourceReader should close all the pending transaction when shutdown. | compute/data-processing type/enhancement | Currently transactionId is not persisted. After a job restart we lose handle to the transaction which is still not aborted in Pulsar broker. Pulsar broker will abort these hanging transactions after a timeout but this is not desirable. We need to close all the pending transactionId. | 1.0 | [Enhancement][FLINK-28085] PulsarUnorderedSourceReader should close all the pending transaction when shutdown. - Currently transactionId is not persisted. After a job restart we lose handle to the transaction which is still not aborted in Pulsar broker. Pulsar broker will abort these hanging transactions after a timeout but this is not desirable. We need to close all the pending transactionId. | process | pulsarunorderedsourcereader should close all the pending transaction when shutdown currently transactionid is not persisted after a job restart we lose handle to the transaction which is still not aborted in pulsar broker pulsar broker will abort these hanging transactions after a timeout but this is not desirable we need to close all the pending transactionid | 1 |
279,546 | 24,233,866,028 | IssuesEvent | 2022-09-26 20:53:02 | OpenLiberty/open-liberty | https://api.github.com/repos/OpenLiberty/open-liberty | closed | Feature Test Summary for Java 19 support in Open Liberty | team:Zombie Apocalypse Feature Test Summary | ## Test Strategy
Please note, this Feature Test Summary provided is just used to claim [Java 19 Support in Open Liberty](https://github.com/OpenLiberty/open-liberty/issues/21142) and not to introduce any new or modified Open Liberty functionality.
Normally FAT testing is designed to validate the functionality of a new or changed feature in Open Liberty. Since there is no new OL functionality to test, the associated FAT test is just a simple check to make sure we are running on Java 19. To accomplish that, this FAT uses a WAR file for testing that has been compiled at Java 19 using functionality that is specific to Java 19.
The real testing validation for Java 19 is done via our entire suite of Open Liberty and WebSphere Liberty FAT buckets.
### List of FAT projects affected
* `io.openliberty.java.internal_fat` - https://github.com/OpenLiberty/open-liberty/pull/22328
* `com.ibm.ws.concurrent.mp_fat/test-applications/MPConcurrentApp/src/concurrent/mp/fat/web/MPConcurrentTestServlet`
- testNoNewMethods - Updated this test to tolerate the CompletableFuture interface having an additional 3 methods, which is the case on Java 19.
* `com.ibm.ws.threading_policy_fat/test-applications/basicfat/src/web/PolicyExecutorServlet`
- testClose - Use ExecutorService.close to shut down the executor and await completion of running tasks, if on Java 19 or above. Otherwise, use shutdown and awaitCompletion.
- testExceptionNow - Use Future.exceptionNow on futures for tasks that are: successfully completed, exceptionally completed, running, aborted due to exceeding start timeout, cancelled
- testResultNow - Use Future.resultNow on futures for tasks that are: successfully completed, exceptionally completed, running, aborted due to exceeding start timeout, cancelled
* `com.ibm.ws.concurrent_fat_jakarta/test-applications/ConcurrencyTestWeb/src/test/jakarta/concurrency/web/ConcurrencyTestServlet`
- testExceptionNow - Use Future.exceptionNow on managed completable futures that are successfully completed, exceptionally completed, running, forcibly completed, has its results replaced, cancelled
- testResultNow - Use Future.resultNow on managed completable futures that are successfully completed, exceptionally completed, running, forcibly completed, has its results replaced, cancelled
* `com.ibm.ws.concurrent_fat/test-applications/concurrentSpec/src/fat/concurrent/spec/app/EEConcurrencyTestServlet`
- testExceptionNowOnScheduledFuture - Test exceptionNow on a ScheduledFuture from a ManagedScheduledExecutorService.
- testResultNowOnScheduledFuture - Test resultNow on a ScheduledFuture from a ManagedScheduledExecutorService.
* `com.ibm.ws.concurrent.mp_fat_jakarta/test-applications/MPContextProp2_0_App/src/concurrent/mp/fat/v20/web/MPContextProp2_0_TestServlet`
- testClose - Use ExecutorService.close to shut down a MicroProfile ManagedExecutor and await completion of running tasks, if on Java 19 or above. Otherwise, use shutdown and awaitCompletion.
### Test strategy
* What functionality is new or modified by this feature?
* No new Liberty functionality is added or changed by this feature
* What are the positive and negative tests for that functionality?
* This FAT adds one simple test to ensure we are running on Java 19 by using functionality that is specific to Java 19.
* In this case, there is no correlation between what is being performed, random number generation, to what is actually being tested, which is verification that we are running on Java 19.
* Testing specifics
* For several weeks now, we have been running nightly (M-F) Java 19 builds using the latest [Java 19 OpenJDK Hotspot builds](https://jdk.java.net/19/) and tracking defects. This build runs in lite mode against all the Open and WebSphere Liberty FATs. We also periodically run a build against all the same FATs in full mode to make sure we discover any Java 19 specific defects.
## Confidence Level
Please indicate your confidence in the testing (up to and including FAT) delivered with this feature by selecting one of these values:
0 - No automated testing delivered
1 - We have minimal automated coverage of the feature including golden paths. There is a relatively high risk that defects or issues could be found in this feature.
2 - We have delivered a reasonable automated coverage of the golden paths of this feature but are aware of gaps and extra testing that could be done here. Error/outlying scenarios are not really covered. There are likely risks that issues may exist in the golden paths
3 - We have delivered all automated testing we believe is needed for the golden paths of this feature and minimal coverage of the error/outlying scenarios. There is a risk when the feature is used outside the golden paths however we are confident on the golden path. Note: This may still be a valid end state for a feature... things like Beta features may well suffice at this level.
4 - We have delivered all automated testing we believe is needed for the golden paths of this feature and have good coverage of the error/outlying scenarios. While more testing of the error/outlying scenarios could be added we believe there is minimal risk here and the cost of providing these is considered higher than the benefit they would provide.
5 - We have delivered all automated testing we believe is needed for this feature. The testing covers all golden path cases as well as all the error/outlying scenarios that make sense. We are not aware of any gaps in the testing at this time. No manual testing is required to verify this feature.
Based on your answer above, for any answer other than a 4 or 5 please provide details of what drove your answer. Please be aware, it may be perfectly reasonable in some scenarios to deliver with any value above. We may accept no automated testing is needed for some features, we may be happy with low levels of testing on samples for instance so please don't feel the need to drive to a 5. We need your honest assessment as a team and the reasoning for why you believe shipping at that level is valid. What are the gaps, what is the risk etc. Please also provide links to the follow on work that is needed to close the gaps (should you deem it needed)
| 1.0 | Feature Test Summary for Java 19 support in Open Liberty - ## Test Strategy
Please note, this Feature Test Summary provided is just used to claim [Java 19 Support in Open Liberty](https://github.com/OpenLiberty/open-liberty/issues/21142) and not to introduce any new or modified Open Liberty functionality.
Normally FAT testing is designed to validate the functionality of a new or changed feature in Open Liberty. Since there is no new OL functionality to test, the associated FAT test is just a simple check to make sure we are running on Java 19. To accomplish that, this FAT uses a WAR file for testing that has been compiled at Java 19 using functionality that is specific to Java 19.
The real testing validation for Java 19 is done via our entire suite of Open Liberty and WebSphere Liberty FAT buckets.
### List of FAT projects affected
* `io.openliberty.java.internal_fat` - https://github.com/OpenLiberty/open-liberty/pull/22328
* `com.ibm.ws.concurrent.mp_fat/test-applications/MPConcurrentApp/src/concurrent/mp/fat/web/MPConcurrentTestServlet`
- testNoNewMethods - Updated this test to tolerate the CompletableFuture interface having an additional 3 methods, which is the case on Java 19.
* `com.ibm.ws.threading_policy_fat/test-applications/basicfat/src/web/PolicyExecutorServlet`
- testClose - Use ExecutorService.close to shut down the executor and await completion of running tasks, if on Java 19 or above. Otherwise, use shutdown and awaitCompletion.
- testExceptionNow - Use Future.exceptionNow on futures for tasks that are: successfully completed, exceptionally completed, running, aborted due to exceeding start timeout, cancelled
- testResultNow - Use Future.resultNow on futures for tasks that are: successfully completed, exceptionally completed, running, aborted due to exceeding start timeout, cancelled
* `com.ibm.ws.concurrent_fat_jakarta/test-applications/ConcurrencyTestWeb/src/test/jakarta/concurrency/web/ConcurrencyTestServlet`
- testExceptionNow - Use Future.exceptionNow on managed completable futures that are successfully completed, exceptionally completed, running, forcibly completed, has its results replaced, cancelled
- testResultNow - Use Future.resultNow on managed completable futures that are successfully completed, exceptionally completed, running, forcibly completed, has its results replaced, cancelled
* `com.ibm.ws.concurrent_fat/test-applications/concurrentSpec/src/fat/concurrent/spec/app/EEConcurrencyTestServlet`
- testExceptionNowOnScheduledFuture - Test exceptionNow on a ScheduledFuture from a ManagedScheduledExecutorService.
- testResultNowOnScheduledFuture - Test resultNow on a ScheduledFuture from a ManagedScheduledExecutorService.
* `com.ibm.ws.concurrent.mp_fat_jakarta/test-applications/MPContextProp2_0_App/src/concurrent/mp/fat/v20/web/MPContextProp2_0_TestServlet`
- testClose - Use ExecutorService.close to shut down a MicroProfile ManagedExecutor and await completion of running tasks, if on Java 19 or above. Otherwise, use shutdown and awaitCompletion.
### Test strategy
* What functionality is new or modified by this feature?
* No new Liberty functionality is added or changed by this feature
* What are the positive and negative tests for that functionality?
* This FAT adds one simple test to ensure we are running on Java 19 by using functionality that is specific to Java 19.
* In this case, there is no correlation between what is being performed, random number generation, to what is actually being tested, which is verification that we are running on Java 19.
* Testing specifics
* For several weeks now, we have been running nightly (M-F) Java 19 builds using the latest [Java 19 OpenJDK Hotspot builds](https://jdk.java.net/19/) and tracking defects. This build runs in lite mode against all the Open and WebSphere Liberty FATs. We also periodically run a build against all the same FATs in full mode to make sure we discover any Java 19 specific defects.
## Confidence Level
Please indicate your confidence in the testing (up to and including FAT) delivered with this feature by selecting one of these values:
0 - No automated testing delivered
1 - We have minimal automated coverage of the feature including golden paths. There is a relatively high risk that defects or issues could be found in this feature.
2 - We have delivered a reasonable automated coverage of the golden paths of this feature but are aware of gaps and extra testing that could be done here. Error/outlying scenarios are not really covered. There are likely risks that issues may exist in the golden paths
3 - We have delivered all automated testing we believe is needed for the golden paths of this feature and minimal coverage of the error/outlying scenarios. There is a risk when the feature is used outside the golden paths however we are confident on the golden path. Note: This may still be a valid end state for a feature... things like Beta features may well suffice at this level.
4 - We have delivered all automated testing we believe is needed for the golden paths of this feature and have good coverage of the error/outlying scenarios. While more testing of the error/outlying scenarios could be added we believe there is minimal risk here and the cost of providing these is considered higher than the benefit they would provide.
5 - We have delivered all automated testing we believe is needed for this feature. The testing covers all golden path cases as well as all the error/outlying scenarios that make sense. We are not aware of any gaps in the testing at this time. No manual testing is required to verify this feature.
Based on your answer above, for any answer other than a 4 or 5 please provide details of what drove your answer. Please be aware, it may be perfectly reasonable in some scenarios to deliver with any value above. We may accept no automated testing is needed for some features, we may be happy with low levels of testing on samples for instance so please don't feel the need to drive to a 5. We need your honest assessment as a team and the reasoning for why you believe shipping at that level is valid. What are the gaps, what is the risk etc. Please also provide links to the follow on work that is needed to close the gaps (should you deem it needed)
| non_process | feature test summary for java support in open liberty test strategy please note this feature test summary provided is just used to claim and not to introduce any new or modified open liberty functionality normally fat testing is designed to validate the functionality of a new or changed feature in open liberty since there is no new ol functionality to test the associated fat test is just a simple check to make sure we are running on java to accomplish that this fat uses a war file for testing that has been compiled at java using functionality that is specific to java the real testing validation for java is done via our entire suite of open liberty and websphere liberty fat buckets list of fat projects affected io openliberty java internal fat com ibm ws concurrent mp fat test applications mpconcurrentapp src concurrent mp fat web mpconcurrenttestservlet testnonewmethods updated this test to tolerate the completablefuture interface having an additional methods which is the case on java com ibm ws threading policy fat test applications basicfat src web policyexecutorservlet testclose use executorservice close to shut down the executor and await completion of running tasks if on java or above otherwise use shutdown and awaitcompletion testexceptionnow use future exceptionnow on futures for tasks that are successfully completed exceptionally completed running aborted due to exceeding start timeout cancelled testresultnow use future resultnow on futures for tasks that are successfully completed exceptionally completed running aborted due to exceeding start timeout cancelled com ibm ws concurrent fat jakarta test applications concurrencytestweb src test jakarta concurrency web concurrencytestservlet testexceptionnow use future exceptionnow on managed completable futures that are successfully completed exceptionally completed running forcibly completed has its results replaced cancelled testresultnow use future resultnow on managed completable futures that 
are successfully completed exceptionally completed running forcibly completed has its results replaced cancelled com ibm ws concurrent fat test applications concurrentspec src fat concurrent spec app eeconcurrencytestservlet testexceptionnowonscheduledfuture test exceptionnow on a scheduledfuture from a managedscheduledexecutorservice testresultnowonscheduledfuture test resultnow on a scheduledfuture from a managedscheduledexecutorservice com ibm ws concurrent mp fat jakarta test applications app src concurrent mp fat web testservlet testclose use executorservice close to shut down a microprofile managedexecutor and await completion of running tasks if on java or above otherwise use shutdown and awaitcompletion test strategy what functionality is new or modified by this feature no new liberty functionality is added or changed by this feature what are the positive and negative tests for that functionality this fat adds one simple test to ensure we are running on java by using functionality that is specific to java in this case there is no correlation between what is being performed random number generation to what is actually being tested which is verification that we are running on java testing specifics for several weeks now we have been running nightly m f java builds using the latest and tracking defects this build runs in lite mode against all the open and websphere liberty fats we also periodically run a build against all the same fats in full mode to make sure we discover any java specific defects confidence level please indicate your confidence in the testing up to and including fat delivered with this feature by selecting one of these values no automated testing delivered we have minimal automated coverage of the feature including golden paths there is a relatively high risk that defects or issues could be found in this feature we have delivered a reasonable automated coverage of the golden paths of this feature but are aware of gaps and extra testing that 
could be done here error outlying scenarios are not really covered there are likely risks that issues may exist in the golden paths we have delivered all automated testing we believe is needed for the golden paths of this feature and minimal coverage of the error outlying scenarios there is a risk when the feature is used outside the golden paths however we are confident on the golden path note this may still be a valid end state for a feature things like beta features may well suffice at this level we have delivered all automated testing we believe is needed for the golden paths of this feature and have good coverage of the error outlying scenarios while more testing of the error outlying scenarios could be added we believe there is minimal risk here and the cost of providing these is considered higher than the benefit they would provide we have delivered all automated testing we believe is needed for this feature the testing covers all golden path cases as well as all the error outlying scenarios that make sense we are not aware of any gaps in the testing at this time no manual testing is required to verify this feature based on your answer above for any answer other than a or please provide details of what drove your answer please be aware it may be perfectly reasonable in some scenarios to deliver with any value above we may accept no automated testing is needed for some features we may be happy with low levels of testing on samples for instance so please don t feel the need to drive to a we need your honest assessment as a team and the reasoning for why you believe shipping at that level is valid what are the gaps what is the risk etc please also provide links to the follow on work that is needed to close the gaps should you deem it needed | 0 |
15,361 | 19,531,962,378 | IssuesEvent | 2021-12-30 18:44:59 | joscha-alisch/dyve | https://api.github.com/repos/joscha-alisch/dyve | closed | Process: Build/Release Binaries | process | We should not only build docker images, but also provide binaries for all major operating systems directly. | 1.0 | Process: Build/Release Binaries - We should not only build docker images, but also provide binaries for all major operating systems directly. | process | process build release binaries we should not only build docker images but also provide binaries for all major operating systems directly | 1 |
397,721 | 11,731,687,449 | IssuesEvent | 2020-03-11 01:00:01 | grpc/grpc | https://api.github.com/repos/grpc/grpc | closed | what's the difference between call_cq and notification_cq. | kind/bug priority/P2 | <!--
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers here:
- grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
- StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
grpc version: 1.27.3
C++11
### What operating system (Linux, Windows,...) and version?
Ubuntu 18.04
### What runtime / compiler are you using (e.g. python version or version of gcc)
gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
### What did you do?
::grpc::Service::RequestAsyncServerStreaming(
int index, ::grpc_impl::ServerContext* context, Message* request,
internal::ServerAsyncStreamingInterface* stream,
::grpc_impl::CompletionQueue* call_cq,
::grpc_impl::ServerCompletionQueue* notification_cq, void* tag)
what's the difference between call_cq and notification_cq.
They are used for different event type?
| 1.0 | what's the difference between call_cq and notification_cq. - <!--
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers here:
- grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
- StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
grpc version: 1.27.3
C++11
### What operating system (Linux, Windows,...) and version?
Ubuntu 18.04
### What runtime / compiler are you using (e.g. python version or version of gcc)
gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
### What did you do?
::grpc::Service::RequestAsyncServerStreaming(
int index, ::grpc_impl::ServerContext* context, Message* request,
internal::ServerAsyncStreamingInterface* stream,
::grpc_impl::CompletionQueue* call_cq,
::grpc_impl::ServerCompletionQueue* notification_cq, void* tag)
what's the difference between call_cq and notification_cq.
They are used for different event type?
| non_process | what s the difference between call cq and notification cq this form is for bug reports and feature requests only for general questions and troubleshooting please ask look for answers here grpc io mailing list stackoverflow with grpc tag issues specific to grpc java grpc go grpc node grpc dart grpc web should be created in the repository they belong to e g what version of grpc and what language are you using grpc version c what operating system linux windows and version ubuntu what runtime compiler are you using e g python version or version of gcc gcc ubuntu what did you do grpc service requestasyncserverstreaming int index grpc impl servercontext context message request internal serverasyncstreaminginterface stream grpc impl completionqueue call cq grpc impl servercompletionqueue notification cq void tag what s the difference between call cq and notification cq they are used for different event type | 0 |
8,252 | 11,421,370,944 | IssuesEvent | 2020-02-03 12:02:40 | parcel-bundler/parcel | https://api.github.com/repos/parcel-bundler/parcel | closed | Importing empty stylus file fails | :bug: Bug CSS Preprocessing Stale |
Hello. I am trying to import a stylus file from a JSX file like this:
```js
import React from "react"
import "./Modal.styl"
...
```
This generally works except when the stylus file is empty. When it is I get the following error:
`Cannot read property 'render' of nul` (note the null with one 'l')
If I add anything to the stylus file, even a comment, or blank line, everything works as expected.
Here is our `.babelrc`:
```json
{
"presets": [
"env", "react"
],
"plugins": [
["transform-class-properties", {"spec": true}]
]
}
```
I'm running parcel v1.12.3 on Node v10.15.3
| 1.0 | Importing empty stylus file fails -
Hello. I am trying to import a stylus file from a JSX file like this:
```js
import React from "react"
import "./Modal.styl"
...
```
This generally works except when the stylus file is empty. When it is I get the following error:
`Cannot read property 'render' of nul` (note the null with one 'l')
If I add anything to the stylus file, even a comment, or blank line, everything works as expected.
Here is our `.babelrc`:
```json
{
"presets": [
"env", "react"
],
"plugins": [
["transform-class-properties", {"spec": true}]
]
}
```
I'm running parcel v1.12.3 on Node v10.15.3
| process | importing empty stylus file fails hello i am trying to import a stylus file from a jsx file like this js import react from react import modal styl this generally works except when the stylus file is empty when it is i get the following error cannot read property render of nul note the null with one l if i add anything to the stylus file even a comment or blank line everything works as expected here is our babelrc json presets env react plugins i m running parcel on node | 1 |
12,055 | 14,739,212,317 | IssuesEvent | 2021-01-07 06:44:08 | kdjstudios/SABillingGitlab | https://api.github.com/repos/kdjstudios/SABillingGitlab | closed | Stockton - CC SAB Error | anc-process anp-1 ant-bug has attachment | In GitLab by @kdjstudios on Sep 5, 2018, 07:59
**Submitted by:** "Sarah Baptist" <sarah.baptist@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-09-04-31621/conversation
**Server:** Internal
**Client/Site:** Stockton
**Account:**
**Issue:**
I am getting an error while processes credit cards. The payment is coming back declined but it is zeroing out the balance. Please see attached images, I just want to be sure they actually cleared.

 | 1.0 | Stockton - CC SAB Error - In GitLab by @kdjstudios on Sep 5, 2018, 07:59
**Submitted by:** "Sarah Baptist" <sarah.baptist@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-09-04-31621/conversation
**Server:** Internal
**Client/Site:** Stockton
**Account:**
**Issue:**
I am getting an error while processes credit cards. The payment is coming back declined but it is zeroing out the balance. Please see attached images, I just want to be sure they actually cleared.

 | process | stockton cc sab error in gitlab by kdjstudios on sep submitted by sarah baptist helpdesk server internal client site stockton account issue i am getting an error while processes credit cards the payment is coming back declined but it is zeroing out the balance please see attached images i just want to be sure they actually cleared uploads sab error png uploads sab error cc png | 1 |
4,140 | 15,650,158,676 | IssuesEvent | 2021-03-23 08:37:23 | longhorn/longhorn | https://api.github.com/repos/longhorn/longhorn | closed | [BUG] Volume stuck in attaching state if one of the replica location can't be found on scaling up a pod. | area/instance-manager area/manager kind/bug priority/2 require/automation-e2e | **Describe the bug**
Volume stuck in attaching state if one of the replica location can't be found on scaling up a pod.
**To Reproduce**
Steps to reproduce the behavior:
1. Deploy Longhorn on a cluster of 4 nodes (1 etcd/control plane, 3 workers)
2. Mount an additional disk on node-1 and add that disk in longhorn.
3. Disable the default disk on node-1.
3. Create a volume (3 replicas), attach it to a pod, and write some data into it.
4. Detach the volume by scaling down the pod.
5. Umount the disk on the node.
5. Scale up the pod.
6. Volume stuck in attaching state.
**Expected behavior**
Volume should get attached and the replica which is not available anymore should become failed.
**Log**
```
time="2020-11-18T21:11:48Z" level=warning msg="Instance volume-test-1-r-249b003f crashed on Instance Manager instance-manager-r-85b66b6c at khushboo-test-lh-wk1, try to get log"
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:34:01Z\" level=info msg=\"Creating volume /host/data/replicas/volume-test-1-2255984c, size 10737418240/512\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:34:01Z\" level=info msg=\"Listening on data server 0.0.0.0:10001\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:34:01Z\" level=info msg=\"Listening on sync agent server 0.0.0.0:10002\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:34:01Z\" level=info msg=\"Listening on gRPC Replica server 0.0.0.0:10000\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:34:01Z\" level=info msg=\"Listening on sync 0.0.0.0:10002\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:34:02Z\" level=info msg=\"New connection from: 10.42.1.6:35972\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:34:02Z\" level=info msg=\"Opening volume /host/data/replicas/volume-test-1-2255984c, size 10737418240/512\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:34:17Z\" level=info msg=\"Reloading the revision counter before processing the first write, the current revision cache is 0, the latest revision counter in file is 0\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:41:21Z\" level=info msg=\"Closing volume\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:41:22Z\" level=warning msg=\"Received signal interrupt to shutdown\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:41:22Z\" level=warning msg=\"Starting to execute registered shutdown func github.com/longhorn/longhorn-engine/app/cmd.startReplica.func4\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T21:07:02Z\" level=info msg=\"Creating volume /host/data/replicas/volume-test-1-2255984c, size 10737418240/512\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T21:07:02Z\" level=fatal msg=\"Error running start replica command: mkdir /host/data/replicas/volume-test-1-2255984c: no such file or directory\""
time="2020-11-18T21:12:18Z" level=warning msg="Instance volume-test-1-r-249b003f is state error, error message: exit status 1"
```
instance-r-manager log:
```
longhorn-instance-manager] time="2020-11-18T21:20:18Z" level=debug msg="Process Manager: got logs for process volume-test-1-r-249b003f"
[longhorn-instance-manager] time="2020-11-18T21:20:18Z" level=debug msg="Process Manager: start getting logs for process volume-test-1-r-249b003f"
[longhorn-instance-manager] time="2020-11-18T21:20:18Z" level=debug msg="Process Manager: got logs for process volume-test-1-r-249b003f"
[longhorn-instance-manager] time="2020-11-18T21:20:48Z" level=debug msg="Process Manager: start getting logs for process volume-test-1-r-249b003f"
[longhorn-instance-manager] time="2020-11-18T21:20:48Z" level=debug msg="Process Manager: got logs for process volume-test-1-r-249b003f"
[longhorn-instance-manager] time="2020-11-18T21:20:48Z" level=debug msg="Process Manager: start getting logs for process volume-test-1-r-249b003f"
[longhorn-instance-manager] time="2020-11-18T21:20:48Z" level=debug msg="Process Manager: got logs for process volume-test-1-r-249b003f"
[longhorn-instance-manager] time="2020-11-18T21:21:02Z" level=debug msg="Process update: volume-test-1-r-249b003f: state error: Error: exit status 1"
```
[longhorn-support-bundle_f4d80f34-2207-4059-a289-f32a30f791f7_2020-11-18T21-22-48Z.zip](https://github.com/longhorn/longhorn/files/5562888/longhorn-support-bundle_f4d80f34-2207-4059-a289-f32a30f791f7_2020-11-18T21-22-48Z.zip)
**Environment:**
- Longhorn version: Longhorn-master - 11/18/2020
- Kubernetes version: v1.1.8
- Node OS type and version: ubuntu 18.04
| 1.0 | [BUG] Volume stuck in attaching state if one of the replica location can't be found on scaling up a pod. - **Describe the bug**
Volume stuck in attaching state if one of the replica location can't be found on scaling up a pod.
**To Reproduce**
Steps to reproduce the behavior:
1. Deploy Longhorn on a cluster of 4 nodes (1 etcd/control plane, 3 workers)
2. Mount an additional disk on node-1 and add that disk in longhorn.
3. Disable the default disk on node-1.
3. Create a volume (3 replicas), attach it to a pod, and write some data into it.
4. Detach the volume by scaling down the pod.
5. Umount the disk on the node.
5. Scale up the pod.
6. Volume stuck in attaching state.
**Expected behavior**
Volume should get attached and the replica which is not available anymore should become failed.
**Log**
```
time="2020-11-18T21:11:48Z" level=warning msg="Instance volume-test-1-r-249b003f crashed on Instance Manager instance-manager-r-85b66b6c at khushboo-test-lh-wk1, try to get log"
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:34:01Z\" level=info msg=\"Creating volume /host/data/replicas/volume-test-1-2255984c, size 10737418240/512\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:34:01Z\" level=info msg=\"Listening on data server 0.0.0.0:10001\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:34:01Z\" level=info msg=\"Listening on sync agent server 0.0.0.0:10002\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:34:01Z\" level=info msg=\"Listening on gRPC Replica server 0.0.0.0:10000\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:34:01Z\" level=info msg=\"Listening on sync 0.0.0.0:10002\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:34:02Z\" level=info msg=\"New connection from: 10.42.1.6:35972\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:34:02Z\" level=info msg=\"Opening volume /host/data/replicas/volume-test-1-2255984c, size 10737418240/512\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:34:17Z\" level=info msg=\"Reloading the revision counter before processing the first write, the current revision cache is 0, the latest revision counter in file is 0\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:41:21Z\" level=info msg=\"Closing volume\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:41:22Z\" level=warning msg=\"Received signal interrupt to shutdown\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T20:41:22Z\" level=warning msg=\"Starting to execute registered shutdown func github.com/longhorn/longhorn-engine/app/cmd.startReplica.func4\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T21:07:02Z\" level=info msg=\"Creating volume /host/data/replicas/volume-test-1-2255984c, size 10737418240/512\""
time="2020-11-18T21:11:48Z" level=warning msg="volume-test-1-r-249b003f: time=\"2020-11-18T21:07:02Z\" level=fatal msg=\"Error running start replica command: mkdir /host/data/replicas/volume-test-1-2255984c: no such file or directory\""
time="2020-11-18T21:12:18Z" level=warning msg="Instance volume-test-1-r-249b003f is state error, error message: exit status 1"
```
instance-r-manager log:
```
longhorn-instance-manager] time="2020-11-18T21:20:18Z" level=debug msg="Process Manager: got logs for process volume-test-1-r-249b003f"
[longhorn-instance-manager] time="2020-11-18T21:20:18Z" level=debug msg="Process Manager: start getting logs for process volume-test-1-r-249b003f"
[longhorn-instance-manager] time="2020-11-18T21:20:18Z" level=debug msg="Process Manager: got logs for process volume-test-1-r-249b003f"
[longhorn-instance-manager] time="2020-11-18T21:20:48Z" level=debug msg="Process Manager: start getting logs for process volume-test-1-r-249b003f"
[longhorn-instance-manager] time="2020-11-18T21:20:48Z" level=debug msg="Process Manager: got logs for process volume-test-1-r-249b003f"
[longhorn-instance-manager] time="2020-11-18T21:20:48Z" level=debug msg="Process Manager: start getting logs for process volume-test-1-r-249b003f"
[longhorn-instance-manager] time="2020-11-18T21:20:48Z" level=debug msg="Process Manager: got logs for process volume-test-1-r-249b003f"
[longhorn-instance-manager] time="2020-11-18T21:21:02Z" level=debug msg="Process update: volume-test-1-r-249b003f: state error: Error: exit status 1"
```
[longhorn-support-bundle_f4d80f34-2207-4059-a289-f32a30f791f7_2020-11-18T21-22-48Z.zip](https://github.com/longhorn/longhorn/files/5562888/longhorn-support-bundle_f4d80f34-2207-4059-a289-f32a30f791f7_2020-11-18T21-22-48Z.zip)
**Environment:**
- Longhorn version: Longhorn-master - 11/18/2020
- Kubernetes version: v1.1.8
- Node OS type and version: ubuntu 18.04
| non_process | volume stuck in attaching state if one of the replica location can t be found on scaling up a pod describe the bug volume stuck in attaching state if one of the replica location can t be found on scaling up a pod to reproduce steps to reproduce the behavior deploy longhorn on a cluster of nodes etcd control plane workers mount an additional disk on node and add that disk in longhorn disable the default disk on node create a volume replicas attach it to a pod and write some data into it detach the volume by scaling down the pod umount the disk on the node scale up the pod volume stuck in attaching state expected behavior volume should get attached and the replica which is not available anymore should become failed log time level warning msg instance volume test r crashed on instance manager instance manager r at khushboo test lh try to get log time level warning msg volume test r time level info msg creating volume host data replicas volume test size time level warning msg volume test r time level info msg listening on data server time level warning msg volume test r time level info msg listening on sync agent server time level warning msg volume test r time level info msg listening on grpc replica server time level warning msg volume test r time level info msg listening on sync time level warning msg volume test r time level info msg new connection from time level warning msg volume test r time level info msg opening volume host data replicas volume test size time level warning msg volume test r time level info msg reloading the revision counter before processing the first write the current revision cache is the latest revision counter in file is time level warning msg volume test r time level info msg closing volume time level warning msg volume test r time level warning msg received signal interrupt to shutdown time level warning msg volume test r time level warning msg starting to execute registered shutdown func github com longhorn longhorn 
engine app cmd startreplica time level warning msg volume test r time level info msg creating volume host data replicas volume test size time level warning msg volume test r time level fatal msg error running start replica command mkdir host data replicas volume test no such file or directory time level warning msg instance volume test r is state error error message exit status instance r manager log longhorn instance manager time level debug msg process manager got logs for process volume test r time level debug msg process manager start getting logs for process volume test r time level debug msg process manager got logs for process volume test r time level debug msg process manager start getting logs for process volume test r time level debug msg process manager got logs for process volume test r time level debug msg process manager start getting logs for process volume test r time level debug msg process manager got logs for process volume test r time level debug msg process update volume test r state error error exit status environment longhorn version longhorn master kubernetes version node os type and version ubuntu | 0 |
3,015 | 6,022,298,926 | IssuesEvent | 2017-06-07 20:44:42 | hashicorp/packer | https://api.github.com/repos/hashicorp/packer | closed | Warn users using Packer to push artifacts which create Vagrant boxes | docs enhancement post-processor/atlas | We will be deprecating vagrant.box artifacts in atlas (see https://github.com/hashicorp/packer/issues/4780), but before we do mark docs as deprecated and warn users who try to use this feature. | 1.0 | Warn users using Packer to push artifacts which create Vagrant boxes - We will be deprecating vagrant.box artifacts in atlas (see https://github.com/hashicorp/packer/issues/4780), but before we do mark docs as deprecated and warn users who try to use this feature. | process | warn users using packer to push artifacts which create vagrant boxes we will be deprecating vagrant box artifacts in atlas see but before we do mark docs as deprecated and warn users who try to use this feature | 1 |
3,010 | 6,011,027,834 | IssuesEvent | 2017-06-06 14:26:16 | coala/teams | https://api.github.com/repos/coala/teams | closed | Release Team Leader Application: Max Hahn | process/approved | # Bio
I'm Max, and I try to make masterpieces like [this](https://cloud.githubusercontent.com/assets/7521600/21007379/9af905ea-bd62-11e6-8735-fd8edcfd5c9f.png). Anyways, I've been around coala a while and generally know how things work around here. I also like putting check-marks in check-boxes so I think I would get along well with the current applicant we have to release team. (no I'm not German, it's just spelled that way)
# coala Contributions so far
>What contributions, coding or not, have you done to coala?
I've written a few patches as can be seen in my profile. I joined around the 0.9 release crunch and I started reviewing patches then. I have been regularly reviewing patches ever since. I released coala version 0.9.1 and coala-bears version 0.9.3 so I have some experience with the release process.
# Road to the Future
>How do you plan to take forward coala as a team leader/member of your team? What changes will you make happen?
I would like to automate more of the release notes/release process. One thing we have discussed already is autogenerating release notes from the issues associated with each commit. Another planned improvement is implementing the necessary things for the rultor release command to work. As part of my duties as release team leader, I plan on putting in a cEP to define release frequency and procedures.
| 1.0 | Release Team Leader Application: Max Hahn - # Bio
I'm Max, and I try to make masterpieces like [this](https://cloud.githubusercontent.com/assets/7521600/21007379/9af905ea-bd62-11e6-8735-fd8edcfd5c9f.png). Anyways, I've been around coala a while and generally know how things work around here. I also like putting check-marks in check-boxes so I think I would get along well with the current applicant we have to release team. (no I'm not German, it's just spelled that way)
# coala Contributions so far
>What contributions, coding or not, have you done to coala?
I've written a few patches as can be seen in my profile. I joined around the 0.9 release crunch and I started reviewing patches then. I have been regularly reviewing patches ever since. I released coala version 0.9.1 and coala-bears version 0.9.3 so I have some experience with the release process.
# Road to the Future
>How do you plan to take forward coala as a team leader/member of your team? What changes will you make happen?
I would like to automate more of the release notes/release process. One thing we have discussed already is autogenerating release notes from the issues associated with each commit. Another planned improvement is implementing the necessary things for the rultor release command to work. As part of my duties as release team leader, I plan on putting in a cEP to define release frequency and procedures.
| process | release team leader application max hahn bio i m max and i try to make masterpieces like anyways i ve been around coala a while and generally know how things work around here i also like putting check marks in check boxes so i think i would get along well with the current applicant we have to release team no i m not german it s just spelled that way coala contributions so far what contributions coding or not have you done to coala i ve written a few patches as can be seen in my profile i joined around the release crunch and i started reviewing patches then i have been regularly reviewing patches ever since i released coala version and coala bears version so i have some experience with the release process road to the future how do you plan to take forward coala as a team leader member of your team what changes will you make happen i would like to automate more of the release notes release process one thing we have discussed already is autogenerating release notes from the issues associated with each commit another planned improvement is implementing the necessary things for the rultor release command to work as part of my duties as release team leader i plan on putting in a cep to define release frequency and procedures | 1 |
132,713 | 12,516,068,335 | IssuesEvent | 2020-06-03 08:47:13 | vaadin/vaadin-charts-flow | https://api.github.com/repos/vaadin/vaadin-charts-flow | opened | Make RTL instructions more visible | documentation | Current instructions about RTL support are in `vaadin-charts` [jsdocs](https://github.com/vaadin/vaadin-charts/blob/master/src/vaadin-chart.html#L221-L246).
That may be a bit hard to find for Java developers, so it would be good to have it elsewhere, such as:
1) Inside Charts section at vaadin.com/docs
2) If (1), it could also be linked from the general RTL instructions for the platform documentation
3) Somewhere in javadoc (though I am not sure where it would be located).
PS. [Legend](https://github.com/vaadin/vaadin-charts-flow/blob/master/addon/src/main/java/com/vaadin/flow/component/charts/model/Legend.java) and [Tooltip](https://github.com/vaadin/vaadin-charts-flow/blob/master/addon/src/main/java/com/vaadin/flow/component/charts/model/Tooltip.java) already mention RTL, so that can be used to reflect the instructions presented at the web component documentation. | 1.0 | Make RTL instructions more visible - Current instructions about RTL support are in `vaadin-charts` [jsdocs](https://github.com/vaadin/vaadin-charts/blob/master/src/vaadin-chart.html#L221-L246).
That may be a bit hard to find for Java developers, so it would be good to have it elsewhere, such as:
1) Inside Charts section at vaadin.com/docs
2) If (1), it could also be linked from the general RTL instructions for the platform documentation
3) Somewhere in javadoc (though I am not sure where it would be located).
PS. [Legend](https://github.com/vaadin/vaadin-charts-flow/blob/master/addon/src/main/java/com/vaadin/flow/component/charts/model/Legend.java) and [Tooltip](https://github.com/vaadin/vaadin-charts-flow/blob/master/addon/src/main/java/com/vaadin/flow/component/charts/model/Tooltip.java) already mention RTL, so that can be used to reflect the instructions presented at the web component documentation. | non_process | make rtl instructions more visible current instructions about rtl support are in vaadin charts that may be a bit hard to find for java developers so it would be good to have it elsewhere such as inside charts session at vaadin com docs if it could also be linked from the general rtl instructions for the platform documentation somewhere in javadoc though i am not sure where it would be located ps and already mention rtl so that can be used to reflect the instructions presented at the web component documentation | 0 |
536,120 | 15,704,494,420 | IssuesEvent | 2021-03-26 15:03:43 | pyronear/pyro-api | https://api.github.com/repos/pyronear/pyro-api | closed | [test] Access check does not work correctly in unittests | bug help wanted high priority | While I was changing scopes on some routes, I noticed something:
- changes on the scope requirements do impact the API behaviour as expected
- however in the unittest, even if the `get_current_access` yields an access with insufficient scope, the request is still being processed
We would need to investigate and fix this to be able to check scopes in the unittests. | 1.0 | [test] Access check does not work correctly in unittests - While I was changing scopes on some routes, I noticed something:
- changes on the scope requirements do impact the API behaviour as expected
- however in the unittest, even if the `get_current_access` yields an access with insufficient scope, the request is still being processed
We would need to investigate and fix this to be able to check scopes in the unittests. | non_process | access check does not work correctly in unittests while i was changing scopes on some routes i noticed something changes on the scope requirements do impact the api behaviour expectedly however in the unittest even if the get current access yields an access with insufficient scope the request is still being processed we would need to investigate and fix this to be able to check scopes in the unittests | 0 |
13,971 | 16,744,623,251 | IssuesEvent | 2021-06-11 14:06:45 | sysflow-telemetry/sf-docs | https://api.github.com/repos/sysflow-telemetry/sf-docs | opened | Add CLUSTER_ID to contextual events exported to S3 | enhancement sf-processor | **Indicate project**
Processor
**Is your feature request related to a problem? Please describe.**
Add CLUSTER_ID to contextual events exported to S3 in the bucket path.
Example:
<bucket>/CLUSTER_ID/NODE_ID/Y/M/D/<Event Object>
| 1.0 | Add CLUSTER_ID to contextual events exported to S3 - **Indicate project**
Processor
**Is your feature request related to a problem? Please describe.**
Add CLUSTER_ID to contextual events exported to S3 in the bucket path.
Example:
<bucket>/CLUSTER_ID/NODE_ID/Y/M/D/<Event Object>
| process | add cluster id to contextual events exported to indicate project processor is your feature request related to a problem please describe add cluster id to contextual events exported to in the bucket path example cluster id node id y m d | 1 |
16,251 | 20,811,069,364 | IssuesEvent | 2022-03-18 02:46:21 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Crash in QgsProcessingAlgorithm::checkParameterValues | Feedback stale Processing Bug | ### What is the bug or the crash?
```
Crash ID: f738d6a9c29c091f6d4d59196b62907e088a2876
Stack Trace
QgsProcessingAlgorithm::checkParameterValues qgsprocessingalgorithm.cpp:103
PyInit__core :
PyArg_ParseTuple_SizeT :
PyEval_EvalFrameDefault :
PyObject_GC_Del :
PyFunction_Vectorcall :
PyEval_EvalFrameDefault :
PyObject_GC_Del :
PyFunction_Vectorcall :
PyEval_EvalFrameDefault :
PyEval_EvalFrameDefault :
PyFunction_Vectorcall :
PyFloat_FromDouble :
PyVectorcall_Call :
PyObject_Call :
PyInit__core :
QgsProcessingAlgorithm::runPrepared qgsprocessingalgorithm.cpp:533
QgsProcessingAlgRunnerTask::run qgsprocessingalgrunnertask.cpp:66
PyInit__core :
QgsTask::start qgstaskmanager.cpp:81
QThreadPoolPrivate::reset :
QThread::start :
BaseThreadInitThunk :
RtlUserThreadStart :
QGIS Info
QGIS Version: 3.22.1-Białowieża
QGIS code revision: 663dcf8fb9
Compiled against Qt: 5.15.2
Running against Qt: 5.15.2
Compiled against GDAL: 3.4.0
Running against GDAL: 3.4.0
System Info
CPU Type: x86_64
Kernel Type: winnt
Kernel Version: 10.0.18363
```
The crash seems to be on [this](https://github.com/qgis/QGIS/blob/final-3_22_1/src/core/processing/qgsprocessingalgorithm.cpp#L103) line. Provided that the stacktrace is correct, the only reason I can see for a crash on that specific line would be `def` being null (thus triggering a segfault), but can that really be the case?
### Steps to reproduce the issue
Sometimes (around one third of runs) the above crash will happen at some point in this loop in my script:
```python
for name, expression, type, length in (
('LAN', '\"lan_LANKOD\"', 2, 2),
('KOMMUN', '\"kommuner_KOM_NAMN\"', 2, 50),
('KOMMUNNR', '\"kommuner_ID\"', 2, 4),
('MEDTEMPAR', '\"lufttemperatur_med_PK51RN_ID\"', 1, 0),
('MEDTJAN', '\"lufttemperatur_jan_INTERVALL\"', 2, 12),
('MEDTJULI', '\"lufttemperatur_jul_INTERVALL\"', 2, 12),
('AVRINNING', '\"avrinning_INTERVALL\"', 2, 12),
('NEDERBORD', '\"nederbord_INTERVALL\"', 2, 12),
('TYPOMRADE', '\"bergart_typområde\"', 2, 50),
('HK', 'if(\"bergart_typområde\" IN (\'Fjällkedjan, ö HK\', \'Sedimentär berggrund, Jämtland, ö HK\', \'Sydsvenska höglandet, ö HK\', \'Urberggsomr inom Norrlandsterräng, ö HK\'), 1, 0)', 1, 0),
('GEOLOGICAL', 'if(\"bergart_typområde\" IN (\'Fjällkedjan, ö HK\', \'Mellansvenska sänkan, u HK\', \'Norrlandskustens urberg, u HK\', \'Sydsvenska höglandet, ö HK\', \'Urberggsomr inom Norrlandsterräng, ö HK\', \'Väst- och sydostkusten, u HK\'), \'Siliceous\', \'Calcareous\')', 2, 12),
('BIOREG', '\"biogeoregioner_Bioreg\"', 2, 10),
('FIRE12', '\"fire12_Region\"', 0, 0),
('LIMNEKOREG', '\"limnekoreg_ekoreg\"', 1, 0),
('LIMNVTYPREG', '\"limnvtyp_ekoreg\"', 1, 0),
('HAROID', '\"svar_haro_HARO\"', 1, 0),
('AROID', '\"svar_aro_AROID\"', 2, 13)
):
processed = processing.run("native:fieldcalculator", dict(
INPUT=processed,
FIELD_NAME=name, FORMULA=expression,
FIELD_TYPE=type, FIELD_LENGTH=length, FIELD_PRECISION=0,
OUTPUT='TEMPORARY_OUTPUT'
), context=context, feedback=feedback)["OUTPUT"]
```
Where `processed` is the `OUTPUT` of `native:joinattributesbylocation` which (also in a loop) adds all the layers used by the expressions here.
The other two thirds of times this works fine and the script either completes successfully or I hit my other bug (#46420).
### Versions
QGIS version
3.22.1-Białowieża
QGIS code revision
663dcf8fb9
Qt version
5.15.2
Python version
3.9.5
GDAL/OGR version
3.4.0
PROJ version
8.2.0
EPSG Registry database version
v10.038 (2021-10-21)
GEOS version
3.10.0-CAPI-1.16.0
SQLite version
3.35.2
PDAL version
2.3.0
PostgreSQL client version
13.0
SpatiaLite version
5.0.1
QWT version
6.1.3
QScintilla2 version
2.11.5
OS version
Windows 10 Version 1909
Active Python plugins
firstaid
2.1.5
processing_wbt
1.3.1
db_manager
0.1.20
grassprovider
2.12.99
MetaSearch
0.3.5
processing
2.12.99
sagaprovider
2.12.99
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
_No response_ | 1.0 | Crash in QgsProcessingAlgorithm::checkParameterValues - ### What is the bug or the crash?
```
Crash ID: f738d6a9c29c091f6d4d59196b62907e088a2876
Stack Trace
QgsProcessingAlgorithm::checkParameterValues qgsprocessingalgorithm.cpp:103
PyInit__core :
PyArg_ParseTuple_SizeT :
PyEval_EvalFrameDefault :
PyObject_GC_Del :
PyFunction_Vectorcall :
PyEval_EvalFrameDefault :
PyObject_GC_Del :
PyFunction_Vectorcall :
PyEval_EvalFrameDefault :
PyEval_EvalFrameDefault :
PyFunction_Vectorcall :
PyFloat_FromDouble :
PyVectorcall_Call :
PyObject_Call :
PyInit__core :
QgsProcessingAlgorithm::runPrepared qgsprocessingalgorithm.cpp:533
QgsProcessingAlgRunnerTask::run qgsprocessingalgrunnertask.cpp:66
PyInit__core :
QgsTask::start qgstaskmanager.cpp:81
QThreadPoolPrivate::reset :
QThread::start :
BaseThreadInitThunk :
RtlUserThreadStart :
QGIS Info
QGIS Version: 3.22.1-Białowieża
QGIS code revision: 663dcf8fb9
Compiled against Qt: 5.15.2
Running against Qt: 5.15.2
Compiled against GDAL: 3.4.0
Running against GDAL: 3.4.0
System Info
CPU Type: x86_64
Kernel Type: winnt
Kernel Version: 10.0.18363
```
The crash seems to be on [this](https://github.com/qgis/QGIS/blob/final-3_22_1/src/core/processing/qgsprocessingalgorithm.cpp#L103) line. Provided that the stacktrace is correct, the only reason I can see for a crash on that specific line would be `def` being null (thus triggering a segfault), but can that really be the case?
### Steps to reproduce the issue
Sometimes (around one third of runs) the above crash will happen at some point in this loop in my script:
```python
for name, expression, type, length in (
('LAN', '\"lan_LANKOD\"', 2, 2),
('KOMMUN', '\"kommuner_KOM_NAMN\"', 2, 50),
('KOMMUNNR', '\"kommuner_ID\"', 2, 4),
('MEDTEMPAR', '\"lufttemperatur_med_PK51RN_ID\"', 1, 0),
('MEDTJAN', '\"lufttemperatur_jan_INTERVALL\"', 2, 12),
('MEDTJULI', '\"lufttemperatur_jul_INTERVALL\"', 2, 12),
('AVRINNING', '\"avrinning_INTERVALL\"', 2, 12),
('NEDERBORD', '\"nederbord_INTERVALL\"', 2, 12),
('TYPOMRADE', '\"bergart_typområde\"', 2, 50),
('HK', 'if(\"bergart_typområde\" IN (\'Fjällkedjan, ö HK\', \'Sedimentär berggrund, Jämtland, ö HK\', \'Sydsvenska höglandet, ö HK\', \'Urberggsomr inom Norrlandsterräng, ö HK\'), 1, 0)', 1, 0),
('GEOLOGICAL', 'if(\"bergart_typområde\" IN (\'Fjällkedjan, ö HK\', \'Mellansvenska sänkan, u HK\', \'Norrlandskustens urberg, u HK\', \'Sydsvenska höglandet, ö HK\', \'Urberggsomr inom Norrlandsterräng, ö HK\', \'Väst- och sydostkusten, u HK\'), \'Siliceous\', \'Calcareous\')', 2, 12),
('BIOREG', '\"biogeoregioner_Bioreg\"', 2, 10),
('FIRE12', '\"fire12_Region\"', 0, 0),
('LIMNEKOREG', '\"limnekoreg_ekoreg\"', 1, 0),
('LIMNVTYPREG', '\"limnvtyp_ekoreg\"', 1, 0),
('HAROID', '\"svar_haro_HARO\"', 1, 0),
('AROID', '\"svar_aro_AROID\"', 2, 13)
):
processed = processing.run("native:fieldcalculator", dict(
INPUT=processed,
FIELD_NAME=name, FORMULA=expression,
FIELD_TYPE=type, FIELD_LENGTH=length, FIELD_PRECISION=0,
OUTPUT='TEMPORARY_OUTPUT'
), context=context, feedback=feedback)["OUTPUT"]
```
Where `processed` is the `OUTPUT` of `native:joinattributesbylocation` which (also in a loop) adds all the layers used by the expressions here.
The other two thirds of times this works fine and the script either completes successfully or I hit my other bug (#46420).
### Versions
QGIS version
3.22.1-Białowieża
QGIS code revision
663dcf8fb9
Qt version
5.15.2
Python version
3.9.5
GDAL/OGR version
3.4.0
PROJ version
8.2.0
EPSG Registry database version
v10.038 (2021-10-21)
GEOS version
3.10.0-CAPI-1.16.0
SQLite version
3.35.2
PDAL version
2.3.0
PostgreSQL client version
13.0
SpatiaLite version
5.0.1
QWT version
6.1.3
QScintilla2 version
2.11.5
OS version
Windows 10 Version 1909
Active Python plugins
firstaid
2.1.5
processing_wbt
1.3.1
db_manager
0.1.20
grassprovider
2.12.99
MetaSearch
0.3.5
processing
2.12.99
sagaprovider
2.12.99
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
_No response_ | process | crash in qgsprocessingalgorithm checkparametervalues what is the bug or the crash crash id stack trace qgsprocessingalgorithm checkparametervalues qgsprocessingalgorithm cpp pyinit core pyarg parsetuple sizet pyeval evalframedefault pyobject gc del pyfunction vectorcall pyeval evalframedefault pyobject gc del pyfunction vectorcall pyeval evalframedefault pyeval evalframedefault pyfunction vectorcall pyfloat fromdouble pyvectorcall call pyobject call pyinit core qgsprocessingalgorithm runprepared qgsprocessingalgorithm cpp qgsprocessingalgrunnertask run qgsprocessingalgrunnertask cpp pyinit core qgstask start qgstaskmanager cpp qthreadpoolprivate reset qthread start basethreadinitthunk rtluserthreadstart qgis info qgis version bia owie a qgis code revision compiled against qt running against qt compiled against gdal running against gdal system info cpu type kernel type winnt kernel version the crash seems to be on line provided that the stacktrace is correct the only reason i can see for a crash on that specific line would be def being null thus triggering a segfault but can that really be the case steps to reproduce the issue sometimes around one third of runs the above crash will happen at some point in this loop in my script python for name expression type length in lan lan lankod kommun kommuner kom namn kommunnr kommuner id medtempar lufttemperatur med id medtjan lufttemperatur jan intervall medtjuli lufttemperatur jul intervall avrinning avrinning intervall nederbord nederbord intervall typomrade bergart typområde hk if bergart typområde in fjällkedjan ö hk sedimentär berggrund jämtland ö hk sydsvenska höglandet ö hk urberggsomr inom norrlandsterräng ö hk geological if bergart typområde in fjällkedjan ö hk mellansvenska sänkan u hk norrlandskustens urberg u hk sydsvenska höglandet ö hk urberggsomr inom norrlandsterräng ö hk väst och sydostkusten u hk siliceous calcareous bioreg biogeoregioner bioreg region limnekoreg limnekoreg ekoreg 
limnvtypreg limnvtyp ekoreg haroid svar haro haro aroid svar aro aroid processed processing run native fieldcalculator dict input processed field name name formula expression field type type field length length field precision output temporary output context context feedback feedback where processed is the output of native joinattributesbylocation which also in a loop adds all the layers used by the expressions here the other two thirds of times this works fine and the script either completes successfully or i hit my other bug versions qgis version białowieża qgis code revision qt version python version gdal ogr version proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version spatialite version qwt version version os version windows version active python plugins firstaid processing wbt db manager grassprovider metasearch processing sagaprovider supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response | 1 |
14,699 | 17,872,191,551 | IssuesEvent | 2021-09-06 17:31:21 | GoogleCloudPlatform/dotnet-docs-samples | https://api.github.com/repos/GoogleCloudPlatform/dotnet-docs-samples | closed | CI: On occasion Google.Cloud* packages cannot be found. | type: process priority: p2 samples | CI sample [output](https://source.cloud.google.com/results/invocations/07e66c49-eaa9-4c8d-96ad-db51cbaa0bd8/targets/github%2Fdotnet-docs-samples%2Fasset%2Fquickstart%2FExportAssetsTest/tests).
I've seen it for several different libraries. It seems when it happens that packages are only being resolved from offline sources. Wonder if the machine loses connectivity from time to time. If that's the case, then there's not much we can do.
I'll add here sample output when I see the error again to try and find some commonality. | 1.0 | CI: On occasion Google.Cloud* packages cannot be found. - CI sample [output](https://source.cloud.google.com/results/invocations/07e66c49-eaa9-4c8d-96ad-db51cbaa0bd8/targets/github%2Fdotnet-docs-samples%2Fasset%2Fquickstart%2FExportAssetsTest/tests).
I've seen it for several different libraries. It seems when it happens that packages are only being resolved from offline sources. Wonder if the machine loses connectivity from time to time. If that's the case, then there's not much we can do.
I'll add here sample output when I see the error again to try and find some commonality. | process | ci on occasion google cloud packages cannot be found ci sample i ve seen it for several different libraries it seems when it happens that packages are only being resolved from offline sources wonder if the machine loses connectivity from time to time if that s the case then there s not much we can do i ll add here sample output when i see the error again to try and find some commonality | 1
318,517 | 23,724,745,728 | IssuesEvent | 2022-08-30 18:28:55 | deephaven/deephaven-core | https://api.github.com/repos/deephaven/deephaven-core | opened | CompilerTools.Context should be refactored to be an instance of CompilerTools | clean up core NoDocumentationNeeded | The last piece of cleaning up QueryScope/QueryLibrary/CompilerTools looks like simplifying the design of CompilerTools.
Currently the entry points for CompilerTools are static methods that all fetch the ExecutionContext instance of CompilerTools.Context. This pattern (static methods that use an instance of context) is almost the same as using member methods on the installed context.
A small, but generally good, improvement would be to rename CompilerTools to QueryCompiler and embed the context as instance variables. | 1.0 | CompilerTools.Context should be refactored to be an instance of CompilerTools - The last piece of cleaning up QueryScope/QueryLibrary/CompilerTools looks like simplifying the design of CompilerTools.
Currently the entry points for CompilerTools are static methods that all fetch the ExecutionContext instance of CompilerTools.Context. This pattern (static methods that use an instance of context) is almost the same as using member methods on the installed context.
A small, but generally good, improvement would be to rename CompilerTools to QueryCompiler and embed the context as instance variables. | non_process | compilertools context should be refactored to be an instance of compilertools the last piece of cleaning up queryscope querylibrary compilertools looks like simplifying the design of compilertools currently the entry points for compilertools are static methods that all fetch the executioncontext instance of compilertools context this pattern static methods that use an instance of context is almost the same as using member methods on the installed context a small but generally good improvement would be to rename compilertools to querycompiler and embed the context as instance variables | 0 |
186,376 | 6,735,576,224 | IssuesEvent | 2017-10-18 22:25:41 | EEA-Norway-Grants/dataviz | https://api.github.com/repos/EEA-Norway-Grants/dataviz | closed | Check if the number of queries during indexing can be reduced | Component: Search Priority: Low | e.g. fetch all related stuff for organisations when doing the query
def index_queryset might help | 1.0 | Check if the number of queries during indexing can be reduced - e.g. fetch all related stuff for organisations when doing the query
def index_queryset might help | non_process | check if the number of queries during indexing can be reduced e g fetch all related stuff for organisations when doing the query def index queryset might help | 0 |
13,455 | 15,934,780,728 | IssuesEvent | 2021-04-14 09:04:35 | prisma/e2e-tests | https://api.github.com/repos/prisma/e2e-tests | closed | Add tests for global prisma installation | kind/feature process/candidate team/client | Tests if `npm install -g prisma` succeeds and tests `prisma --version`
| 1.0 | Add tests for global prisma installation - Tests if `npm install -g prisma` succeeds and tests `prisma --version`
| process | add tests for global prisma installation tests if npm install g prisma succeeds and tests prisma version | 1 |
20,072 | 26,564,001,416 | IssuesEvent | 2023-01-20 18:19:18 | MPMG-DCC-UFMG/C01 | https://api.github.com/repos/MPMG-DCC-UFMG/C01 | closed | Handling of the `[...] Future <Future pending> attached to a different loop` exception in dynamic crawls | [1] Bug [2] Baixa Prioridade [0] Desenvolvimento [3] Processamento Dinâmico | ## Expected behavior
Exceptions should occur rarely; this one, however, always occurs when running dynamic crawls, indicating a possible point of improvement to keep the error from occurring so often.
## Current behavior
The exception is always raised when running dynamic crawls.
## Steps to reproduce the error
Follow the steps from issue #843, which allowed the error to be discovered.
Note that this exception is always raised by the `_parse_request` method of the `scrapy_puppeteer/middleware.py` module when executing the command `page = await self.browser.newPage()`.
## System
Branch `master`. | 1.0 | Handling of the `[...] Future <Future pending> attached to a different loop` exception in dynamic crawls - ## Expected behavior
Exceptions should occur rarely; this one, however, always occurs when running dynamic crawls, indicating a possible point of improvement to keep the error from occurring so often.
## Current behavior
The exception is always raised when running dynamic crawls.
## Steps to reproduce the error
Follow the steps from issue #843, which allowed the error to be discovered.
Note that this exception is always raised by the `_parse_request` method of the `scrapy_puppeteer/middleware.py` module when executing the command `page = await self.browser.newPage()`.
## System
Branch `master`. | non_process | handling of the exception future attached to a different loop in dynamic crawls expected behavior exceptions should occur rarely this one however always occurs when running dynamic crawls indicating a possible point of improvement to keep the error from occurring so often current behavior the exception is always raised when running dynamic crawls steps to reproduce the error follow the steps from issue which allowed the error to be discovered note that this exception is always raised by the parse request method of the scrapy puppeteer middleware py module when executing the command page await self browser newpage system branch master | 0
529,075 | 15,379,993,911 | IssuesEvent | 2021-03-02 20:27:05 | GideonMarsh/RL-Mega-Man | https://api.github.com/repos/GideonMarsh/RL-Mega-Man | closed | Screen capture happens at uneven intervals | Low Priority | Currently unknown if this is necessary for proper program function | 1.0 | Screen capture happens at uneven intervals - Currently unknown if this is necessary for proper program function | non_process | screen capture happens at uneven intervals currently unknown if this is necessary for proper program function | 0 |