| Unnamed: 0 | id | type | created_at | repo | repo_url | action | title | labels | body | index | text_combine | label | text | binary_label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
165,170 | 13,979,523,673 | IssuesEvent | 2020-10-27 00:20:55 | UnBArqDsw/2020.1_G12_Stock | https://api.github.com/repos/UnBArqDsw/2020.1_G12_Stock | opened | Sprint 9 Weekly Meeting | documentation | ## Description
Documentation of the sprint 8 results and planning for sprint 9.
| 1.0 | Sprint 9 Weekly Meeting - ## Description
Documentation of the sprint 8 results and planning for sprint 9.
| non_infrastructure | weekly meeting sprint description documentation of the sprint results and sprint planning | 0 |
784,333 | 27,567,064,079 | IssuesEvent | 2023-03-08 05:20:17 | phyloref/klados | https://api.github.com/repos/phyloref/klados | closed | Add additional example files | priority: high | In PR #230, I removed the following example files:
* fisher_et_al_2007.json (from https://doi.org/10.1639/0007-2745%282007%29110%5B46%3APOTCWA%5D2.0.CO%3B2)
- Includes an apomorphy-based definition
* hillis_and_wilcox_2005.json (from https://doi.org/10.1016/j.ympev.2004.10.007)
- Includes specimen-based definitions
* Most of the phyloreferences from Brochu 2003, replacing it with the minimal version used in the phyx.js tests.
This leaves us with a single small example file, Brochu 2003. This issue tracks us adding additional example files (probably from the Clade Ontology) to Klados. Ideally, they should demonstrate specimen identifiers (like Fisher et al did) or apomorphy-based phyloreferences. Using files from the Clade Ontology will almost certainly be easier than attempting to convert these v0.2.0 files to v1.0.0 Phyloref files.
- [ ] Also check whether phyloreferences with specimens and external references as specifiers can be exported correctly as CSV.
Could be part of the tutorial (#227). | 1.0 | Add additional example files - In PR #230, I removed the following example files:
* fisher_et_al_2007.json (from https://doi.org/10.1639/0007-2745%282007%29110%5B46%3APOTCWA%5D2.0.CO%3B2)
- Includes an apomorphy-based definition
* hillis_and_wilcox_2005.json (from https://doi.org/10.1016/j.ympev.2004.10.007)
- Includes specimen-based definitions
* Most of the phyloreferences from Brochu 2003, replacing it with the minimal version used in the phyx.js tests.
This leaves us with a single small example file, Brochu 2003. This issue tracks us adding additional example files (probably from the Clade Ontology) to Klados. Ideally, they should demonstrate specimen identifiers (like Fisher et al did) or apomorphy-based phyloreferences. Using files from the Clade Ontology will almost certainly be easier than attempting to convert these v0.2.0 files to v1.0.0 Phyloref files.
- [ ] Also check whether phyloreferences with specimens and external references as specifiers can be exported correctly as CSV.
Could be part of the tutorial (#227). | non_infrastructure | add additional example files in pr i removed the following example files fisher et al json from includes an apomorphy based definition hillis and wilcox json from includes specimen based definitions most of the phyloreferences from brochu replacing it with the minimal version used in the phyx js tests this leaves us with a single small example file brochu this issue tracks us adding additional example files probably from the clade ontology to klados ideally they should demonstrate specimen identifiers like fisher et al did or apomorphy based phyloreferences using files from the clade ontology will almost certainly be easier than attempting to convert these files to phyloref files also check whether phyloreferences with specimens and external references as specifiers can be exported correctly as csv could be part of the tutorial | 0 |
10,775 | 8,717,409,410 | IssuesEvent | 2018-12-07 17:03:11 | camdram/camdram | https://api.github.com/repos/camdram/camdram | closed | Branch builds are created with a readonly database | infrastructure | `/var/www/camdram/dev/multiple-socs/shared/app/data/orm.db` was created as mode `644` and ownership `deploy:camdram`. This means it's readonly to `camdram` and so no changes can be made to the database by the website, which means the site https://multiple-socs.dev.camdram.net/ doesn't work.
(I've manually changed this specific file to `664` and it works fine, but the deployer script needs updating.) | 1.0 | Branch builds are created with a readonly database - `/var/www/camdram/dev/multiple-socs/shared/app/data/orm.db` was created as mode `644` and ownership `deploy:camdram`. This means it's readonly to `camdram` and so no changes can be made to the database by the website, which means the site https://multiple-socs.dev.camdram.net/ doesn't work.
(I've manually changed this specific file to `664` and it works fine, but the deployer script needs updating.) | infrastructure | branch builds are created with a readonly database var www camdram dev multiple socs shared app data orm db was created as mode and ownership deploy camdram this means it s readonly to camdram and so no changes can be made to the database by the website which means the site doesn t work i ve manually changed this specific file to and it works fine but the deployer script needs updating | 1 |
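The manual `chmod` above can be sketched as a deployer-side step, assuming the deployer only needs to add group write permission after creating the shared database file (the helper name below is hypothetical; the `664` mode is the one reported to work):

```python
import os
import stat

def make_group_writable(path):
    """Add group write permission (e.g. 644 -> 664) so that the web
    user, via membership in the owning group, can modify the file
    created by the deploy user."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    os.chmod(path, mode | stat.S_IWGRP)
```

In the deployer this would run once against the shared `orm.db` right after the file is created.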
26,076 | 19,634,859,310 | IssuesEvent | 2022-01-08 04:37:21 | dealii/dealii | https://api.github.com/repos/dealii/dealii | closed | GDB Pretty Printing of Vector depends on deprecated member | Infrastructure | I have recently tried to use the pretty printer functions for gdb supplied in contrib/utilities/dotgdbinit.py to print the values
of a Vector object. This fails with the error message:
````
Python Exception <class 'gdb.error'> There is no member or method name data_end.:
````
This problem first occurred after upgrading from version 9.2 to 9.3.
Other pretty printer functions (for example for tensors and points) work as expected.
After some digging I've found that gdb tries to use the members data_end and data_begin of the AlignedVector class
to calculate the length of the vector and to iterate over the container.
Both members were present in deal.II 9.2 but seem to have been deprecated in version 9.3.
As such the pretty printer function no longer works.
In order to reproduce the problem install the pretty printer functions as described [here](https://www.dealii.org/9.2.0/users/gdb.html).
Then debug the step-3 tutorial using gdb and try to print the system_rhs vector. | 1.0 | GDB Pretty Printing of Vector depends on deprecated member - I have recently tried to use the pretty printer functions for gdb supplied in contrib/utilities/dotgdbinit.py to print the values
of a Vector object. This fails with the error message:
````
Python Exception <class 'gdb.error'> There is no member or method name data_end.:
````
This problem first occurred after upgrading from version 9.2 to 9.3.
Other pretty printer functions (for example for tensors and points) work as expected.
After some digging I've found that gdb tries to use the members data_end and data_begin of the AlignedVector class
to calculate the length of the vector and to iterate over the container.
Both members were present in deal.II 9.2 but seem to have been deprecated in version 9.3.
As such the pretty printer function no longer works.
In order to reproduce the problem install the pretty printer functions as described [here](https://www.dealii.org/9.2.0/users/gdb.html).
Then debug the step-3 tutorial using gdb and try to print the system_rhs vector. | infrastructure | gdb pretty printing of vector depends on deprecated member i have recently tried to use the pretty printer functions for gdb supplied in contrib utilities dotgdbinit py to print the values of a vector object this fails with the error message python exception there is no member or method name data end this problem first occurred after upgrading from version to other pretty printer functions for example for tensors and points function as expected after some digging i ve found that gdb tries to use the members data end and data begin of the alignedvector class to calculate the length of the vector and to iterate over the container both members were present in deal but seem to have been deprecated in version as such the pretty printer function no longer works in order to reproduce the problem install the pretty printer functions as described then debug the step tutorial using gdb and try to print the system rhs vector | infrastructure | 1 |
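One way the dotgdbinit.py printer could tolerate both layouts is to probe candidate member pairs instead of hard-coding `data_begin`/`data_end`. A minimal sketch: the newer member names tried first here are assumptions and would need checking against the 9.3 AlignedVector source; only the deprecated pair comes from the report.

```python
def _field(val, name):
    """Look up a struct field, returning None when it is absent.
    With a real gdb.Value an unknown field raises gdb.error, which
    is swallowed like any other exception here."""
    try:
        return val[name]
    except Exception:
        return None

def aligned_vector_bounds(val):
    """Return the (begin, end) members of an AlignedVector-like
    value, preferring assumed newer names and falling back to the
    pre-9.3 data_begin/data_end pair named in the report."""
    for begin, end in (("elements", "used_elements_end"),   # assumed 9.3 names
                       ("data_begin", "data_end")):         # 9.2 names
        b, e = _field(val, begin), _field(val, end)
        if b is not None and e is not None:
            return b, e
    raise RuntimeError("AlignedVector layout not recognised")
```

Whichever pair is found, the printer would then compute the element count as the pointer difference `end - begin`, as it did in 9.2.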
12,775 | 9,941,228,514 | IssuesEvent | 2019-07-03 11:03:14 | elastic/beats | https://api.github.com/repos/elastic/beats | opened | Filebeat apache module doesn't support timezones | :infrastructure Filebeat bug module | The Apache module is not timezone-aware, so dates are read as UTC.
We should update it to include the `add_locale` processor and take the local timezone into account when ingesting them. The similar NGINX module is already doing it with:
https://github.com/elastic/beats/blob/983564f9e0e5a8a92d04d736c35780938ed47c08/filebeat/module/nginx/access/config/nginx-access.yml#L8-L9
https://github.com/elastic/beats/blob/983564f9e0e5a8a92d04d736c35780938ed47c08/filebeat/module/nginx/access/ingest/default.json#L96-L102 | 1.0 | Filebeat apache module doesn't support timezones - The Apache module is not timezone-aware, so dates are read as UTC.
We should update it to include the `add_locale` processor and take the local timezone into account when ingesting them. The similar NGINX module is already doing it with:
https://github.com/elastic/beats/blob/983564f9e0e5a8a92d04d736c35780938ed47c08/filebeat/module/nginx/access/config/nginx-access.yml#L8-L9
https://github.com/elastic/beats/blob/983564f9e0e5a8a92d04d736c35780938ed47c08/filebeat/module/nginx/access/ingest/default.json#L96-L102 | infrastructure | filebeat apache module doesn t support timezones apache module is not timezone aware so dates are read as utc we should update it to include the add locale processor and take the local timezone into account when ingesting them similar nginx module is already doing it with | 1 |
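Mirroring the first NGINX line linked in the issue, the apache module's input config would gain the `add_locale` processor; a sketch (the apache-specific path is an assumption, not verified against the module source):

```yaml
# filebeat/module/apache/access/config/... (path assumed), after the NGINX pattern
processors:
  - add_locale: ~
```

The module's ingest pipeline `date` processor would then pass the captured zone through, e.g. `"timezone": "{{ event.timezone }}"`, as the NGINX pipeline does in the second link.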
222,054 | 17,390,082,794 | IssuesEvent | 2021-08-02 05:52:03 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | com.hazelcast.partition.PartitionDistributionTest.testTenNodes_1111Partitions | Module: Partitioning Source: Internal Team: Core Type: Test-Failure | _4.2.z_ (commit 1e1a816a1fb4af2a8b9377a7751949bf0488e267)
Failed on Sonar build (Oracle JDK 11): http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-4.maintenance-sonar/832/testReport/com.hazelcast.partition/PartitionDistributionTest/testTenNodes_1111Partitions/
Stacktrace:
```
java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:87)
at org.junit.Assert.assertTrue(Assert.java:42)
at org.junit.Assert.assertNotNull(Assert.java:713)
at org.junit.Assert.assertNotNull(Assert.java:723)
at com.hazelcast.partition.PartitionDistributionTest.testPartitionDistribution(PartitionDistributionTest.java:200)
at com.hazelcast.partition.PartitionDistributionTest.testPartitionDistribution(PartitionDistributionTest.java:156)
at com.hazelcast.partition.PartitionDistributionTest.testTenNodes_1111Partitions(PartitionDistributionTest.java:142)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:115)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:107)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:834)
```
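The `assertNotNull` at PartitionDistributionTest.java:200 suggests the test sampled a partition that had no owner yet. A rough Python sketch of the property being asserted follows; the test itself is Java, and the names below are illustrative only:

```python
def check_distribution(owners):
    """`owners` maps partition id -> owning member (None while a
    rebalance is still in flight).  Every partition must have an
    owner; a None owner is what would surface as the assertNotNull
    failure.  Returns the per-member partition counts."""
    unowned = [pid for pid, owner in owners.items() if owner is None]
    assert not unowned, "partitions without an owner: %s" % unowned
    counts = {}
    for owner in owners.values():
        counts[owner] = counts.get(owner, 0) + 1
    return counts
```

With 1111 partitions over ten members, a balanced table gives nine members 111 partitions each and one member 112.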
Standard output:
```
05:15:56,927 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [LOCAL] [dev] [4.2.1-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:15:56,927 INFO |testTenNodes_1111Partitions| - [system] testTenNodes_1111Partitions - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Hazelcast 4.2.1-SNAPSHOT (20210603 - 1e1a816) starting at [127.0.0.1]:5701
05:15:56,928 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:15:56,931 WARN |testTenNodes_1111Partitions| - [CPSubsystem] testTenNodes_1111Partitions - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:15:56,937 INFO |testTenNodes_1111Partitions| - [Diagnostics] testTenNodes_1111Partitions - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:15:56,937 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5701 is STARTING
05:15:56,937 INFO |testTenNodes_1111Partitions| - [ClusterService] testTenNodes_1111Partitions - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT]
Members {size:1, ver:1} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e this
]
05:15:56,937 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5701 is STARTED
05:15:56,937 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [LOCAL] [dev] [4.2.1-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:15:56,937 INFO |testTenNodes_1111Partitions| - [system] testTenNodes_1111Partitions - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Hazelcast 4.2.1-SNAPSHOT (20210603 - 1e1a816) starting at [127.0.0.1]:5702
05:15:56,938 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:15:56,940 WARN |testTenNodes_1111Partitions| - [CPSubsystem] testTenNodes_1111Partitions - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:15:56,946 INFO |testTenNodes_1111Partitions| - [Diagnostics] testTenNodes_1111Partitions - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:15:56,946 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5702 is STARTING
05:15:56,946 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5701, alive=true}
05:15:56,947 INFO |testTenNodes_1111Partitions| - [MockServer] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5702, alive=true}
05:15:56,947 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT]
Members {size:2, ver:2} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e this
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
]
05:15:57,047 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.clever_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT]
Members {size:2, ver:2} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31 this
]
05:15:57,447 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5702 is STARTED
05:15:57,447 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [LOCAL] [dev] [4.2.1-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:15:57,448 INFO |testTenNodes_1111Partitions| - [system] testTenNodes_1111Partitions - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Hazelcast 4.2.1-SNAPSHOT (20210603 - 1e1a816) starting at [127.0.0.1]:5703
05:15:57,449 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:15:57,452 WARN |testTenNodes_1111Partitions| - [CPSubsystem] testTenNodes_1111Partitions - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:15:57,459 INFO |testTenNodes_1111Partitions| - [Diagnostics] testTenNodes_1111Partitions - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:15:57,459 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5703 is STARTING
05:15:57,459 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5701, alive=true}
05:15:57,460 INFO |testTenNodes_1111Partitions| - [MockServer] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5703, alive=true}
05:15:57,460 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT]
Members {size:3, ver:3} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e this
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
]
05:15:57,461 INFO |testTenNodes_1111Partitions| - [MockServer] hz.clever_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5703, alive=true}
05:15:57,461 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.clever_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT]
Members {size:3, ver:3} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31 this
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
]
05:15:57,561 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.musing_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT]
Members {size:3, ver:3} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1 this
]
05:15:57,960 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5702, alive=true}
05:15:57,960 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5703 is STARTED
05:15:57,960 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [LOCAL] [dev] [4.2.1-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:15:57,961 INFO |testTenNodes_1111Partitions| - [system] testTenNodes_1111Partitions - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Hazelcast 4.2.1-SNAPSHOT (20210603 - 1e1a816) starting at [127.0.0.1]:5704
05:15:57,962 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:15:57,965 WARN |testTenNodes_1111Partitions| - [CPSubsystem] testTenNodes_1111Partitions - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:15:57,971 INFO |testTenNodes_1111Partitions| - [Diagnostics] testTenNodes_1111Partitions - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:15:57,971 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5704 is STARTING
05:15:57,972 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5701, alive=true}
05:15:57,972 INFO |testTenNodes_1111Partitions| - [MockServer] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5704, alive=true}
05:15:57,973 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT]
Members {size:4, ver:4} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e this
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
]
05:15:57,973 INFO |testTenNodes_1111Partitions| - [MockServer] hz.clever_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5704, alive=true}
05:15:57,973 INFO |testTenNodes_1111Partitions| - [MockServer] hz.musing_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5704, alive=true}
05:15:57,973 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.clever_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT]
Members {size:4, ver:4} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31 this
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
]
05:15:57,974 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.musing_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT]
Members {size:4, ver:4} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1 this
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
]
05:15:58,073 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.distracted_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT]
Members {size:4, ver:4} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac this
]
05:15:58,472 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5702, alive=true}
05:15:58,472 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5703, alive=true}
05:15:58,472 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5704 is STARTED
05:15:58,473 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [LOCAL] [dev] [4.2.1-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:15:58,473 INFO |testTenNodes_1111Partitions| - [system] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Hazelcast 4.2.1-SNAPSHOT (20210603 - 1e1a816) starting at [127.0.0.1]:5705
05:15:58,473 INFO |testTenNodes_1111Partitions| - [HealthMonitor] hz.distracted_hypatia.HealthMonitor - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] processors=8, physical.memory.total=377.6G, physical.memory.free=112.9G, swap.space.total=4.0G, swap.space.free=2.7G, heap.memory.used=1.4G, heap.memory.free=601.2M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=70.63%, heap.memory.used/max=70.63%, minor.gc.count=8618, minor.gc.time=79659ms, major.gc.count=5, major.gc.time=1280ms, load.process=2.25%, load.system=21.36%, load.systemAverage=9.55, thread.count=561, thread.peakCount=2580, cluster.timeDiff=-1, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=17, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
05:15:58,474 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:15:58,477 WARN |testTenNodes_1111Partitions| - [CPSubsystem] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:15:58,483 INFO |testTenNodes_1111Partitions| - [Diagnostics] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:15:58,483 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5705 is STARTING
05:15:58,483 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5701, alive=true}
05:15:58,483 INFO |testTenNodes_1111Partitions| - [MockServer] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5705, alive=true}
05:15:58,484 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT]
Members {size:5, ver:5} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e this
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
]
05:15:58,485 INFO |testTenNodes_1111Partitions| - [MockServer] hz.clever_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5705, alive=true}
05:15:58,485 INFO |testTenNodes_1111Partitions| - [MockServer] hz.distracted_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5705, alive=true}
05:15:58,485 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.clever_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT]
Members {size:5, ver:5} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31 this
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
]
05:15:58,485 INFO |testTenNodes_1111Partitions| - [MockServer] hz.musing_hypatia.generic-operation.thread-1 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5705, alive=true}
05:15:58,485 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.distracted_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT]
Members {size:5, ver:5} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac this
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
]
05:15:58,485 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.musing_hypatia.generic-operation.thread-1 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT]
Members {size:5, ver:5} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1 this
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
]
05:15:58,584 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.modest_hypatia.generic-operation.thread-1 - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT]
Members {size:5, ver:5} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a this
]
05:15:58,983 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5702, alive=true}
05:15:58,984 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5703, alive=true}
05:15:58,984 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5704, alive=true}
05:15:58,984 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5705 is STARTED
05:15:58,984 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [LOCAL] [dev] [4.2.1-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:15:58,984 INFO |testTenNodes_1111Partitions| - [system] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Hazelcast 4.2.1-SNAPSHOT (20210603 - 1e1a816) starting at [127.0.0.1]:5706
05:15:58,985 INFO |testTenNodes_1111Partitions| - [HealthMonitor] hz.modest_hypatia.HealthMonitor - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] processors=8, physical.memory.total=377.6G, physical.memory.free=112.9G, swap.space.total=4.0G, swap.space.free=2.7G, heap.memory.used=1.5G, heap.memory.free=531.4M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=74.04%, heap.memory.used/max=74.04%, minor.gc.count=8618, minor.gc.time=79659ms, major.gc.count=5, major.gc.time=1280ms, load.process=2.74%, load.system=23.91%, load.systemAverage=9.55, thread.count=603, thread.peakCount=2580, cluster.timeDiff=-1, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=18, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
05:15:58,985 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:15:58,988 WARN |testTenNodes_1111Partitions| - [CPSubsystem] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:15:58,994 INFO |testTenNodes_1111Partitions| - [Diagnostics] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:15:58,994 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5706 is STARTING
05:15:58,995 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5706, remoteEndpoint=[127.0.0.1]:5701, alive=true}
05:15:58,995 INFO |testTenNodes_1111Partitions| - [MockServer] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5706, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5706, alive=true}
05:15:58,995 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT]
Members {size:6, ver:6} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e this
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
]
05:15:58,997 INFO |testTenNodes_1111Partitions| - [MockServer] hz.clever_hypatia.generic-operation.thread-0 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5706, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5706, alive=true}
05:15:58,997 INFO |testTenNodes_1111Partitions| - [MockServer] hz.musing_hypatia.generic-operation.thread-1 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5706, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5706, alive=true}
05:15:58,997 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.clever_hypatia.generic-operation.thread-0 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT]
Members {size:6, ver:6} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31 this
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
]
05:15:58,997 INFO |testTenNodes_1111Partitions| - [MockServer] hz.distracted_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5706, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5706, alive=true}
05:15:58,997 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.musing_hypatia.generic-operation.thread-1 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT]
Members {size:6, ver:6} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1 this
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
]
05:15:58,997 INFO |testTenNodes_1111Partitions| - [MockServer] hz.modest_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5706, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5706, alive=true}
05:15:58,997 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.distracted_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT]
Members {size:6, ver:6} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac this
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
]
05:15:58,997 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.modest_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT]
Members {size:6, ver:6} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a this
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
]
05:15:59,096 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.upbeat_hypatia.generic-operation.thread-0 - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT]
Members {size:6, ver:6} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79 this
]
05:15:59,495 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5706, remoteEndpoint=[127.0.0.1]:5702, alive=true}
05:15:59,495 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5706, remoteEndpoint=[127.0.0.1]:5703, alive=true}
05:15:59,495 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5706, remoteEndpoint=[127.0.0.1]:5704, alive=true}
05:15:59,495 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5706, remoteEndpoint=[127.0.0.1]:5705, alive=true}
05:15:59,495 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5706 is STARTED
05:15:59,496 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [LOCAL] [dev] [4.2.1-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:15:59,496 INFO |testTenNodes_1111Partitions| - [system] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] Hazelcast 4.2.1-SNAPSHOT (20210603 - 1e1a816) starting at [127.0.0.1]:5707
05:15:59,496 INFO |testTenNodes_1111Partitions| - [HealthMonitor] hz.upbeat_hypatia.HealthMonitor - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] processors=8, physical.memory.total=377.6G, physical.memory.free=112.9G, swap.space.total=4.0G, swap.space.free=2.7G, heap.memory.used=1.5G, heap.memory.free=461.7M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=77.45%, heap.memory.used/max=77.45%, minor.gc.count=8618, minor.gc.time=79659ms, major.gc.count=5, major.gc.time=1280ms, load.process=2.50%, load.system=9.68%, load.systemAverage=9.55, thread.count=644, thread.peakCount=2580, cluster.timeDiff=-1, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=19, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
05:15:59,497 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:15:59,500 WARN |testTenNodes_1111Partitions| - [CPSubsystem] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:15:59,506 INFO |testTenNodes_1111Partitions| - [Diagnostics] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:15:59,506 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5707 is STARTING
05:15:59,506 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5707, remoteEndpoint=[127.0.0.1]:5701, alive=true}
05:15:59,507 INFO |testTenNodes_1111Partitions| - [MockServer] hz.ecstatic_hypatia.generic-operation.thread-2 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5707, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5707, alive=true}
05:15:59,507 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.ecstatic_hypatia.generic-operation.thread-2 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT]
Members {size:7, ver:7} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e this
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
]
05:15:59,508 INFO |testTenNodes_1111Partitions| - [MockServer] hz.clever_hypatia.generic-operation.thread-3 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5707, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5707, alive=true}
05:15:59,508 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.clever_hypatia.generic-operation.thread-3 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT]
Members {size:7, ver:7} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31 this
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
]
05:15:59,509 INFO |testTenNodes_1111Partitions| - [MockServer] hz.musing_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5707, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5707, alive=true}
05:15:59,509 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.musing_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT]
Members {size:7, ver:7} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1 this
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
]
05:15:59,509 INFO |testTenNodes_1111Partitions| - [MockServer] hz.distracted_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5707, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5707, alive=true}
05:15:59,509 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.distracted_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT]
Members {size:7, ver:7} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac this
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
]
05:15:59,509 INFO |testTenNodes_1111Partitions| - [MockServer] hz.modest_hypatia.generic-operation.thread-1 - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5707, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5707, alive=true}
05:15:59,509 INFO |testTenNodes_1111Partitions| - [MockServer] hz.upbeat_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5707, connection: MockConnection{localEndpoint=[127.0.0.1]:5706, remoteEndpoint=[127.0.0.1]:5707, alive=true}
05:15:59,510 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.modest_hypatia.generic-operation.thread-1 - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT]
Members {size:7, ver:7} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a this
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
]
05:15:59,510 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.upbeat_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT]
Members {size:7, ver:7} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79 this
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
]
05:15:59,608 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.kind_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT]
Members {size:7, ver:7} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6 this
]
05:16:00,007 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5707, remoteEndpoint=[127.0.0.1]:5702, alive=true}
05:16:00,007 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5707, remoteEndpoint=[127.0.0.1]:5703, alive=true}
05:16:00,007 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5707, remoteEndpoint=[127.0.0.1]:5704, alive=true}
05:16:00,007 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5707, remoteEndpoint=[127.0.0.1]:5705, alive=true}
05:16:00,007 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5706, connection: MockConnection{localEndpoint=[127.0.0.1]:5707, remoteEndpoint=[127.0.0.1]:5706, alive=true}
05:16:00,007 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5707 is STARTED
05:16:00,007 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [LOCAL] [dev] [4.2.1-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:16:00,007 INFO |testTenNodes_1111Partitions| - [system] testTenNodes_1111Partitions - [127.0.0.1]:5708 [dev] [4.2.1-SNAPSHOT] Hazelcast 4.2.1-SNAPSHOT (20210603 - 1e1a816) starting at [127.0.0.1]:5708
05:16:00,008 INFO |testTenNodes_1111Partitions| - [HealthMonitor] hz.kind_hypatia.HealthMonitor - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] processors=8, physical.memory.total=377.6G, physical.memory.free=112.9G, swap.space.total=4.0G, swap.space.free=2.7G, heap.memory.used=1.6G, heap.memory.free=390.7M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=80.91%, heap.memory.used/max=80.91%, minor.gc.count=8618, minor.gc.time=79659ms, major.gc.count=5, major.gc.time=1280ms, load.process=2.56%, load.system=9.89%, load.systemAverage=9.55, thread.count=685, thread.peakCount=2580, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=20, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
05:16:00,008 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [127.0.0.1]:5708 [dev] [4.2.1-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:16:00,011 WARN |testTenNodes_1111Partitions| - [CPSubsystem] testTenNodes_1111Partitions - [127.0.0.1]:5708 [dev] [4.2.1-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:16:00,018 INFO |testTenNodes_1111Partitions| - [Diagnostics] testTenNodes_1111Partitions - [127.0.0.1]:5708 [dev] [4.2.1-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:16:00,018 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5708 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5708 is STARTING
05:16:00,019 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5708 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5708, remoteEndpoint=[127.0.0.1]:5701, alive=true}
05:16:00,019 INFO |testTenNodes_1111Partitions| - [MockServer] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5708, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5708, alive=true}
05:16:00,020 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT]
Members {size:8, ver:8} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e this
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
Member [127.0.0.1]:5708 - 1c4b918a-fdab-4c39-a07e-fa8cc3c133d6
]
05:16:00,022 INFO |testTenNodes_1111Partitions| - [MockServer] hz.clever_hypatia.generic-operation.thread-1 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5708, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5708, alive=true}
05:16:00,022 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.clever_hypatia.generic-operation.thread-1 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT]
Members {size:8, ver:8} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31 this
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
Member [127.0.0.1]:5708 - 1c4b918a-fdab-4c39-a07e-fa8cc3c133d6
]
05:16:00,023 INFO |testTenNodes_1111Partitions| - [MockServer] hz.musing_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5708, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5708, alive=true}
05:16:00,023 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.musing_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT]
Members {size:8, ver:8} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1 this
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
Member [127.0.0.1]:5708 - 1c4b918a-fdab-4c39-a07e-fa8cc3c133d6
]
05:16:00,024 INFO |testTenNodes_1111Partitions| - [MockServer] hz.distracted_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5708, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5708, alive=true}
05:16:00,024 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.distracted_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT]
Members {size:8, ver:8} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac this
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
Member [127.0.0.1]:5708 - 1c4b918a-fdab-4c39-a07e-fa8cc3c133d6
]
05:16:00,024 INFO |testTenNodes_1111Partitions| - [MockServer] hz.modest_hypatia.generic-operation.thread-1 - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5708, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5708, alive=true}
05:16:00,025 INFO |testTenNodes_1111Partitions| - [MockServer] hz.upbeat_hypatia.generic-operation.thread-1 - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5708, connection: MockConnection{localEndpoint=[127.0.0.1]:5706, remoteEndpoint=[127.0.0.1]:5708, alive=true}
05:16:00,025 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.modest_hypatia.generic-operation.thread-1 - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT]
Members {size:8, ver:8} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.
...[truncated 16839708 chars]...
onsCount]=61
[ecstatic_hypatia] [05:17:46,944] [partitionId=476,unit=count,metric=operation.partition.executedOperationsCount]=62
[ecstatic_hypatia] [05:17:46,944] [partitionId=454,unit=count,metric=operation.partition.executedOperationsCount]=68
[ecstatic_hypatia] [05:17:46,944] [partitionId=432,unit=count,metric=operation.partition.executedOperationsCount]=66
[ecstatic_hypatia] [05:17:46,944] [partitionId=410,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=399,unit=count,metric=operation.partition.executedOperationsCount]=69
[ecstatic_hypatia] [05:17:46,944] [partitionId=311,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=333,unit=count,metric=operation.partition.executedOperationsCount]=74
[ecstatic_hypatia] [05:17:46,944] [partitionId=355,unit=count,metric=operation.partition.executedOperationsCount]=73
[ecstatic_hypatia] [05:17:46,944] [partitionId=377,unit=count,metric=operation.partition.executedOperationsCount]=75
[ecstatic_hypatia] [05:17:46,944] [partitionId=278,unit=count,metric=operation.partition.executedOperationsCount]=78
[ecstatic_hypatia] [05:17:46,944] [partitionId=256,unit=count,metric=operation.partition.executedOperationsCount]=77
[ecstatic_hypatia] [05:17:46,944] [partitionId=234,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=212,unit=count,metric=operation.partition.executedOperationsCount]=81
[ecstatic_hypatia] [05:17:46,944] [partitionId=113,unit=count,metric=operation.partition.executedOperationsCount]=84
[ecstatic_hypatia] [05:17:46,944] [partitionId=135,unit=count,metric=operation.partition.executedOperationsCount]=85
[ecstatic_hypatia] [05:17:46,944] [partitionId=157,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=179,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-6,unit=count,metric=operation.thread.priorityPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=751,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=773,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=795,unit=count,metric=operation.partition.executedOperationsCount]=46
[ecstatic_hypatia] [05:17:46,944] [partitionId=850,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=872,unit=count,metric=operation.partition.executedOperationsCount]=27
[ecstatic_hypatia] [05:17:46,944] [partitionId=894,unit=count,metric=operation.partition.executedOperationsCount]=30
[ecstatic_hypatia] [05:17:46,944] [partitionId=531,unit=count,metric=operation.partition.executedOperationsCount]=69
[ecstatic_hypatia] [05:17:46,944] [partitionId=553,unit=count,metric=operation.partition.executedOperationsCount]=60
[ecstatic_hypatia] [05:17:46,944] [partitionId=575,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=597,unit=count,metric=operation.partition.executedOperationsCount]=54
[ecstatic_hypatia] [05:17:46,944] [partitionId=630,unit=count,metric=operation.partition.executedOperationsCount]=51
[ecstatic_hypatia] [05:17:46,944] [partitionId=652,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=674,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=696,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-0,unit=count,metric=operation.thread.normalPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=971,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=993,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-5,unit=count,metric=operation.thread.completedTotalCount]=6676
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=operation.responses.missingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=499,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=477,unit=count,metric=operation.partition.executedOperationsCount]=64
[ecstatic_hypatia] [05:17:46,944] [partitionId=455,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=433,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=411,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=312,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=334,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=356,unit=count,metric=operation.partition.executedOperationsCount]=70
[ecstatic_hypatia] [05:17:46,944] [partitionId=378,unit=count,metric=operation.partition.executedOperationsCount]=71
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=raft.metadata.groups]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=279,unit=count,metric=operation.partition.executedOperationsCount]=75
[ecstatic_hypatia] [05:17:46,944] [partitionId=257,unit=count,metric=operation.partition.executedOperationsCount]=79
[ecstatic_hypatia] [05:17:46,944] [partitionId=235,unit=count,metric=operation.partition.executedOperationsCount]=77
[ecstatic_hypatia] [05:17:46,944] [partitionId=114,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=136,unit=count,metric=operation.partition.executedOperationsCount]=85
[ecstatic_hypatia] [05:17:46,944] [partitionId=158,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=213,unit=count,metric=operation.partition.executedOperationsCount]=79
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-5,unit=count,metric=operation.thread.priorityPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=730,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=752,unit=count,metric=operation.partition.executedOperationsCount]=49
[ecstatic_hypatia] [05:17:46,944] [partitionId=774,unit=count,metric=operation.partition.executedOperationsCount]=45
[ecstatic_hypatia] [05:17:46,944] [partitionId=796,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=851,unit=count,metric=operation.partition.executedOperationsCount]=30
[ecstatic_hypatia] [05:17:46,944] [partitionId=873,unit=count,metric=operation.partition.executedOperationsCount]=34
[ecstatic_hypatia] [05:17:46,944] [partitionId=895,unit=count,metric=operation.partition.executedOperationsCount]=38
[ecstatic_hypatia] [05:17:46,944] [partitionId=532,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=510,unit=count,metric=operation.partition.executedOperationsCount]=65
[ecstatic_hypatia] [05:17:46,944] [partitionId=554,unit=count,metric=operation.partition.executedOperationsCount]=56
[ecstatic_hypatia] [05:17:46,944] [partitionId=576,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=598,unit=count,metric=operation.partition.executedOperationsCount]=54
[ecstatic_hypatia] [05:17:46,944] [partitionId=631,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=653,unit=count,metric=operation.partition.executedOperationsCount]=51
[ecstatic_hypatia] [05:17:46,944] [partitionId=675,unit=count,metric=operation.partition.executedOperationsCount]=57
[ecstatic_hypatia] [05:17:46,944] [partitionId=697,unit=count,metric=operation.partition.executedOperationsCount]=54
[ecstatic_hypatia] [05:17:46,944] [partitionId=950,unit=count,metric=operation.partition.executedOperationsCount]=11
[ecstatic_hypatia] [05:17:46,944] [partitionId=972,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=994,unit=count,metric=operation.partition.executedOperationsCount]=7
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=operation.retryCount]=2
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-1,unit=count,metric=operation.thread.completedRunnableCount]=457
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-6,unit=count,metric=operation.thread.completedTotalCount]=7280
[ecstatic_hypatia] [05:17:46,944] [genericId=2,unit=count,metric=operation.generic.executedOperationsCount]=254
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-5,unit=count,metric=operation.thread.completedOperationBatchCount]=89
[ecstatic_hypatia] [05:17:46,944] [service=hz:core:proxyService,unit=count,metric=event.publicationCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=496,unit=count,metric=operation.partition.executedOperationsCount]=68
[ecstatic_hypatia] [05:17:46,944] [partitionId=474,unit=count,metric=operation.partition.executedOperationsCount]=72
[ecstatic_hypatia] [05:17:46,944] [partitionId=452,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=331,unit=count,metric=operation.partition.executedOperationsCount]=72
[ecstatic_hypatia] [05:17:46,944] [partitionId=353,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=375,unit=count,metric=operation.partition.executedOperationsCount]=70
[ecstatic_hypatia] [05:17:46,944] [partitionId=397,unit=count,metric=operation.partition.executedOperationsCount]=68
[ecstatic_hypatia] [05:17:46,944] [partitionId=430,unit=count,metric=operation.partition.executedOperationsCount]=70
[ecstatic_hypatia] [05:17:46,944] [partitionId=298,unit=count,metric=operation.partition.executedOperationsCount]=74
[ecstatic_hypatia] [05:17:46,944] [partitionId=276,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=254,unit=count,metric=operation.partition.executedOperationsCount]=78
[ecstatic_hypatia] [05:17:46,944] [partitionId=232,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=210,unit=count,metric=operation.partition.executedOperationsCount]=82
[ecstatic_hypatia] [05:17:46,944] [partitionId=199,unit=count,metric=operation.partition.executedOperationsCount]=82
[ecstatic_hypatia] [05:17:46,944] [partitionId=111,unit=count,metric=operation.partition.executedOperationsCount]=87
[ecstatic_hypatia] [05:17:46,944] [partitionId=133,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=155,unit=count,metric=operation.partition.executedOperationsCount]=81
[ecstatic_hypatia] [05:17:46,944] [partitionId=177,unit=count,metric=operation.partition.executedOperationsCount]=80
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-4,unit=count,metric=operation.thread.priorityPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=771,unit=count,metric=operation.partition.executedOperationsCount]=39
[ecstatic_hypatia] [05:17:46,944] [partitionId=793,unit=count,metric=operation.partition.executedOperationsCount]=43
[ecstatic_hypatia] [05:17:46,944] [partitionId=870,unit=count,metric=operation.partition.executedOperationsCount]=25
[ecstatic_hypatia] [05:17:46,944] [partitionId=892,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [unit=ms,metric=gc.unknownTime]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=551,unit=count,metric=operation.partition.executedOperationsCount]=59
[ecstatic_hypatia] [05:17:46,944] [partitionId=573,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=595,unit=count,metric=operation.partition.executedOperationsCount]=56
[ecstatic_hypatia] [05:17:46,944] [partitionId=650,unit=count,metric=operation.partition.executedOperationsCount]=51
[ecstatic_hypatia] [05:17:46,944] [partitionId=672,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=694,unit=count,metric=operation.partition.executedOperationsCount]=51
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-2,unit=count,metric=operation.thread.normalPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=991,unit=count,metric=operation.partition.executedOperationsCount]=22
[ecstatic_hypatia] [05:17:46,944] [unit=ms,metric=cluster.clock.clusterUpTime]=110003
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-0,unit=count,metric=operation.thread.completedRunnableCount]=1347
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-7,unit=count,metric=operation.thread.completedTotalCount]=6710
[ecstatic_hypatia] [05:17:46,944] [genericId=1,unit=count,metric=operation.generic.executedOperationsCount]=271
[ecstatic_hypatia] [05:17:46,944] [service=hz:core:clusterService,unit=count,metric=event.listenerCount]=1
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-6,unit=count,metric=operation.thread.completedOperationBatchCount]=89
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=operation.partitionThreadCount]=8
[ecstatic_hypatia] [05:17:46,944] [partitionId=497,unit=count,metric=operation.partition.executedOperationsCount]=61
[ecstatic_hypatia] [05:17:46,944] [partitionId=475,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=453,unit=count,metric=operation.partition.executedOperationsCount]=66
[ecstatic_hypatia] [05:17:46,944] [partitionId=431,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=398,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=310,unit=count,metric=operation.partition.executedOperationsCount]=75
[ecstatic_hypatia] [05:17:46,944] [partitionId=332,unit=count,metric=operation.partition.executedOperationsCount]=74
[ecstatic_hypatia] [05:17:46,944] [partitionId=354,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=376,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=299,unit=count,metric=operation.partition.executedOperationsCount]=77
[ecstatic_hypatia] [05:17:46,944] [partitionId=277,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=255,unit=count,metric=operation.partition.executedOperationsCount]=76
[ecstatic_hypatia] [05:17:46,944] [partitionId=233,unit=count,metric=operation.partition.executedOperationsCount]=77
[ecstatic_hypatia] [05:17:46,944] [partitionId=211,unit=count,metric=operation.partition.executedOperationsCount]=82
[ecstatic_hypatia] [05:17:46,944] [partitionId=112,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=134,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=156,unit=count,metric=operation.partition.executedOperationsCount]=84
[ecstatic_hypatia] [05:17:46,944] [partitionId=178,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-3,unit=count,metric=operation.thread.priorityPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=750,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=772,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=794,unit=count,metric=operation.partition.executedOperationsCount]=44
[ecstatic_hypatia] [05:17:46,944] [partitionId=871,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=893,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=530,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=552,unit=count,metric=operation.partition.executedOperationsCount]=57
[ecstatic_hypatia] [05:17:46,944] [partitionId=574,unit=count,metric=operation.partition.executedOperationsCount]=56
[ecstatic_hypatia] [05:17:46,944] [partitionId=596,unit=count,metric=operation.partition.executedOperationsCount]=51
[ecstatic_hypatia] [05:17:46,944] [partitionId=651,unit=count,metric=operation.partition.executedOperationsCount]=49
[ecstatic_hypatia] [05:17:46,944] [partitionId=673,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=695,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-1,unit=count,metric=operation.thread.normalPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=970,unit=count,metric=operation.partition.executedOperationsCount]=18
[ecstatic_hypatia] [05:17:46,944] [partitionId=992,unit=count,metric=operation.partition.executedOperationsCount]=20
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=partitions.partitionCount]=1111
[ecstatic_hypatia] [05:17:46,944] [service=hz:core:partitionService,unit=count,metric=event.publicationCount]=396
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-1,unit=count,metric=operation.thread.errorCount]=0
[ecstatic_hypatia] [05:17:46,944] [metric=thread.daemonThreadCount]=476
[ecstatic_hypatia] [05:17:46,944] [unit=bytes,metric=memory.freeNative]=0
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=cluster.size]=5
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-3,unit=count,metric=operation.thread.completedOperationBatchCount]=89
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.priority-generic-operation.thread-0,unit=count,metric=operation.thread.completedRunnableCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-2,unit=count,metric=operation.thread.priorityPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=719,unit=count,metric=operation.partition.executedOperationsCount]=46
[ecstatic_hypatia] [05:17:46,944] [partitionId=818,unit=count,metric=operation.partition.executedOperationsCount]=42
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-4,unit=count,metric=operation.thread.normalPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=917,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=939,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=75,unit=count,metric=operation.partition.executedOperationsCount]=89
[ecstatic_hypatia] [05:17:46,944] [partitionId=53,unit=count,metric=operation.partition.executedOperationsCount]=86
[ecstatic_hypatia] [05:17:46,944] [partitionId=31,unit=count,metric=operation.partition.executedOperationsCount]=91
[ecstatic_hypatia] [05:17:46,944] [partitionId=97,unit=count,metric=operation.partition.executedOperationsCount]=86
[ecstatic_hypatia] [05:17:46,944] [partitionId=1017,unit=count,metric=operation.partition.executedOperationsCount]=7
[ecstatic_hypatia] [05:17:46,944] [partitionId=1039,unit=count,metric=operation.partition.executedOperationsCount]=4
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-0,unit=count,metric=operation.thread.errorCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-4,unit=count,metric=operation.thread.completedOperationBatchCount]=89
[ecstatic_hypatia] [05:17:46,944] [dir=user.home,unit=bytes,metric=file.partition.freeSpace]=15400296448
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-1,unit=count,metric=operation.thread.priorityPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=819,unit=count,metric=operation.partition.executedOperationsCount]=36
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-3,unit=count,metric=operation.thread.normalPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=918,unit=count,metric=operation.partition.executedOperationsCount]=30
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.priority-generic-operation.thread-0,unit=count,metric=operation.thread.completedOperationBatchCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=74,unit=count,metric=operation.partition.executedOperationsCount]=88
[ecstatic_hypatia] [05:17:46,944] [partitionId=52,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=30,unit=count,metric=operation.partition.executedOperationsCount]=90
[ecstatic_hypatia] [05:17:46,944] [partitionId=96,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=1018,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=operation.priorityQueueSize]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-1,unit=count,metric=operation.thread.completedOperationBatchCount]=89
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=proxy.proxyCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-0,unit=count,metric=operation.thread.priorityPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=717,unit=count,metric=operation.partition.executedOperationsCount]=50
[ecstatic_hypatia] [05:17:46,944] [partitionId=739,unit=count,metric=operation.partition.executedOperationsCount]=44
[ecstatic_hypatia] [05:17:46,944] [partitionId=816,unit=count,metric=operation.partition.executedOperationsCount]=41
[ecstatic_hypatia] [05:17:46,944] [partitionId=838,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=519,unit=count,metric=operation.partition.executedOperationsCount]=69
[ecstatic_hypatia] [05:17:46,944] [partitionId=618,unit=count,metric=operation.partition.executedOperationsCount]=55
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-6,unit=count,metric=operation.thread.normalPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=915,unit=count,metric=operation.partition.executedOperationsCount]=33
[ecstatic_hypatia] [05:17:46,944] [partitionId=937,unit=count,metric=operation.partition.executedOperationsCount]=31
[ecstatic_hypatia] [05:17:46,944] [partitionId=959,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=11,unit=count,metric=operation.partition.executedOperationsCount]=91
[ecstatic_hypatia] [05:17:46,944] [partitionId=33,unit=count,metric=operation.partition.executedOperationsCount]=88
[ecstatic_hypatia] [05:17:46,944] [partitionId=55,unit=count,metric=operation.partition.executedOperationsCount]=89
[ecstatic_hypatia] [05:17:46,944] [partitionId=77,unit=count,metric=operation.partition.executedOperationsCount]=89
[ecstatic_hypatia] [05:17:46,944] [partitionId=99,unit=count,metric=operation.partition.executedOperationsCount]=85
[ecstatic_hypatia] [05:17:46,944] [partitionId=1019,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [unit=ns,metric=partitions.elapsedMigrationOperationTime]=0
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=gc.minorCount]=8627
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-2,unit=count,metric=operation.thread.completedOperationBatchCount]=89
[ecstatic_hypatia] [05:17:46,944] [metric=os.freeSwapSpaceSize]=2849095680
[ecstatic_hypatia] [05:17:46,944] [metric=os.totalPhysicalMemorySize]=405449981952
[ecstatic_hypatia] [05:17:46,944] [partitionId=718,unit=count,metric=operation.partition.executedOperationsCount]=46
[ecstatic_hypatia] [05:17:46,944] [partitionId=817,unit=count,metric=operation.partition.executedOperationsCount]=39
[ecstatic_hypatia] [05:17:46,944] [partitionId=839,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=619,unit=count,metric=operation.partition.executedOperationsCount]=51
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-5,unit=count,metric=operation.thread.normalPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=916,unit=count,metric=operation.partition.executedOperationsCount]=24
[ecstatic_hypatia] [05:17:46,944] [partitionId=938,unit=count,metric=operation.partition.executedOperationsCount]=14
[ecstatic_hypatia] [05:17:46,944] [metric=runtime.availableProcessors]=8
[ecstatic_hypatia] [05:17:46,944] [partitionId=10,unit=count,metric=operation.partition.executedOperationsCount]=91
[ecstatic_hypatia] [05:17:46,944] [partitionId=32,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=54,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=76,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=98,unit=count,metric=operation.partition.executedOperationsCount]=86
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-5,unit=count,metric=operation.thread.errorCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.generic-operation.thread-1,unit=count,metric=operation.thread.completedPacketCount]=254
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=gc.unknownCount]=0
[ecstatic_hypatia] [05:17:46,944] [unit=bytes,metric=memory.maxMetadata]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=418,unit=count,metric=operation.partition.executedOperationsCount]=68
[ecstatic_hypatia] [05:17:46,944] [partitionId=319,unit=count,metric=operation.partition.executedOperationsCount]=77
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=partitions.localPartitionCount]=145
[ecstatic_hypatia] [05:17:46,944] [partitionId=715,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=737,unit=count,metric=operation.partition.executedOperationsCount]=41
[ecstatic_hypatia] [05:17:46,944] [partitionId=759,unit=count,metric=operation.partition.executedOperationsCount]=47
[ecstatic_hypatia] [05:17:46,944] [partitionId=814,unit=count,metric=operation.partition.executedOperationsCount]=34
[ecstatic_hypatia] [05:17:46,944] [partitionId=836,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=858,unit=count,metric=operation.partition.executedOperationsCount]=34
[ecstatic_hypatia] [05:17:46,944] [partitionId=517,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=539,unit=count,metric=operation.partition.executedOperationsCount]=68
[ecstatic_hypatia] [05:17:46,944] [partitionId=616,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=638,unit=count,metric=operation.partition.executedOperationsCount]=57
[ecstatic_hypatia] [05:17:46,944] [partitionId=913,unit=count,metric=operation.partition.executedOperationsCount]=32
[ecstatic_hypatia] [05:17:46,944] [partitionId=935,unit=count,metric=operation.partition.executedOperationsCount]=28
[ecstatic_hypatia] [05:17:46,944] [partitionId=957,unit=count,metric=operation.partition.executedOperationsCount]=16
[ecstatic_hypatia] [05:17:46,944] [partitionId=979,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=71,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=93,unit=count,metric=operation.partition.executedOperationsCount]=84
[ecstatic_hypatia] [05:17:46,944] [partitionId=1013,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=1035,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=1057,unit=count,metric=operation.partition.executedOperationsCount]=3
[ecstatic_hypatia] [05:17:46,944] [partitionId=1079,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-4,unit=count,metric=operation.thread.errorCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.generic-operation.thread-2,unit=count,metric=operation.thread.completedPacketCount]=284
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-0,unit=count,metric=operation.thread.completedOperationBatchCount]=89
[ecstatic_hypatia] [05:17:46,944] [partitionId=419,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=716,unit=count,metric=operation.partition.executedOperationsCount]=47
[ecstatic_hypatia] [05:17:46,944] [partitionId=738,unit=count,metric=operation.partition.executedOperationsCount]=41
[ecstatic_hypatia] [05:17:46,944] [partitionId=815,unit=count,metric=operation.partition.executedOperationsCount]=39
[ecstatic_hypatia] [05:17:46,944] [partitionId=837,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=859,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=518,unit=count,metric=operation.partition.executedOperationsCount]=60
[ecstatic_hypatia] [05:17:46,944] [partitionId=617,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=639,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-7,unit=count,metric=operation.thread.normalPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=914,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=936,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=958,unit=count,metric=operation.partition.executedOperationsCount]=11
[ecstatic_hypatia] [05:17:46,944] [partitionId=70,unit=count,metric=operation.partition.executedOperationsCount]=90
[ecstatic_hypatia] [05:17:46,944] [partitionId=92,unit=count,metric=operation.partition.executedOperationsCount]=86
[ecstatic_hypatia] [05:17:46,944] [partitionId=1014,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=1036,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=1058,unit=count,metric=operation.partition.executedOperationsCount]=10
[ecstatic_hypatia] [05:17:46,944] [service=hz:core:partitionService,unit=count,metric=event.listenerCount]=15
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=operation.invocations.pending]=25
[ecstatic_hypatia] [05:17:46,944] [unit=bytes,metric=memory.usedNative]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-3,unit=count,metric=operation.thread.errorCount]=0
[ecstatic_hypatia] [05:17:46,944] [dir=user.home,unit=bytes,metric=file.partition.totalSpace]=21464350720
[ecstatic_hypatia] [05:17:46,944] [unit=ms,metric=cluster.clock.clusterTime]=1622783866940
[ecstatic_hypatia] [05:17:46,944] [partitionId=317,unit=count,metric=operation.partition.executedOperationsCount]=73
[ecstatic_hypatia] [05:17:46,944] [partitionId=339,unit=count,metric=operation.partition.executedOperationsCount]=72
[ecstatic_hypatia] [05:17:46,944] [partitionId=416,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=438,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=218,unit=count,metric=operation.partition.executedOperationsCount]=80
[ecstatic_hypatia] [05:17:46,944] [partitionId=119,unit=count,metric=operation.partition.executedOperationsCount]=86
[ecstatic_hypatia] [05:17:46,944] [partitionId=713,unit=count,metric=operation.partition.executedOperationsCount]=50
[ecstatic_hypatia] [05:17:46,944] [partitionId=735,unit=count,metric=operation.partition.executedOperationsCount]=45
[ecstatic_hypatia] [05:17:46,944] [partitionId=757,unit=count,metric=operation.partition.executedOperationsCount]=42
[ecstatic_hypatia] [05:17:46,944] [partitionId=779,unit=count,metric=operation.partition.executedOperationsCount]=47
[ecstatic_hypatia] [05:17:46,944] [partitionId=812,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=834,unit=count,metric=operation.partition.executedOperationsCount]=38
[ecstatic_hypatia] [05:17:46,944] [partitionId=856,unit=count,metric=operation.partition.executedOperationsCount]=35
[ecstatic_hypatia] [05:17:46,944] [partitionId=878,unit=count,metric=operation.partition.executedOperationsCount]=34
[ecstatic_hypatia] [05:17:46,944] [partitionId=515,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=537,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=559,unit=count,metric=operation.partition.executedOperationsCount]=57
[ecstatic_hypatia] [05:17:46,944] [partitionId=614,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=636,unit=count,metric=operation.partition.executedOperationsCount]=54
[ecstatic_hypatia] [05:17:46,944] [partitionId=658,unit=count,metric=operation.partition.executedOperationsCount]=53
[ecstatic_hypatia] [05:17:46,944] [partitionId=911,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=933,unit=count,metric=operation.partition.executedOperationsCount]=28
[ecstatic_hypatia] [05:17:46,944] [partitionId=955,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=977,unit=count,metric=operation.partition.executedOperationsCount]=23
[ecstatic_hypatia] [05:17:46,944] [partitionId=999,unit=count,metric=operation.partition.executedOperationsCount]=6
[ecstatic_hypatia] [05:17:46,944] [partitionId=73,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=51,unit=count,metric=operation.partition.executedOperationsCount]=90
[ecstatic_hypatia] [05:17:46,944] [partitionId=95,unit=count,metric=operation.partition.executedOperationsCount]=84
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=operation.invocations.normalTimeouts]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=1015,unit=count,metric=operation.partition.executedOperationsCount]=10
[ecstatic_hypatia] [05:17:46,944] [partitionId=1037,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=1059,unit=count,metric=operation.partition.executedOperationsCount]=5
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=operation.failedBackups]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-2,unit=count,metric=operation.thread.errorCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.generic-operation.thread-0,unit=count,metric=operation.thread.completedPacketCount]=271
[ecstatic_hypatia] [05:17:46,944] [partitionId=417,unit=count,metric=operation.partition.executedOperationsCount]=71
[ecstatic_hypatia] [05:17:46,944] [partitionId=318,unit=count,metric=operation.partition.executedOperationsCount]=76
[ecstatic_hypatia] [05:17:46,944] [partitionId=439,unit=count,metric=operation.partition.executedOperationsCount]=72
[ecstatic_hypatia] [05:17:46,944] [partitionId=219,unit=count,metric=operation.partition.executedOperationsCount]=82
[ecstatic_hypatia] [05:17:46,944] [partitionId=714,unit=count,metric=operation.partition.executedOperationsCount]=50
[ecstatic_hypatia] [05:17:46,944] [partitionId=736,unit=count,metric=operation.partition.executedOperationsCount]=48
[ecstatic_hypatia] [05:17:46,944] [partitionId=758,unit=count,metric=operation.partition.executedOperationsCount]=43
[ecstatic_hypatia] [05:17:46,944] [partitionId=813,unit=count,metric=operation.partition.executedOperationsCount]=40
[ecstatic_hypatia] [05:17:46,944] [partitionId=835,unit=count,metric=operation.partition.executedOperationsCount]=37
[ecstatic_hypatia] [05:17:46,944] [partitionId=857,unit=count,metric=operation.partition.executedOperationsCount]=28
[ecstatic_hypatia] [05:17:46,944] [partitionId=879,unit=count,metric=operation.partition.executedOperationsCount]=24
[ecstatic_hypatia] [05:17:46,944] [partitionId=516,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=538,unit=count,metric=operation.partition.executedOperationsCount]=65
[ecstatic_hypatia] [05:17:46,944] [partitionId=615,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=637,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=659,unit=count,metric=operation.partition.executedOperationsCount]=53
[ecstatic_hypatia] [05:17:46,944] [partitionId=912,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=934,unit=count,metric=operation.partition.executedOperationsCount]=30
[ecstatic_hypatia] [05:17:46,944] [partitionId=956,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=978,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=partitions.completedMigrations]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=72,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=50,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=94,unit=count,metric=operation.partition.executedOperationsCount]=86
[ecstatic_hypatia] [05:17:46,944] [partitionId=1016,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=1038,unit=count,metric=operation.partition.executedOperationsCount]=3
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster,unit=count,metric=executor.internal.maximumPoolSize]=2
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster,unit=count,metric=executor.internal.completedTasks]=23
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:event,unit=count,metric=executor.internal.maximumPoolSize]=1
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:event,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:event,unit=count,metric=executor.internal.completedTasks]=90
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:event,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:event,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled:cqc:d1dcd21e-2d41-44c9-9be6-f09da504606d,unit=count,metric=executor.internal.maximumPoolSize]=1
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled:cqc:d1dcd21e-2d41-44c9-9be6-f09da504606d,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled:cqc:d1dcd21e-2d41-44c9-9be6-f09da504606d,unit=count,metric=executor.internal.completedTasks]=22
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled:cqc:d1dcd21e-2d41-44c9-9be6-f09da504606d,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled:cqc:d1dcd21e-2d41-44c9-9be6-f09da504606d,unit=count,metric=executor.internal.remainingQueueCapacity]=10000
[ecstatic_hypatia] [05:17:46,944] [name=hz:offloadable,unit=count,metric=executor.internal.maximumPoolSize]=8
[ecstatic_hypatia] [05:17:46,944] [name=hz:offloadable,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:offloadable,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:offloadable,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:offloadable,unit=count,metric=executor.internal.remainingQueueCapacity]=100000
[ecstatic_hypatia] [05:17:46,944] [name=hz:client,unit=count,metric=executor.internal.maximumPoolSize]=8
[ecstatic_hypatia] [05:17:46,944] [name=hz:client,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:client,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:client,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:client,unit=count,metric=executor.internal.remainingQueueCapacity]=800000
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:mastership,unit=count,metric=executor.internal.maximumPoolSize]=1
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:mastership,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:mastership,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:mastership,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:mastership,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
[ecstatic_hypatia] [05:17:46,944] [name=hz:async,unit=count,metric=executor.internal.maximumPoolSize]=8
[ecstatic_hypatia] [05:17:46,944] [name=hz:async,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:async,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:async,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:async,unit=count,metric=executor.internal.remainingQueueCapacity]=100000
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-blocking-tasks,unit=count,metric=executor.internal.maximumPoolSize]=160
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-blocking-tasks,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-blocking-tasks,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-blocking-tasks,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-blocking-tasks,unit=count,metric=executor.internal.remainingQueueCapacity]=800000
[ecstatic_hypatia] [05:17:46,944] [name=hz:mc,unit=count,metric=executor.internal.maximumPoolSize]=2
[ecstatic_hypatia] [05:17:46,944] [name=hz:mc,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:mc,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:mc,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:mc,unit=count,metric=executor.internal.remainingQueueCapacity]=2000
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-query,unit=count,metric=executor.internal.maximumPoolSize]=8
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-query,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-query,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-query,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-query,unit=count,metric=executor.internal.remainingQueueCapacity]=800000
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled,unit=count,metric=executor.internal.maximumPoolSize]=16
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled,unit=count,metric=executor.internal.completedTasks]=1991
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled,unit=count,metric=executor.internal.remainingQueueCapacity]=800000
[ecstatic_hypatia] [05:17:46,944] [name=hz:system,unit=count,metric=executor.internal.maximumPoolSize]=8
[ecstatic_hypatia] [05:17:46,944] [name=hz:system,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:system,unit=count,metric=executor.internal.completedTasks]=38
[ecstatic_hypatia] [05:17:46,944] [name=hz:system,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:system,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
[ecstatic_hypatia] [05:17:46,944] [name=MetricsPublisher,unit=count,metric=executor.internal.maximumPoolSize]=16
[ecstatic_hypatia] [05:17:46,944] [name=MetricsPublisher,unit=count,metric=executor.internal.poolSize]=1
[ecstatic_hypatia] [05:17:46,944] [name=MetricsPublisher,unit=count,metric=executor.internal.completedTasks]=109
[ecstatic_hypatia] [05:17:46,944] [name=MetricsPublisher,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=MetricsPublisher,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:splitbrain,unit=count,metric=executor.internal.maximumPoolSize]=2
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:splitbrain,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:splitbrain,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:splitbrain,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:splitbrain,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:version:auto:upgrade,unit=count,metric=executor.internal.maximumPoolSize]=1
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:version:auto:upgrade,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:version:auto:upgrade,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:version:auto:upgrade,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:version:auto:upgrade,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
[ecstatic_hypatia] [05:17:46,944] [name=hz:query,unit=count,metric=executor.internal.maximumPoolSize]=16
[ecstatic_hypatia] [05:17:46,944] [name=hz:query,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:query,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:query,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:query,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
[ecstatic_hypatia] [05:17:46,944] [name=hz:CRDTReplicationMigration,unit=count,metric=executor.internal.maximumPoolSize]=16
[ecstatic_hypatia] [05:17:46,944] [name=hz:CRDTReplicationMigration,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:CRDTReplicationMigration,unit=count,metric=executor.internal.completedTasks]=125
[ecstatic_hypatia] [05:17:46,944] [name=hz:CRDTReplicationMigration,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:CRDTReplicationMigration,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
[ecstatic_hypatia] [05:17:46,944] [name=hz:io,unit=count,metric=executor.internal.maximumPoolSize]=16
[ecstatic_hypatia] [05:17:46,944] [name=hz:io,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:io,unit=count,metric=executor.internal.completedTasks]=26
[ecstatic_hypatia] [05:17:46,944] [name=hz:io,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:io,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
```
com.hazelcast.partition.PartitionDistributionTest.testTenNodes_1111Partitions - _4.2.z_ (commit 1e1a816a1fb4af2a8b9377a7751949bf0488e267)
Failed on the Sonar build (Oracle JDK 11): http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-4.maintenance-sonar/832/testReport/com.hazelcast.partition/PartitionDistributionTest/testTenNodes_1111Partitions/
Stacktrace:
```
java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:87)
at org.junit.Assert.assertTrue(Assert.java:42)
at org.junit.Assert.assertNotNull(Assert.java:713)
at org.junit.Assert.assertNotNull(Assert.java:723)
at com.hazelcast.partition.PartitionDistributionTest.testPartitionDistribution(PartitionDistributionTest.java:200)
at com.hazelcast.partition.PartitionDistributionTest.testPartitionDistribution(PartitionDistributionTest.java:156)
at com.hazelcast.partition.PartitionDistributionTest.testTenNodes_1111Partitions(PartitionDistributionTest.java:142)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:115)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:107)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:834)
```
Standard output:
```
05:15:56,927 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [LOCAL] [dev] [4.2.1-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:15:56,927 INFO |testTenNodes_1111Partitions| - [system] testTenNodes_1111Partitions - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Hazelcast 4.2.1-SNAPSHOT (20210603 - 1e1a816) starting at [127.0.0.1]:5701
05:15:56,928 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:15:56,931 WARN |testTenNodes_1111Partitions| - [CPSubsystem] testTenNodes_1111Partitions - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:15:56,937 INFO |testTenNodes_1111Partitions| - [Diagnostics] testTenNodes_1111Partitions - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:15:56,937 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5701 is STARTING
05:15:56,937 INFO |testTenNodes_1111Partitions| - [ClusterService] testTenNodes_1111Partitions - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT]
Members {size:1, ver:1} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e this
]
05:15:56,937 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5701 is STARTED
05:15:56,937 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [LOCAL] [dev] [4.2.1-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:15:56,937 INFO |testTenNodes_1111Partitions| - [system] testTenNodes_1111Partitions - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Hazelcast 4.2.1-SNAPSHOT (20210603 - 1e1a816) starting at [127.0.0.1]:5702
05:15:56,938 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:15:56,940 WARN |testTenNodes_1111Partitions| - [CPSubsystem] testTenNodes_1111Partitions - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:15:56,946 INFO |testTenNodes_1111Partitions| - [Diagnostics] testTenNodes_1111Partitions - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:15:56,946 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5702 is STARTING
05:15:56,946 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5701, alive=true}
05:15:56,947 INFO |testTenNodes_1111Partitions| - [MockServer] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5702, alive=true}
05:15:56,947 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT]
Members {size:2, ver:2} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e this
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
]
05:15:57,047 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.clever_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT]
Members {size:2, ver:2} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31 this
]
05:15:57,447 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5702 is STARTED
05:15:57,447 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [LOCAL] [dev] [4.2.1-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:15:57,448 INFO |testTenNodes_1111Partitions| - [system] testTenNodes_1111Partitions - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Hazelcast 4.2.1-SNAPSHOT (20210603 - 1e1a816) starting at [127.0.0.1]:5703
05:15:57,449 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:15:57,452 WARN |testTenNodes_1111Partitions| - [CPSubsystem] testTenNodes_1111Partitions - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:15:57,459 INFO |testTenNodes_1111Partitions| - [Diagnostics] testTenNodes_1111Partitions - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:15:57,459 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5703 is STARTING
05:15:57,459 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5701, alive=true}
05:15:57,460 INFO |testTenNodes_1111Partitions| - [MockServer] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5703, alive=true}
05:15:57,460 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT]
Members {size:3, ver:3} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e this
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
]
05:15:57,461 INFO |testTenNodes_1111Partitions| - [MockServer] hz.clever_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5703, alive=true}
05:15:57,461 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.clever_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT]
Members {size:3, ver:3} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31 this
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
]
05:15:57,561 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.musing_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT]
Members {size:3, ver:3} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1 this
]
05:15:57,960 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5702, alive=true}
05:15:57,960 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5703 is STARTED
05:15:57,960 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [LOCAL] [dev] [4.2.1-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:15:57,961 INFO |testTenNodes_1111Partitions| - [system] testTenNodes_1111Partitions - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Hazelcast 4.2.1-SNAPSHOT (20210603 - 1e1a816) starting at [127.0.0.1]:5704
05:15:57,962 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:15:57,965 WARN |testTenNodes_1111Partitions| - [CPSubsystem] testTenNodes_1111Partitions - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:15:57,971 INFO |testTenNodes_1111Partitions| - [Diagnostics] testTenNodes_1111Partitions - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:15:57,971 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5704 is STARTING
05:15:57,972 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5701, alive=true}
05:15:57,972 INFO |testTenNodes_1111Partitions| - [MockServer] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5704, alive=true}
05:15:57,973 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT]
Members {size:4, ver:4} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e this
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
]
05:15:57,973 INFO |testTenNodes_1111Partitions| - [MockServer] hz.clever_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5704, alive=true}
05:15:57,973 INFO |testTenNodes_1111Partitions| - [MockServer] hz.musing_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5704, alive=true}
05:15:57,973 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.clever_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT]
Members {size:4, ver:4} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31 this
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
]
05:15:57,974 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.musing_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT]
Members {size:4, ver:4} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1 this
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
]
05:15:58,073 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.distracted_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT]
Members {size:4, ver:4} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac this
]
05:15:58,472 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5702, alive=true}
05:15:58,472 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5703, alive=true}
05:15:58,472 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5704 is STARTED
05:15:58,473 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [LOCAL] [dev] [4.2.1-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:15:58,473 INFO |testTenNodes_1111Partitions| - [system] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Hazelcast 4.2.1-SNAPSHOT (20210603 - 1e1a816) starting at [127.0.0.1]:5705
05:15:58,473 INFO |testTenNodes_1111Partitions| - [HealthMonitor] hz.distracted_hypatia.HealthMonitor - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] processors=8, physical.memory.total=377.6G, physical.memory.free=112.9G, swap.space.total=4.0G, swap.space.free=2.7G, heap.memory.used=1.4G, heap.memory.free=601.2M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=70.63%, heap.memory.used/max=70.63%, minor.gc.count=8618, minor.gc.time=79659ms, major.gc.count=5, major.gc.time=1280ms, load.process=2.25%, load.system=21.36%, load.systemAverage=9.55, thread.count=561, thread.peakCount=2580, cluster.timeDiff=-1, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=17, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
05:15:58,474 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:15:58,477 WARN |testTenNodes_1111Partitions| - [CPSubsystem] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:15:58,483 INFO |testTenNodes_1111Partitions| - [Diagnostics] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:15:58,483 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5705 is STARTING
05:15:58,483 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5701, alive=true}
05:15:58,483 INFO |testTenNodes_1111Partitions| - [MockServer] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5705, alive=true}
05:15:58,484 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT]
Members {size:5, ver:5} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e this
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
]
05:15:58,485 INFO |testTenNodes_1111Partitions| - [MockServer] hz.clever_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5705, alive=true}
05:15:58,485 INFO |testTenNodes_1111Partitions| - [MockServer] hz.distracted_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5705, alive=true}
05:15:58,485 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.clever_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT]
Members {size:5, ver:5} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31 this
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
]
05:15:58,485 INFO |testTenNodes_1111Partitions| - [MockServer] hz.musing_hypatia.generic-operation.thread-1 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5705, alive=true}
05:15:58,485 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.distracted_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT]
Members {size:5, ver:5} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac this
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
]
05:15:58,485 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.musing_hypatia.generic-operation.thread-1 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT]
Members {size:5, ver:5} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1 this
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
]
05:15:58,584 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.modest_hypatia.generic-operation.thread-1 - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT]
Members {size:5, ver:5} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a this
]
05:15:58,983 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5702, alive=true}
05:15:58,984 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5703, alive=true}
05:15:58,984 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5704, alive=true}
05:15:58,984 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5705 is STARTED
05:15:58,984 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [LOCAL] [dev] [4.2.1-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:15:58,984 INFO |testTenNodes_1111Partitions| - [system] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Hazelcast 4.2.1-SNAPSHOT (20210603 - 1e1a816) starting at [127.0.0.1]:5706
05:15:58,985 INFO |testTenNodes_1111Partitions| - [HealthMonitor] hz.modest_hypatia.HealthMonitor - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] processors=8, physical.memory.total=377.6G, physical.memory.free=112.9G, swap.space.total=4.0G, swap.space.free=2.7G, heap.memory.used=1.5G, heap.memory.free=531.4M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=74.04%, heap.memory.used/max=74.04%, minor.gc.count=8618, minor.gc.time=79659ms, major.gc.count=5, major.gc.time=1280ms, load.process=2.74%, load.system=23.91%, load.systemAverage=9.55, thread.count=603, thread.peakCount=2580, cluster.timeDiff=-1, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=18, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
05:15:58,985 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:15:58,988 WARN |testTenNodes_1111Partitions| - [CPSubsystem] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:15:58,994 INFO |testTenNodes_1111Partitions| - [Diagnostics] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:15:58,994 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5706 is STARTING
05:15:58,995 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5706, remoteEndpoint=[127.0.0.1]:5701, alive=true}
05:15:58,995 INFO |testTenNodes_1111Partitions| - [MockServer] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5706, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5706, alive=true}
05:15:58,995 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT]
Members {size:6, ver:6} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e this
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
]
05:15:58,997 INFO |testTenNodes_1111Partitions| - [MockServer] hz.clever_hypatia.generic-operation.thread-0 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5706, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5706, alive=true}
05:15:58,997 INFO |testTenNodes_1111Partitions| - [MockServer] hz.musing_hypatia.generic-operation.thread-1 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5706, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5706, alive=true}
05:15:58,997 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.clever_hypatia.generic-operation.thread-0 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT]
Members {size:6, ver:6} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31 this
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
]
05:15:58,997 INFO |testTenNodes_1111Partitions| - [MockServer] hz.distracted_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5706, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5706, alive=true}
05:15:58,997 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.musing_hypatia.generic-operation.thread-1 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT]
Members {size:6, ver:6} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1 this
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
]
05:15:58,997 INFO |testTenNodes_1111Partitions| - [MockServer] hz.modest_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5706, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5706, alive=true}
05:15:58,997 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.distracted_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT]
Members {size:6, ver:6} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac this
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
]
05:15:58,997 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.modest_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT]
Members {size:6, ver:6} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a this
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
]
05:15:59,096 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.upbeat_hypatia.generic-operation.thread-0 - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT]
Members {size:6, ver:6} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79 this
]
05:15:59,495 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5706, remoteEndpoint=[127.0.0.1]:5702, alive=true}
05:15:59,495 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5706, remoteEndpoint=[127.0.0.1]:5703, alive=true}
05:15:59,495 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5706, remoteEndpoint=[127.0.0.1]:5704, alive=true}
05:15:59,495 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5706, remoteEndpoint=[127.0.0.1]:5705, alive=true}
05:15:59,495 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5706 is STARTED
05:15:59,496 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [LOCAL] [dev] [4.2.1-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:15:59,496 INFO |testTenNodes_1111Partitions| - [system] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] Hazelcast 4.2.1-SNAPSHOT (20210603 - 1e1a816) starting at [127.0.0.1]:5707
05:15:59,496 INFO |testTenNodes_1111Partitions| - [HealthMonitor] hz.upbeat_hypatia.HealthMonitor - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] processors=8, physical.memory.total=377.6G, physical.memory.free=112.9G, swap.space.total=4.0G, swap.space.free=2.7G, heap.memory.used=1.5G, heap.memory.free=461.7M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=77.45%, heap.memory.used/max=77.45%, minor.gc.count=8618, minor.gc.time=79659ms, major.gc.count=5, major.gc.time=1280ms, load.process=2.50%, load.system=9.68%, load.systemAverage=9.55, thread.count=644, thread.peakCount=2580, cluster.timeDiff=-1, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=19, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
05:15:59,497 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:15:59,500 WARN |testTenNodes_1111Partitions| - [CPSubsystem] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:15:59,506 INFO |testTenNodes_1111Partitions| - [Diagnostics] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:15:59,506 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5707 is STARTING
05:15:59,506 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5707, remoteEndpoint=[127.0.0.1]:5701, alive=true}
05:15:59,507 INFO |testTenNodes_1111Partitions| - [MockServer] hz.ecstatic_hypatia.generic-operation.thread-2 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5707, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5707, alive=true}
05:15:59,507 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.ecstatic_hypatia.generic-operation.thread-2 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT]
Members {size:7, ver:7} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e this
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
]
05:15:59,508 INFO |testTenNodes_1111Partitions| - [MockServer] hz.clever_hypatia.generic-operation.thread-3 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5707, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5707, alive=true}
05:15:59,508 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.clever_hypatia.generic-operation.thread-3 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT]
Members {size:7, ver:7} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31 this
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
]
05:15:59,509 INFO |testTenNodes_1111Partitions| - [MockServer] hz.musing_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5707, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5707, alive=true}
05:15:59,509 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.musing_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT]
Members {size:7, ver:7} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1 this
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
]
05:15:59,509 INFO |testTenNodes_1111Partitions| - [MockServer] hz.distracted_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5707, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5707, alive=true}
05:15:59,509 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.distracted_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT]
Members {size:7, ver:7} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac this
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
]
05:15:59,509 INFO |testTenNodes_1111Partitions| - [MockServer] hz.modest_hypatia.generic-operation.thread-1 - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5707, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5707, alive=true}
05:15:59,509 INFO |testTenNodes_1111Partitions| - [MockServer] hz.upbeat_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5707, connection: MockConnection{localEndpoint=[127.0.0.1]:5706, remoteEndpoint=[127.0.0.1]:5707, alive=true}
05:15:59,510 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.modest_hypatia.generic-operation.thread-1 - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT]
Members {size:7, ver:7} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a this
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
]
05:15:59,510 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.upbeat_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT]
Members {size:7, ver:7} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79 this
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
]
05:15:59,608 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.kind_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT]
Members {size:7, ver:7} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6 this
]
05:16:00,007 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5707, remoteEndpoint=[127.0.0.1]:5702, alive=true}
05:16:00,007 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5703, connection: MockConnection{localEndpoint=[127.0.0.1]:5707, remoteEndpoint=[127.0.0.1]:5703, alive=true}
05:16:00,007 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5704, connection: MockConnection{localEndpoint=[127.0.0.1]:5707, remoteEndpoint=[127.0.0.1]:5704, alive=true}
05:16:00,007 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5705, connection: MockConnection{localEndpoint=[127.0.0.1]:5707, remoteEndpoint=[127.0.0.1]:5705, alive=true}
05:16:00,007 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5706, connection: MockConnection{localEndpoint=[127.0.0.1]:5707, remoteEndpoint=[127.0.0.1]:5706, alive=true}
05:16:00,007 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5707 is STARTED
05:16:00,007 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [LOCAL] [dev] [4.2.1-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:16:00,007 INFO |testTenNodes_1111Partitions| - [system] testTenNodes_1111Partitions - [127.0.0.1]:5708 [dev] [4.2.1-SNAPSHOT] Hazelcast 4.2.1-SNAPSHOT (20210603 - 1e1a816) starting at [127.0.0.1]:5708
05:16:00,008 INFO |testTenNodes_1111Partitions| - [HealthMonitor] hz.kind_hypatia.HealthMonitor - [127.0.0.1]:5707 [dev] [4.2.1-SNAPSHOT] processors=8, physical.memory.total=377.6G, physical.memory.free=112.9G, swap.space.total=4.0G, swap.space.free=2.7G, heap.memory.used=1.6G, heap.memory.free=390.7M, heap.memory.total=2.0G, heap.memory.max=2.0G, heap.memory.used/total=80.91%, heap.memory.used/max=80.91%, minor.gc.count=8618, minor.gc.time=79659ms, major.gc.count=5, major.gc.time=1280ms, load.process=2.56%, load.system=9.89%, load.systemAverage=9.55, thread.count=685, thread.peakCount=2580, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=20, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
05:16:00,008 INFO |testTenNodes_1111Partitions| - [MetricsConfigHelper] testTenNodes_1111Partitions - [127.0.0.1]:5708 [dev] [4.2.1-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:16:00,011 WARN |testTenNodes_1111Partitions| - [CPSubsystem] testTenNodes_1111Partitions - [127.0.0.1]:5708 [dev] [4.2.1-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:16:00,018 INFO |testTenNodes_1111Partitions| - [Diagnostics] testTenNodes_1111Partitions - [127.0.0.1]:5708 [dev] [4.2.1-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:16:00,018 INFO |testTenNodes_1111Partitions| - [LifecycleService] testTenNodes_1111Partitions - [127.0.0.1]:5708 [dev] [4.2.1-SNAPSHOT] [127.0.0.1]:5708 is STARTING
05:16:00,019 INFO |testTenNodes_1111Partitions| - [MockServer] testTenNodes_1111Partitions - [127.0.0.1]:5708 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5708, remoteEndpoint=[127.0.0.1]:5701, alive=true}
05:16:00,019 INFO |testTenNodes_1111Partitions| - [MockServer] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5708, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5708, alive=true}
05:16:00,020 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.ecstatic_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [4.2.1-SNAPSHOT]
Members {size:8, ver:8} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e this
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
Member [127.0.0.1]:5708 - 1c4b918a-fdab-4c39-a07e-fa8cc3c133d6
]
05:16:00,022 INFO |testTenNodes_1111Partitions| - [MockServer] hz.clever_hypatia.generic-operation.thread-1 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5708, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5708, alive=true}
05:16:00,022 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.clever_hypatia.generic-operation.thread-1 - [127.0.0.1]:5702 [dev] [4.2.1-SNAPSHOT]
Members {size:8, ver:8} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31 this
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
Member [127.0.0.1]:5708 - 1c4b918a-fdab-4c39-a07e-fa8cc3c133d6
]
05:16:00,023 INFO |testTenNodes_1111Partitions| - [MockServer] hz.musing_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5708, connection: MockConnection{localEndpoint=[127.0.0.1]:5703, remoteEndpoint=[127.0.0.1]:5708, alive=true}
05:16:00,023 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.musing_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5703 [dev] [4.2.1-SNAPSHOT]
Members {size:8, ver:8} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1 this
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
Member [127.0.0.1]:5708 - 1c4b918a-fdab-4c39-a07e-fa8cc3c133d6
]
05:16:00,024 INFO |testTenNodes_1111Partitions| - [MockServer] hz.distracted_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5708, connection: MockConnection{localEndpoint=[127.0.0.1]:5704, remoteEndpoint=[127.0.0.1]:5708, alive=true}
05:16:00,024 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.distracted_hypatia.priority-generic-operation.thread-0 - [127.0.0.1]:5704 [dev] [4.2.1-SNAPSHOT]
Members {size:8, ver:8} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.1]:5703 - 821af8ee-97c9-476e-a528-e8fea98704a1
Member [127.0.0.1]:5704 - afcc2f65-4428-44a2-a944-644a5f02c0ac this
Member [127.0.0.1]:5705 - daf4f44e-f82c-488e-bbd2-f41abe51da0a
Member [127.0.0.1]:5706 - fee8af83-682e-423c-b6c8-976b49ecfe79
Member [127.0.0.1]:5707 - c422c74f-cb30-4fc9-8200-13ddb2871bd6
Member [127.0.0.1]:5708 - 1c4b918a-fdab-4c39-a07e-fa8cc3c133d6
]
05:16:00,024 INFO |testTenNodes_1111Partitions| - [MockServer] hz.modest_hypatia.generic-operation.thread-1 - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5708, connection: MockConnection{localEndpoint=[127.0.0.1]:5705, remoteEndpoint=[127.0.0.1]:5708, alive=true}
05:16:00,025 INFO |testTenNodes_1111Partitions| - [MockServer] hz.upbeat_hypatia.generic-operation.thread-1 - [127.0.0.1]:5706 [dev] [4.2.1-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5708, connection: MockConnection{localEndpoint=[127.0.0.1]:5706, remoteEndpoint=[127.0.0.1]:5708, alive=true}
05:16:00,025 INFO |testTenNodes_1111Partitions| - [ClusterService] hz.modest_hypatia.generic-operation.thread-1 - [127.0.0.1]:5705 [dev] [4.2.1-SNAPSHOT]
Members {size:8, ver:8} [
Member [127.0.0.1]:5701 - e988f53c-fa8f-4db1-b71b-f20822b7c41e
Member [127.0.0.1]:5702 - a7e47782-2eff-4950-994c-da6c49e6bd31
Member [127.0.0.
...[truncated 16839708 chars]...
onsCount]=61
[ecstatic_hypatia] [05:17:46,944] [partitionId=476,unit=count,metric=operation.partition.executedOperationsCount]=62
[ecstatic_hypatia] [05:17:46,944] [partitionId=454,unit=count,metric=operation.partition.executedOperationsCount]=68
[ecstatic_hypatia] [05:17:46,944] [partitionId=432,unit=count,metric=operation.partition.executedOperationsCount]=66
[ecstatic_hypatia] [05:17:46,944] [partitionId=410,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=399,unit=count,metric=operation.partition.executedOperationsCount]=69
[ecstatic_hypatia] [05:17:46,944] [partitionId=311,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=333,unit=count,metric=operation.partition.executedOperationsCount]=74
[ecstatic_hypatia] [05:17:46,944] [partitionId=355,unit=count,metric=operation.partition.executedOperationsCount]=73
[ecstatic_hypatia] [05:17:46,944] [partitionId=377,unit=count,metric=operation.partition.executedOperationsCount]=75
[ecstatic_hypatia] [05:17:46,944] [partitionId=278,unit=count,metric=operation.partition.executedOperationsCount]=78
[ecstatic_hypatia] [05:17:46,944] [partitionId=256,unit=count,metric=operation.partition.executedOperationsCount]=77
[ecstatic_hypatia] [05:17:46,944] [partitionId=234,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=212,unit=count,metric=operation.partition.executedOperationsCount]=81
[ecstatic_hypatia] [05:17:46,944] [partitionId=113,unit=count,metric=operation.partition.executedOperationsCount]=84
[ecstatic_hypatia] [05:17:46,944] [partitionId=135,unit=count,metric=operation.partition.executedOperationsCount]=85
[ecstatic_hypatia] [05:17:46,944] [partitionId=157,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=179,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-6,unit=count,metric=operation.thread.priorityPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=751,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=773,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=795,unit=count,metric=operation.partition.executedOperationsCount]=46
[ecstatic_hypatia] [05:17:46,944] [partitionId=850,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=872,unit=count,metric=operation.partition.executedOperationsCount]=27
[ecstatic_hypatia] [05:17:46,944] [partitionId=894,unit=count,metric=operation.partition.executedOperationsCount]=30
[ecstatic_hypatia] [05:17:46,944] [partitionId=531,unit=count,metric=operation.partition.executedOperationsCount]=69
[ecstatic_hypatia] [05:17:46,944] [partitionId=553,unit=count,metric=operation.partition.executedOperationsCount]=60
[ecstatic_hypatia] [05:17:46,944] [partitionId=575,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=597,unit=count,metric=operation.partition.executedOperationsCount]=54
[ecstatic_hypatia] [05:17:46,944] [partitionId=630,unit=count,metric=operation.partition.executedOperationsCount]=51
[ecstatic_hypatia] [05:17:46,944] [partitionId=652,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=674,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=696,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-0,unit=count,metric=operation.thread.normalPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=971,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=993,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-5,unit=count,metric=operation.thread.completedTotalCount]=6676
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=operation.responses.missingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=499,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=477,unit=count,metric=operation.partition.executedOperationsCount]=64
[ecstatic_hypatia] [05:17:46,944] [partitionId=455,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=433,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=411,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=312,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=334,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=356,unit=count,metric=operation.partition.executedOperationsCount]=70
[ecstatic_hypatia] [05:17:46,944] [partitionId=378,unit=count,metric=operation.partition.executedOperationsCount]=71
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=raft.metadata.groups]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=279,unit=count,metric=operation.partition.executedOperationsCount]=75
[ecstatic_hypatia] [05:17:46,944] [partitionId=257,unit=count,metric=operation.partition.executedOperationsCount]=79
[ecstatic_hypatia] [05:17:46,944] [partitionId=235,unit=count,metric=operation.partition.executedOperationsCount]=77
[ecstatic_hypatia] [05:17:46,944] [partitionId=114,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=136,unit=count,metric=operation.partition.executedOperationsCount]=85
[ecstatic_hypatia] [05:17:46,944] [partitionId=158,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=213,unit=count,metric=operation.partition.executedOperationsCount]=79
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-5,unit=count,metric=operation.thread.priorityPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=730,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=752,unit=count,metric=operation.partition.executedOperationsCount]=49
[ecstatic_hypatia] [05:17:46,944] [partitionId=774,unit=count,metric=operation.partition.executedOperationsCount]=45
[ecstatic_hypatia] [05:17:46,944] [partitionId=796,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=851,unit=count,metric=operation.partition.executedOperationsCount]=30
[ecstatic_hypatia] [05:17:46,944] [partitionId=873,unit=count,metric=operation.partition.executedOperationsCount]=34
[ecstatic_hypatia] [05:17:46,944] [partitionId=895,unit=count,metric=operation.partition.executedOperationsCount]=38
[ecstatic_hypatia] [05:17:46,944] [partitionId=532,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=510,unit=count,metric=operation.partition.executedOperationsCount]=65
[ecstatic_hypatia] [05:17:46,944] [partitionId=554,unit=count,metric=operation.partition.executedOperationsCount]=56
[ecstatic_hypatia] [05:17:46,944] [partitionId=576,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=598,unit=count,metric=operation.partition.executedOperationsCount]=54
[ecstatic_hypatia] [05:17:46,944] [partitionId=631,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=653,unit=count,metric=operation.partition.executedOperationsCount]=51
[ecstatic_hypatia] [05:17:46,944] [partitionId=675,unit=count,metric=operation.partition.executedOperationsCount]=57
[ecstatic_hypatia] [05:17:46,944] [partitionId=697,unit=count,metric=operation.partition.executedOperationsCount]=54
[ecstatic_hypatia] [05:17:46,944] [partitionId=950,unit=count,metric=operation.partition.executedOperationsCount]=11
[ecstatic_hypatia] [05:17:46,944] [partitionId=972,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=994,unit=count,metric=operation.partition.executedOperationsCount]=7
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=operation.retryCount]=2
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-1,unit=count,metric=operation.thread.completedRunnableCount]=457
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-6,unit=count,metric=operation.thread.completedTotalCount]=7280
[ecstatic_hypatia] [05:17:46,944] [genericId=2,unit=count,metric=operation.generic.executedOperationsCount]=254
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-5,unit=count,metric=operation.thread.completedOperationBatchCount]=89
[ecstatic_hypatia] [05:17:46,944] [service=hz:core:proxyService,unit=count,metric=event.publicationCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=496,unit=count,metric=operation.partition.executedOperationsCount]=68
[ecstatic_hypatia] [05:17:46,944] [partitionId=474,unit=count,metric=operation.partition.executedOperationsCount]=72
[ecstatic_hypatia] [05:17:46,944] [partitionId=452,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=331,unit=count,metric=operation.partition.executedOperationsCount]=72
[ecstatic_hypatia] [05:17:46,944] [partitionId=353,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=375,unit=count,metric=operation.partition.executedOperationsCount]=70
[ecstatic_hypatia] [05:17:46,944] [partitionId=397,unit=count,metric=operation.partition.executedOperationsCount]=68
[ecstatic_hypatia] [05:17:46,944] [partitionId=430,unit=count,metric=operation.partition.executedOperationsCount]=70
[ecstatic_hypatia] [05:17:46,944] [partitionId=298,unit=count,metric=operation.partition.executedOperationsCount]=74
[ecstatic_hypatia] [05:17:46,944] [partitionId=276,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=254,unit=count,metric=operation.partition.executedOperationsCount]=78
[ecstatic_hypatia] [05:17:46,944] [partitionId=232,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=210,unit=count,metric=operation.partition.executedOperationsCount]=82
[ecstatic_hypatia] [05:17:46,944] [partitionId=199,unit=count,metric=operation.partition.executedOperationsCount]=82
[ecstatic_hypatia] [05:17:46,944] [partitionId=111,unit=count,metric=operation.partition.executedOperationsCount]=87
[ecstatic_hypatia] [05:17:46,944] [partitionId=133,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=155,unit=count,metric=operation.partition.executedOperationsCount]=81
[ecstatic_hypatia] [05:17:46,944] [partitionId=177,unit=count,metric=operation.partition.executedOperationsCount]=80
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-4,unit=count,metric=operation.thread.priorityPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=771,unit=count,metric=operation.partition.executedOperationsCount]=39
[ecstatic_hypatia] [05:17:46,944] [partitionId=793,unit=count,metric=operation.partition.executedOperationsCount]=43
[ecstatic_hypatia] [05:17:46,944] [partitionId=870,unit=count,metric=operation.partition.executedOperationsCount]=25
[ecstatic_hypatia] [05:17:46,944] [partitionId=892,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [unit=ms,metric=gc.unknownTime]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=551,unit=count,metric=operation.partition.executedOperationsCount]=59
[ecstatic_hypatia] [05:17:46,944] [partitionId=573,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=595,unit=count,metric=operation.partition.executedOperationsCount]=56
[ecstatic_hypatia] [05:17:46,944] [partitionId=650,unit=count,metric=operation.partition.executedOperationsCount]=51
[ecstatic_hypatia] [05:17:46,944] [partitionId=672,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=694,unit=count,metric=operation.partition.executedOperationsCount]=51
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-2,unit=count,metric=operation.thread.normalPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=991,unit=count,metric=operation.partition.executedOperationsCount]=22
[ecstatic_hypatia] [05:17:46,944] [unit=ms,metric=cluster.clock.clusterUpTime]=110003
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-0,unit=count,metric=operation.thread.completedRunnableCount]=1347
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-7,unit=count,metric=operation.thread.completedTotalCount]=6710
[ecstatic_hypatia] [05:17:46,944] [genericId=1,unit=count,metric=operation.generic.executedOperationsCount]=271
[ecstatic_hypatia] [05:17:46,944] [service=hz:core:clusterService,unit=count,metric=event.listenerCount]=1
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-6,unit=count,metric=operation.thread.completedOperationBatchCount]=89
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=operation.partitionThreadCount]=8
[ecstatic_hypatia] [05:17:46,944] [partitionId=497,unit=count,metric=operation.partition.executedOperationsCount]=61
[ecstatic_hypatia] [05:17:46,944] [partitionId=475,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=453,unit=count,metric=operation.partition.executedOperationsCount]=66
[ecstatic_hypatia] [05:17:46,944] [partitionId=431,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=398,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=310,unit=count,metric=operation.partition.executedOperationsCount]=75
[ecstatic_hypatia] [05:17:46,944] [partitionId=332,unit=count,metric=operation.partition.executedOperationsCount]=74
[ecstatic_hypatia] [05:17:46,944] [partitionId=354,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=376,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=299,unit=count,metric=operation.partition.executedOperationsCount]=77
[ecstatic_hypatia] [05:17:46,944] [partitionId=277,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=255,unit=count,metric=operation.partition.executedOperationsCount]=76
[ecstatic_hypatia] [05:17:46,944] [partitionId=233,unit=count,metric=operation.partition.executedOperationsCount]=77
[ecstatic_hypatia] [05:17:46,944] [partitionId=211,unit=count,metric=operation.partition.executedOperationsCount]=82
[ecstatic_hypatia] [05:17:46,944] [partitionId=112,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=134,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=156,unit=count,metric=operation.partition.executedOperationsCount]=84
[ecstatic_hypatia] [05:17:46,944] [partitionId=178,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-3,unit=count,metric=operation.thread.priorityPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=750,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=772,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=794,unit=count,metric=operation.partition.executedOperationsCount]=44
[ecstatic_hypatia] [05:17:46,944] [partitionId=871,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=893,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=530,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=552,unit=count,metric=operation.partition.executedOperationsCount]=57
[ecstatic_hypatia] [05:17:46,944] [partitionId=574,unit=count,metric=operation.partition.executedOperationsCount]=56
[ecstatic_hypatia] [05:17:46,944] [partitionId=596,unit=count,metric=operation.partition.executedOperationsCount]=51
[ecstatic_hypatia] [05:17:46,944] [partitionId=651,unit=count,metric=operation.partition.executedOperationsCount]=49
[ecstatic_hypatia] [05:17:46,944] [partitionId=673,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=695,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-1,unit=count,metric=operation.thread.normalPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=970,unit=count,metric=operation.partition.executedOperationsCount]=18
[ecstatic_hypatia] [05:17:46,944] [partitionId=992,unit=count,metric=operation.partition.executedOperationsCount]=20
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=partitions.partitionCount]=1111
[ecstatic_hypatia] [05:17:46,944] [service=hz:core:partitionService,unit=count,metric=event.publicationCount]=396
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-1,unit=count,metric=operation.thread.errorCount]=0
[ecstatic_hypatia] [05:17:46,944] [metric=thread.daemonThreadCount]=476
[ecstatic_hypatia] [05:17:46,944] [unit=bytes,metric=memory.freeNative]=0
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=cluster.size]=5
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-3,unit=count,metric=operation.thread.completedOperationBatchCount]=89
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.priority-generic-operation.thread-0,unit=count,metric=operation.thread.completedRunnableCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-2,unit=count,metric=operation.thread.priorityPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=719,unit=count,metric=operation.partition.executedOperationsCount]=46
[ecstatic_hypatia] [05:17:46,944] [partitionId=818,unit=count,metric=operation.partition.executedOperationsCount]=42
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-4,unit=count,metric=operation.thread.normalPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=917,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=939,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=75,unit=count,metric=operation.partition.executedOperationsCount]=89
[ecstatic_hypatia] [05:17:46,944] [partitionId=53,unit=count,metric=operation.partition.executedOperationsCount]=86
[ecstatic_hypatia] [05:17:46,944] [partitionId=31,unit=count,metric=operation.partition.executedOperationsCount]=91
[ecstatic_hypatia] [05:17:46,944] [partitionId=97,unit=count,metric=operation.partition.executedOperationsCount]=86
[ecstatic_hypatia] [05:17:46,944] [partitionId=1017,unit=count,metric=operation.partition.executedOperationsCount]=7
[ecstatic_hypatia] [05:17:46,944] [partitionId=1039,unit=count,metric=operation.partition.executedOperationsCount]=4
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-0,unit=count,metric=operation.thread.errorCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-4,unit=count,metric=operation.thread.completedOperationBatchCount]=89
[ecstatic_hypatia] [05:17:46,944] [dir=user.home,unit=bytes,metric=file.partition.freeSpace]=15400296448
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-1,unit=count,metric=operation.thread.priorityPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=819,unit=count,metric=operation.partition.executedOperationsCount]=36
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-3,unit=count,metric=operation.thread.normalPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=918,unit=count,metric=operation.partition.executedOperationsCount]=30
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.priority-generic-operation.thread-0,unit=count,metric=operation.thread.completedOperationBatchCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=74,unit=count,metric=operation.partition.executedOperationsCount]=88
[ecstatic_hypatia] [05:17:46,944] [partitionId=52,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=30,unit=count,metric=operation.partition.executedOperationsCount]=90
[ecstatic_hypatia] [05:17:46,944] [partitionId=96,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=1018,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=operation.priorityQueueSize]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-1,unit=count,metric=operation.thread.completedOperationBatchCount]=89
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=proxy.proxyCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-0,unit=count,metric=operation.thread.priorityPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=717,unit=count,metric=operation.partition.executedOperationsCount]=50
[ecstatic_hypatia] [05:17:46,944] [partitionId=739,unit=count,metric=operation.partition.executedOperationsCount]=44
[ecstatic_hypatia] [05:17:46,944] [partitionId=816,unit=count,metric=operation.partition.executedOperationsCount]=41
[ecstatic_hypatia] [05:17:46,944] [partitionId=838,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=519,unit=count,metric=operation.partition.executedOperationsCount]=69
[ecstatic_hypatia] [05:17:46,944] [partitionId=618,unit=count,metric=operation.partition.executedOperationsCount]=55
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-6,unit=count,metric=operation.thread.normalPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=915,unit=count,metric=operation.partition.executedOperationsCount]=33
[ecstatic_hypatia] [05:17:46,944] [partitionId=937,unit=count,metric=operation.partition.executedOperationsCount]=31
[ecstatic_hypatia] [05:17:46,944] [partitionId=959,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=11,unit=count,metric=operation.partition.executedOperationsCount]=91
[ecstatic_hypatia] [05:17:46,944] [partitionId=33,unit=count,metric=operation.partition.executedOperationsCount]=88
[ecstatic_hypatia] [05:17:46,944] [partitionId=55,unit=count,metric=operation.partition.executedOperationsCount]=89
[ecstatic_hypatia] [05:17:46,944] [partitionId=77,unit=count,metric=operation.partition.executedOperationsCount]=89
[ecstatic_hypatia] [05:17:46,944] [partitionId=99,unit=count,metric=operation.partition.executedOperationsCount]=85
[ecstatic_hypatia] [05:17:46,944] [partitionId=1019,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [unit=ns,metric=partitions.elapsedMigrationOperationTime]=0
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=gc.minorCount]=8627
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-2,unit=count,metric=operation.thread.completedOperationBatchCount]=89
[ecstatic_hypatia] [05:17:46,944] [metric=os.freeSwapSpaceSize]=2849095680
[ecstatic_hypatia] [05:17:46,944] [metric=os.totalPhysicalMemorySize]=405449981952
[ecstatic_hypatia] [05:17:46,944] [partitionId=718,unit=count,metric=operation.partition.executedOperationsCount]=46
[ecstatic_hypatia] [05:17:46,944] [partitionId=817,unit=count,metric=operation.partition.executedOperationsCount]=39
[ecstatic_hypatia] [05:17:46,944] [partitionId=839,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=619,unit=count,metric=operation.partition.executedOperationsCount]=51
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-5,unit=count,metric=operation.thread.normalPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=916,unit=count,metric=operation.partition.executedOperationsCount]=24
[ecstatic_hypatia] [05:17:46,944] [partitionId=938,unit=count,metric=operation.partition.executedOperationsCount]=14
[ecstatic_hypatia] [05:17:46,944] [metric=runtime.availableProcessors]=8
[ecstatic_hypatia] [05:17:46,944] [partitionId=10,unit=count,metric=operation.partition.executedOperationsCount]=91
[ecstatic_hypatia] [05:17:46,944] [partitionId=32,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=54,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=76,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=98,unit=count,metric=operation.partition.executedOperationsCount]=86
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-5,unit=count,metric=operation.thread.errorCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.generic-operation.thread-1,unit=count,metric=operation.thread.completedPacketCount]=254
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=gc.unknownCount]=0
[ecstatic_hypatia] [05:17:46,944] [unit=bytes,metric=memory.maxMetadata]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=418,unit=count,metric=operation.partition.executedOperationsCount]=68
[ecstatic_hypatia] [05:17:46,944] [partitionId=319,unit=count,metric=operation.partition.executedOperationsCount]=77
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=partitions.localPartitionCount]=145
[ecstatic_hypatia] [05:17:46,944] [partitionId=715,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=737,unit=count,metric=operation.partition.executedOperationsCount]=41
[ecstatic_hypatia] [05:17:46,944] [partitionId=759,unit=count,metric=operation.partition.executedOperationsCount]=47
[ecstatic_hypatia] [05:17:46,944] [partitionId=814,unit=count,metric=operation.partition.executedOperationsCount]=34
[ecstatic_hypatia] [05:17:46,944] [partitionId=836,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=858,unit=count,metric=operation.partition.executedOperationsCount]=34
[ecstatic_hypatia] [05:17:46,944] [partitionId=517,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=539,unit=count,metric=operation.partition.executedOperationsCount]=68
[ecstatic_hypatia] [05:17:46,944] [partitionId=616,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=638,unit=count,metric=operation.partition.executedOperationsCount]=57
[ecstatic_hypatia] [05:17:46,944] [partitionId=913,unit=count,metric=operation.partition.executedOperationsCount]=32
[ecstatic_hypatia] [05:17:46,944] [partitionId=935,unit=count,metric=operation.partition.executedOperationsCount]=28
[ecstatic_hypatia] [05:17:46,944] [partitionId=957,unit=count,metric=operation.partition.executedOperationsCount]=16
[ecstatic_hypatia] [05:17:46,944] [partitionId=979,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=71,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=93,unit=count,metric=operation.partition.executedOperationsCount]=84
[ecstatic_hypatia] [05:17:46,944] [partitionId=1013,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=1035,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=1057,unit=count,metric=operation.partition.executedOperationsCount]=3
[ecstatic_hypatia] [05:17:46,944] [partitionId=1079,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-4,unit=count,metric=operation.thread.errorCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.generic-operation.thread-2,unit=count,metric=operation.thread.completedPacketCount]=284
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-0,unit=count,metric=operation.thread.completedOperationBatchCount]=89
[ecstatic_hypatia] [05:17:46,944] [partitionId=419,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=716,unit=count,metric=operation.partition.executedOperationsCount]=47
[ecstatic_hypatia] [05:17:46,944] [partitionId=738,unit=count,metric=operation.partition.executedOperationsCount]=41
[ecstatic_hypatia] [05:17:46,944] [partitionId=815,unit=count,metric=operation.partition.executedOperationsCount]=39
[ecstatic_hypatia] [05:17:46,944] [partitionId=837,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=859,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=518,unit=count,metric=operation.partition.executedOperationsCount]=60
[ecstatic_hypatia] [05:17:46,944] [partitionId=617,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=639,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-7,unit=count,metric=operation.thread.normalPendingCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=914,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=936,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=958,unit=count,metric=operation.partition.executedOperationsCount]=11
[ecstatic_hypatia] [05:17:46,944] [partitionId=70,unit=count,metric=operation.partition.executedOperationsCount]=90
[ecstatic_hypatia] [05:17:46,944] [partitionId=92,unit=count,metric=operation.partition.executedOperationsCount]=86
[ecstatic_hypatia] [05:17:46,944] [partitionId=1014,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=1036,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=1058,unit=count,metric=operation.partition.executedOperationsCount]=10
[ecstatic_hypatia] [05:17:46,944] [service=hz:core:partitionService,unit=count,metric=event.listenerCount]=15
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=operation.invocations.pending]=25
[ecstatic_hypatia] [05:17:46,944] [unit=bytes,metric=memory.usedNative]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-3,unit=count,metric=operation.thread.errorCount]=0
[ecstatic_hypatia] [05:17:46,944] [dir=user.home,unit=bytes,metric=file.partition.totalSpace]=21464350720
[ecstatic_hypatia] [05:17:46,944] [unit=ms,metric=cluster.clock.clusterTime]=1622783866940
[ecstatic_hypatia] [05:17:46,944] [partitionId=317,unit=count,metric=operation.partition.executedOperationsCount]=73
[ecstatic_hypatia] [05:17:46,944] [partitionId=339,unit=count,metric=operation.partition.executedOperationsCount]=72
[ecstatic_hypatia] [05:17:46,944] [partitionId=416,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=438,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=218,unit=count,metric=operation.partition.executedOperationsCount]=80
[ecstatic_hypatia] [05:17:46,944] [partitionId=119,unit=count,metric=operation.partition.executedOperationsCount]=86
[ecstatic_hypatia] [05:17:46,944] [partitionId=713,unit=count,metric=operation.partition.executedOperationsCount]=50
[ecstatic_hypatia] [05:17:46,944] [partitionId=735,unit=count,metric=operation.partition.executedOperationsCount]=45
[ecstatic_hypatia] [05:17:46,944] [partitionId=757,unit=count,metric=operation.partition.executedOperationsCount]=42
[ecstatic_hypatia] [05:17:46,944] [partitionId=779,unit=count,metric=operation.partition.executedOperationsCount]=47
[ecstatic_hypatia] [05:17:46,944] [partitionId=812,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=834,unit=count,metric=operation.partition.executedOperationsCount]=38
[ecstatic_hypatia] [05:17:46,944] [partitionId=856,unit=count,metric=operation.partition.executedOperationsCount]=35
[ecstatic_hypatia] [05:17:46,944] [partitionId=878,unit=count,metric=operation.partition.executedOperationsCount]=34
[ecstatic_hypatia] [05:17:46,944] [partitionId=515,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=537,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=559,unit=count,metric=operation.partition.executedOperationsCount]=57
[ecstatic_hypatia] [05:17:46,944] [partitionId=614,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=636,unit=count,metric=operation.partition.executedOperationsCount]=54
[ecstatic_hypatia] [05:17:46,944] [partitionId=658,unit=count,metric=operation.partition.executedOperationsCount]=53
[ecstatic_hypatia] [05:17:46,944] [partitionId=911,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=933,unit=count,metric=operation.partition.executedOperationsCount]=28
[ecstatic_hypatia] [05:17:46,944] [partitionId=955,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=977,unit=count,metric=operation.partition.executedOperationsCount]=23
[ecstatic_hypatia] [05:17:46,944] [partitionId=999,unit=count,metric=operation.partition.executedOperationsCount]=6
[ecstatic_hypatia] [05:17:46,944] [partitionId=73,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=51,unit=count,metric=operation.partition.executedOperationsCount]=90
[ecstatic_hypatia] [05:17:46,944] [partitionId=95,unit=count,metric=operation.partition.executedOperationsCount]=84
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=operation.invocations.normalTimeouts]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=1015,unit=count,metric=operation.partition.executedOperationsCount]=10
[ecstatic_hypatia] [05:17:46,944] [partitionId=1037,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=1059,unit=count,metric=operation.partition.executedOperationsCount]=5
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=operation.failedBackups]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.partition-operation.thread-2,unit=count,metric=operation.thread.errorCount]=0
[ecstatic_hypatia] [05:17:46,944] [thread=hz.ecstatic_hypatia.generic-operation.thread-0,unit=count,metric=operation.thread.completedPacketCount]=271
[ecstatic_hypatia] [05:17:46,944] [partitionId=417,unit=count,metric=operation.partition.executedOperationsCount]=71
[ecstatic_hypatia] [05:17:46,944] [partitionId=318,unit=count,metric=operation.partition.executedOperationsCount]=76
[ecstatic_hypatia] [05:17:46,944] [partitionId=439,unit=count,metric=operation.partition.executedOperationsCount]=72
[ecstatic_hypatia] [05:17:46,944] [partitionId=219,unit=count,metric=operation.partition.executedOperationsCount]=82
[ecstatic_hypatia] [05:17:46,944] [partitionId=714,unit=count,metric=operation.partition.executedOperationsCount]=50
[ecstatic_hypatia] [05:17:46,944] [partitionId=736,unit=count,metric=operation.partition.executedOperationsCount]=48
[ecstatic_hypatia] [05:17:46,944] [partitionId=758,unit=count,metric=operation.partition.executedOperationsCount]=43
[ecstatic_hypatia] [05:17:46,944] [partitionId=813,unit=count,metric=operation.partition.executedOperationsCount]=40
[ecstatic_hypatia] [05:17:46,944] [partitionId=835,unit=count,metric=operation.partition.executedOperationsCount]=37
[ecstatic_hypatia] [05:17:46,944] [partitionId=857,unit=count,metric=operation.partition.executedOperationsCount]=28
[ecstatic_hypatia] [05:17:46,944] [partitionId=879,unit=count,metric=operation.partition.executedOperationsCount]=24
[ecstatic_hypatia] [05:17:46,944] [partitionId=516,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=538,unit=count,metric=operation.partition.executedOperationsCount]=65
[ecstatic_hypatia] [05:17:46,944] [partitionId=615,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=637,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=659,unit=count,metric=operation.partition.executedOperationsCount]=53
[ecstatic_hypatia] [05:17:46,944] [partitionId=912,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=934,unit=count,metric=operation.partition.executedOperationsCount]=30
[ecstatic_hypatia] [05:17:46,944] [partitionId=956,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=978,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [unit=count,metric=partitions.completedMigrations]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=72,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=50,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=94,unit=count,metric=operation.partition.executedOperationsCount]=86
[ecstatic_hypatia] [05:17:46,944] [partitionId=1016,unit=count,metric=operation.partition.executedOperationsCount]=0
[ecstatic_hypatia] [05:17:46,944] [partitionId=1038,unit=count,metric=operation.partition.executedOperationsCount]=3
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster,unit=count,metric=executor.internal.maximumPoolSize]=2
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster,unit=count,metric=executor.internal.completedTasks]=23
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:event,unit=count,metric=executor.internal.maximumPoolSize]=1
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:event,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:event,unit=count,metric=executor.internal.completedTasks]=90
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:event,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:event,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled:cqc:d1dcd21e-2d41-44c9-9be6-f09da504606d,unit=count,metric=executor.internal.maximumPoolSize]=1
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled:cqc:d1dcd21e-2d41-44c9-9be6-f09da504606d,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled:cqc:d1dcd21e-2d41-44c9-9be6-f09da504606d,unit=count,metric=executor.internal.completedTasks]=22
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled:cqc:d1dcd21e-2d41-44c9-9be6-f09da504606d,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled:cqc:d1dcd21e-2d41-44c9-9be6-f09da504606d,unit=count,metric=executor.internal.remainingQueueCapacity]=10000
[ecstatic_hypatia] [05:17:46,944] [name=hz:offloadable,unit=count,metric=executor.internal.maximumPoolSize]=8
[ecstatic_hypatia] [05:17:46,944] [name=hz:offloadable,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:offloadable,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:offloadable,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:offloadable,unit=count,metric=executor.internal.remainingQueueCapacity]=100000
[ecstatic_hypatia] [05:17:46,944] [name=hz:client,unit=count,metric=executor.internal.maximumPoolSize]=8
[ecstatic_hypatia] [05:17:46,944] [name=hz:client,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:client,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:client,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:client,unit=count,metric=executor.internal.remainingQueueCapacity]=800000
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:mastership,unit=count,metric=executor.internal.maximumPoolSize]=1
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:mastership,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:mastership,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:mastership,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:mastership,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
[ecstatic_hypatia] [05:17:46,944] [name=hz:async,unit=count,metric=executor.internal.maximumPoolSize]=8
[ecstatic_hypatia] [05:17:46,944] [name=hz:async,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:async,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:async,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:async,unit=count,metric=executor.internal.remainingQueueCapacity]=100000
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-blocking-tasks,unit=count,metric=executor.internal.maximumPoolSize]=160
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-blocking-tasks,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-blocking-tasks,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-blocking-tasks,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-blocking-tasks,unit=count,metric=executor.internal.remainingQueueCapacity]=800000
[ecstatic_hypatia] [05:17:46,944] [name=hz:mc,unit=count,metric=executor.internal.maximumPoolSize]=2
[ecstatic_hypatia] [05:17:46,944] [name=hz:mc,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:mc,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:mc,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:mc,unit=count,metric=executor.internal.remainingQueueCapacity]=2000
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-query,unit=count,metric=executor.internal.maximumPoolSize]=8
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-query,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-query,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-query,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:client-query,unit=count,metric=executor.internal.remainingQueueCapacity]=800000
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled,unit=count,metric=executor.internal.maximumPoolSize]=16
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled,unit=count,metric=executor.internal.completedTasks]=1991
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:scheduled,unit=count,metric=executor.internal.remainingQueueCapacity]=800000
[ecstatic_hypatia] [05:17:46,944] [name=hz:system,unit=count,metric=executor.internal.maximumPoolSize]=8
[ecstatic_hypatia] [05:17:46,944] [name=hz:system,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:system,unit=count,metric=executor.internal.completedTasks]=38
[ecstatic_hypatia] [05:17:46,944] [name=hz:system,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:system,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
[ecstatic_hypatia] [05:17:46,944] [name=MetricsPublisher,unit=count,metric=executor.internal.maximumPoolSize]=16
[ecstatic_hypatia] [05:17:46,944] [name=MetricsPublisher,unit=count,metric=executor.internal.poolSize]=1
[ecstatic_hypatia] [05:17:46,944] [name=MetricsPublisher,unit=count,metric=executor.internal.completedTasks]=109
[ecstatic_hypatia] [05:17:46,944] [name=MetricsPublisher,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=MetricsPublisher,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:splitbrain,unit=count,metric=executor.internal.maximumPoolSize]=2
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:splitbrain,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:splitbrain,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:splitbrain,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:splitbrain,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:version:auto:upgrade,unit=count,metric=executor.internal.maximumPoolSize]=1
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:version:auto:upgrade,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:version:auto:upgrade,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:version:auto:upgrade,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:cluster:version:auto:upgrade,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
[ecstatic_hypatia] [05:17:46,944] [name=hz:query,unit=count,metric=executor.internal.maximumPoolSize]=16
[ecstatic_hypatia] [05:17:46,944] [name=hz:query,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:query,unit=count,metric=executor.internal.completedTasks]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:query,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:query,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
[ecstatic_hypatia] [05:17:46,944] [name=hz:CRDTReplicationMigration,unit=count,metric=executor.internal.maximumPoolSize]=16
[ecstatic_hypatia] [05:17:46,944] [name=hz:CRDTReplicationMigration,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:CRDTReplicationMigration,unit=count,metric=executor.internal.completedTasks]=125
[ecstatic_hypatia] [05:17:46,944] [name=hz:CRDTReplicationMigration,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:CRDTReplicationMigration,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
[ecstatic_hypatia] [05:17:46,944] [name=hz:io,unit=count,metric=executor.internal.maximumPoolSize]=16
[ecstatic_hypatia] [05:17:46,944] [name=hz:io,unit=count,metric=executor.internal.poolSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:io,unit=count,metric=executor.internal.completedTasks]=26
[ecstatic_hypatia] [05:17:46,944] [name=hz:io,unit=count,metric=executor.internal.queueSize]=0
[ecstatic_hypatia] [05:17:46,944] [name=hz:io,unit=count,metric=executor.internal.remainingQueueCapacity]=2147483647
```
| non_infrastructure | com hazelcast partition partitiondistributiontest testtennodes z commit failed on sonar build oracle jdk stacktrace java lang assertionerror at org junit assert fail assert java at org junit assert asserttrue assert java at org junit assert assertnotnull assert java at org junit assert assertnotnull assert java at com hazelcast partition partitiondistributiontest testpartitiondistribution partitiondistributiontest java at com hazelcast partition partitiondistributiontest testpartitiondistribution partitiondistributiontest java at com hazelcast partition partitiondistributiontest testtennodes partitiondistributiontest java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at java base java util concurrent futuretask run futuretask java at java base java lang thread run thread java standard output info testtennodes testtennodes overridden metrics configuration with system property hazelcast metrics collection frequency metricsconfig collectionfrequencyseconds info testtennodes testtennodes hazelcast snapshot starting at info testtennodes testtennodes collecting debug metrics and sending to diagnostics is enabled warn testtennodes testtennodes cp subsystem is 
not enabled cp data structures will operate in unsafe mode please note that unsafe mode will not provide strong consistency guarantees info testtennodes testtennodes diagnostics disabled to enable add dhazelcast diagnostics enabled true to the jvm arguments info testtennodes testtennodes is starting info testtennodes testtennodes members size ver member this info testtennodes testtennodes is started info testtennodes testtennodes overridden metrics configuration with system property hazelcast metrics collection frequency metricsconfig collectionfrequencyseconds info testtennodes testtennodes hazelcast snapshot starting at info testtennodes testtennodes collecting debug metrics and sending to diagnostics is enabled warn testtennodes testtennodes cp subsystem is not enabled cp data structures will operate in unsafe mode please note that unsafe mode will not provide strong consistency guarantees info testtennodes testtennodes diagnostics disabled to enable add dhazelcast diagnostics enabled true to the jvm arguments info testtennodes testtennodes is starting info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz ecstatic hypatia priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz ecstatic hypatia priority generic operation thread members size ver member this member info testtennodes hz clever hypatia priority generic operation thread members size ver member member this info testtennodes testtennodes is started info testtennodes testtennodes overridden metrics configuration with system property hazelcast metrics collection frequency metricsconfig collectionfrequencyseconds info testtennodes testtennodes hazelcast snapshot starting at info testtennodes testtennodes collecting debug metrics and sending to diagnostics is enabled warn testtennodes testtennodes cp subsystem is not 
enabled cp data structures will operate in unsafe mode please note that unsafe mode will not provide strong consistency guarantees info testtennodes testtennodes diagnostics disabled to enable add dhazelcast diagnostics enabled true to the jvm arguments info testtennodes testtennodes is starting info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz ecstatic hypatia priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz ecstatic hypatia priority generic operation thread members size ver member this member member info testtennodes hz clever hypatia priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz clever hypatia priority generic operation thread members size ver member member this member info testtennodes hz musing hypatia priority generic operation thread members size ver member member member this info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes testtennodes is started info testtennodes testtennodes overridden metrics configuration with system property hazelcast metrics collection frequency metricsconfig collectionfrequencyseconds info testtennodes testtennodes hazelcast snapshot starting at info testtennodes testtennodes collecting debug metrics and sending to diagnostics is enabled warn testtennodes testtennodes cp subsystem is not enabled cp data structures will operate in unsafe mode please note that unsafe mode will not provide strong consistency guarantees info testtennodes testtennodes diagnostics disabled to enable add dhazelcast diagnostics enabled true to the jvm arguments info testtennodes testtennodes is starting info testtennodes testtennodes created connection to 
endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz ecstatic hypatia priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz ecstatic hypatia priority generic operation thread members size ver member this member member member info testtennodes hz clever hypatia priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz musing hypatia priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz clever hypatia priority generic operation thread members size ver member member this member member info testtennodes hz musing hypatia priority generic operation thread members size ver member member member this member info testtennodes hz distracted hypatia priority generic operation thread members size ver member member member member this info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes testtennodes is started info testtennodes testtennodes overridden metrics configuration with system property hazelcast metrics collection frequency metricsconfig collectionfrequencyseconds info testtennodes testtennodes hazelcast snapshot starting at info testtennodes hz distracted hypatia healthmonitor processors physical memory total physical memory free swap space total swap space free heap memory used heap memory free heap memory total heap memory max heap memory used total heap memory used max minor gc count minor gc time major gc count major gc time load process load system load systemaverage thread count thread peakcount cluster timediff event q 
size executor q async size executor q client size executor q client query size executor q client blocking size executor q query size executor q scheduled size executor q io size executor q system size executor q operations size executor q priorityoperation size operations completed count executor q mapload size executor q maploadallkeys size executor q cluster size executor q response size operations running count operations pending invocations percentage operations pending invocations count proxy count clientendpoint count connection active count client connection count connection count info testtennodes testtennodes collecting debug metrics and sending to diagnostics is enabled warn testtennodes testtennodes cp subsystem is not enabled cp data structures will operate in unsafe mode please note that unsafe mode will not provide strong consistency guarantees info testtennodes testtennodes diagnostics disabled to enable add dhazelcast diagnostics enabled true to the jvm arguments info testtennodes testtennodes is starting info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz ecstatic hypatia priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz ecstatic hypatia priority generic operation thread members size ver member this member member member member info testtennodes hz clever hypatia priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz distracted hypatia priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz clever hypatia priority generic operation thread members size ver member member this member member member info testtennodes hz musing hypatia generic operation thread created 
connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz distracted hypatia priority generic operation thread members size ver member member member member this member info testtennodes hz musing hypatia generic operation thread members size ver member member member this member member info testtennodes hz modest hypatia generic operation thread members size ver member member member member member this info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes testtennodes is started info testtennodes testtennodes overridden metrics configuration with system property hazelcast metrics collection frequency metricsconfig collectionfrequencyseconds info testtennodes testtennodes hazelcast snapshot starting at info testtennodes hz modest hypatia healthmonitor processors physical memory total physical memory free swap space total swap space free heap memory used heap memory free heap memory total heap memory max heap memory used total heap memory used max minor gc count minor gc time major gc count major gc time load process load system load systemaverage thread count thread peakcount cluster timediff event q size executor q async size executor q client size executor q client query size executor q client blocking size executor q query size executor q scheduled size executor q io size executor q system size executor q operations size executor q priorityoperation size operations completed count executor q mapload size executor q maploadallkeys size executor q cluster size executor q response size operations running count operations pending invocations percentage operations pending 
invocations count proxy count clientendpoint count connection active count client connection count connection count info testtennodes testtennodes collecting debug metrics and sending to diagnostics is enabled warn testtennodes testtennodes cp subsystem is not enabled cp data structures will operate in unsafe mode please note that unsafe mode will not provide strong consistency guarantees info testtennodes testtennodes diagnostics disabled to enable add dhazelcast diagnostics enabled true to the jvm arguments info testtennodes testtennodes is starting info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz ecstatic hypatia priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz ecstatic hypatia priority generic operation thread members size ver member this member member member member member info testtennodes hz clever hypatia generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz musing hypatia generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz clever hypatia generic operation thread members size ver member member this member member member member info testtennodes hz distracted hypatia priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz musing hypatia generic operation thread members size ver member member member this member member member info testtennodes hz modest hypatia priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz distracted hypatia priority generic operation thread members size ver member member 
member member this member member info testtennodes hz modest hypatia priority generic operation thread members size ver member member member member member this member info testtennodes hz upbeat hypatia generic operation thread members size ver member member member member member member this info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes testtennodes is started info testtennodes testtennodes overridden metrics configuration with system property hazelcast metrics collection frequency metricsconfig collectionfrequencyseconds info testtennodes testtennodes hazelcast snapshot starting at info testtennodes hz upbeat hypatia healthmonitor processors physical memory total physical memory free swap space total swap space free heap memory used heap memory free heap memory total heap memory max heap memory used total heap memory used max minor gc count minor gc time major gc count major gc time load process load system load systemaverage thread count thread peakcount cluster timediff event q size executor q async size executor q client size executor q client query size executor q client blocking size executor q query size executor q scheduled size executor q io size executor q system size executor q operations size executor q priorityoperation size operations completed count executor q mapload size executor q maploadallkeys size executor q cluster size executor q response size operations running count operations pending invocations percentage operations pending invocations count proxy count 
clientendpoint count connection active count client connection count connection count info testtennodes testtennodes collecting debug metrics and sending to diagnostics is enabled warn testtennodes testtennodes cp subsystem is not enabled cp data structures will operate in unsafe mode please note that unsafe mode will not provide strong consistency guarantees info testtennodes testtennodes diagnostics disabled to enable add dhazelcast diagnostics enabled true to the jvm arguments info testtennodes testtennodes is starting info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz ecstatic hypatia generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz ecstatic hypatia generic operation thread members size ver member this member member member member member member info testtennodes hz clever hypatia generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz clever hypatia generic operation thread members size ver member member this member member member member member info testtennodes hz musing hypatia priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz musing hypatia priority generic operation thread members size ver member member member this member member member member info testtennodes hz distracted hypatia priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz distracted hypatia priority generic operation thread members size ver member member member member this member member member info testtennodes hz modest hypatia generic operation thread created connection to endpoint connection mockconnection localendpoint 
remoteendpoint alive true info testtennodes hz upbeat hypatia priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz modest hypatia generic operation thread members size ver member member member member member this member member info testtennodes hz upbeat hypatia priority generic operation thread members size ver member member member member member member this member info testtennodes hz kind hypatia priority generic operation thread members size ver member member member member member member member this info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes testtennodes is started info testtennodes testtennodes overridden metrics configuration with system property hazelcast metrics collection frequency metricsconfig collectionfrequencyseconds info testtennodes testtennodes hazelcast snapshot starting at info testtennodes hz kind hypatia healthmonitor processors physical memory total physical memory free swap space total swap space free heap memory used heap memory free heap memory total heap memory max heap memory used total heap memory used max minor gc count minor gc time major gc count major gc time load process load system load systemaverage thread count thread peakcount cluster timediff event q size executor q async size executor q client size executor q client query size 
executor q client blocking size executor q query size executor q scheduled size executor q io size executor q system size executor q operations size executor q priorityoperation size operations completed count executor q mapload size executor q maploadallkeys size executor q cluster size executor q response size operations running count operations pending invocations percentage operations pending invocations count proxy count clientendpoint count connection active count client connection count connection count info testtennodes testtennodes collecting debug metrics and sending to diagnostics is enabled warn testtennodes testtennodes cp subsystem is not enabled cp data structures will operate in unsafe mode please note that unsafe mode will not provide strong consistency guarantees info testtennodes testtennodes diagnostics disabled to enable add dhazelcast diagnostics enabled true to the jvm arguments info testtennodes testtennodes is starting info testtennodes testtennodes created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz ecstatic hypatia priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz ecstatic hypatia priority generic operation thread members size ver member this member member member member member member member fdab info testtennodes hz clever hypatia generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz clever hypatia generic operation thread members size ver member member this member member member member member member fdab info testtennodes hz musing hypatia priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz musing hypatia priority generic operation thread members size ver member member member this member 
member member member member fdab info testtennodes hz distracted hypatia priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz distracted hypatia priority generic operation thread members size ver member member member member this member member member member fdab info testtennodes hz modest hypatia generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz upbeat hypatia generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info testtennodes hz modest hypatia generic operation thread members size ver member member member onscount | 0 |
26,472 | 20,146,378,032 | IssuesEvent | 2022-02-09 08:00:21 | gammapy/gammapy | https://api.github.com/repos/gammapy/gammapy | closed | Speed up CI builds and tests | infrastructure tests | This is a reminder issue to speed up the execution of the CI builds and tests, maybe by changing to `mambaforge` as a base environment or relying on pip wheels for the setup. | 1.0 | infrastructure | 1
1,547 | 3,265,616,239 | IssuesEvent | 2015-10-22 16:59:35 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Question about running unit tests | Area-Infrastructure Question | I was following this guide (https://github.com/dotnet/roslyn/wiki/Building%20Testing%20and%20Debugging)
I was stuck in this part.
~~~
Running Unit Tests
To run the unit tests:
msbuild /v:m /m BuildAndTest.proj /p:PublicBuild=true
This command will build and run all of the code / tests which are supported on the current public build of Visual Studio 2015.
To debug suites use the xunit.console.x86.exe runner command which is included in the xunit.runners NuGet package. Make sure to use a 2.0 version of the runner.
xunit.console.x86.exe [UnitTestDll] -noshadow
~~~
On both Windows and Mac, typing the above commands failed with an error that there is no command named "msbuild". I also searched for the xunit.console file, but it wasn't there.
~~~
~/roslyn - [master●] » find . -iname 'xunit.console*'
~/roslyn - [master●] » find . -iname 'xunit*'
./Binaries/Debug/xunit.dll
./packages/xunit.1.9.2
./packages/xunit.1.9.2/lib/net20/xunit.dll
./packages/xunit.1.9.2/lib/net20/xunit.dll.tdnet
./packages/xunit.1.9.2/lib/net20/xunit.runner.msbuild.dll
./packages/xunit.1.9.2/lib/net20/xunit.runner.tdnet.dll
./packages/xunit.1.9.2/lib/net20/xunit.runner.utility.dll
./packages/xunit.1.9.2/lib/net20/xunit.xml
./packages/xunit.1.9.2/xunit.1.9.2.nupkg
./packages/xunit.2.0.0-alpha-build2576
./packages/xunit.2.0.0-alpha-build2576/xunit.2.0.0-alpha-build2576.nupkg
./packages/xunit.abstractions.2.0.0-alpha-build2576
./packages/xunit.abstractions.2.0.0-alpha-build2576/lib/net35/xunit.abstractions.dll
./packages/xunit.abstractions.2.0.0-alpha-build2576/lib/net35/xunit.abstractions.xml
./packages/xunit.abstractions.2.0.0-alpha-build2576/xunit.abstractions.2.0.0-alpha-build2576.nupkg
./packages/xunit.assert.2.0.0-alpha-build2576
./packages/xunit.assert.2.0.0-alpha-build2576/lib/net45/xunit2.assert.dll
./packages/xunit.assert.2.0.0-alpha-build2576/lib/net45/xunit2.assert.xml
./packages/xunit.assert.2.0.0-alpha-build2576/xunit.assert.2.0.0-alpha-build2576.nupkg
./packages/xunit.core.2.0.0-alpha-build2576
./packages/xunit.core.2.0.0-alpha-build2576/lib/net45/xunit.runner.tdnet.dll
./packages/xunit.core.2.0.0-alpha-build2576/lib/net45/xunit.runner.utility.dll
./packages/xunit.core.2.0.0-alpha-build2576/lib/net45/xunit.runner.utility.xml
./packages/xunit.core.2.0.0-alpha-build2576/lib/net45/xunit2.dll
./packages/xunit.core.2.0.0-alpha-build2576/lib/net45/xunit2.dll.tdnet
./packages/xunit.core.2.0.0-alpha-build2576/lib/net45/xunit2.xml
./packages/xunit.core.2.0.0-alpha-build2576/xunit.core.2.0.0-alpha-build2576.nupkg
./packages/xunit.extensions.1.9.2
./packages/xunit.extensions.1.9.2/lib/net20/xunit.extensions.dll
./packages/xunit.extensions.1.9.2/lib/net20/xunit.extensions.xml
./packages/xunit.extensions.1.9.2/xunit.extensions.1.9.2.nupkg
~~~ | 1.0 | infrastructure | 1
9,157 | 7,844,570,953 | IssuesEvent | 2018-06-19 10:02:37 | saros-project/saros | https://api.github.com/repos/saros-project/saros | closed | Create nightly stf self test travis job | infrastructure | Execute stf self test in the same environment as the stf saros tests. | 1.0 | infrastructure | 1
26,670 | 20,424,064,675 | IssuesEvent | 2022-02-24 00:37:00 | google/oss-fuzz | https://api.github.com/repos/google/oss-fuzz | opened | CoverageReportIntegrationTest::test_coverage_report is failing | infrastructure priority | ------------------------------ Captured log call -------------------------------
ERROR root:github_api.py:75 Request to https://api.github.com/repos/None/None/actions/artifacts?per_page=100&page=1 failed. Code: 401. Response: {'message': 'Bad credentials', 'documentation_url': 'https://docs.github.com/rest'}
ERROR root:clusterfuzz_deployment.py:143 Failed to download corpus for target: do_stuff_fuzzer. Error: Github API request failed.
=========================== short test summary info ============================
FAILED infra/cifuzz/run_fuzzers_test.py::CoverageReportIntegrationTest::test_coverage_report | 1.0 | infrastructure | 1
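The `repos/None/None` request in the log above suggests the repository owner and name were unresolved when the artifacts URL was built. Below is a minimal, hypothetical Python sketch (not the actual CIFuzz code; the variable names are assumptions) of how an unset value leaks into the URL:

```python
def artifacts_url(env):
    """Build the GitHub Actions artifacts URL from environment-style settings.

    If the repository variables are missing, str(None) leaks into the path,
    producing the "repos/None/None" request seen in the failing log above.
    """
    owner = env.get("REPO_OWNER")  # hypothetical variable names
    repo = env.get("REPO_NAME")
    return (f"https://api.github.com/repos/{owner}/{repo}"
            "/actions/artifacts?per_page=100&page=1")

# Unset variables produce a malformed URL:
print(artifacts_url({}))
# prints https://api.github.com/repos/None/None/actions/artifacts?per_page=100&page=1
print(artifacts_url({"REPO_OWNER": "google", "REPO_NAME": "oss-fuzz"}))
```

Validating such settings up front (and failing with a clear message) would surface the missing credentials/repository configuration instead of a confusing 401 from the API.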
29,912 | 24,389,059,511 | IssuesEvent | 2022-10-04 13:58:56 | opendatahub-io/odh-dashboard | https://api.github.com/repos/opendatahub-io/odh-dashboard | closed | [Bug]: Configmap feature flag is overriden by the dashboard | kind/bug infrastructure priority/normal | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When trying to disable one application in the `odh-enabled--applications-config`, it will be overridden if the kfdef of that application exists. This is due to this logic: https://github.com/opendatahub-io/odh-dashboard/blob/main/backend/src/utils/resourceUtils.ts#L274
Here it checks whether the **application's kfdef** exists and, if so, enables it and updates the configmap; so if you had that application disabled, it will be reverted back to enabled.
### Expected Behavior
When you change a value in the configmap, the state of the application changes and reflects the status that you've selected.
### Steps To Reproduce
1. Open `odh-enabled--applications-config`.
2. Change `jupyter` to `false` (or straight up delete the configmap).
3. Reload the dashboard and wait for the watcher to be triggered it again (around 2 mins).
4. The value in the configmap will be reverted to `true`.
### Workaround (if any)
_No response_
### OpenShift Infrastructure Version
_No response_
### Openshift Version
_No response_
### What browsers are you seeing the problem on?
_No response_
### Open Data Hub Version
_No response_
### Relevant log output
_No response_ | 1.0 | infrastructure | 1
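The override described in this report can be illustrated with a small sketch. This is hypothetical Python, not the actual TypeScript in `backend/src/utils/resourceUtils.ts`; it only models the described watcher behavior of forcing any application with an existing kfdef back to enabled:

```python
def reconcile(enabled_apps, existing_kfdefs):
    """Model of the watcher step described above: every application whose
    kfdef exists is written back as enabled, clobbering a manual False."""
    updated = dict(enabled_apps)
    for app in existing_kfdefs:
        updated[app] = True  # reverts a user-set False on the next cycle
    return updated

configmap = {"jupyter": False}  # user disabled jupyter in the ConfigMap
print(reconcile(configmap, ["jupyter"]))  # prints {'jupyter': True}
```

A fix along these lines would treat the ConfigMap value as authoritative when it is already present, and only default to enabled when no value exists yet.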
393,736 | 27,007,048,187 | IssuesEvent | 2023-02-10 12:33:56 | haskell/cabal | https://api.github.com/repos/haskell/cabal | closed | The `remote-repo-cache` option is implemented but documented as not implemented | type: bug documentation | **Describe the bug**
[The documentation for the `remote-repo-cache` configuration option](https://cabal.readthedocs.io/en/3.8/cabal-project.html#cfg-field-remote-repo-cache) says:
> ~~The location where packages downloaded from remote repositories will be cached.~~ Not implemented yet.
However, the option does appear to be implemented. Observe a project configured to use an alternate package repo cache directory:
```sh
❯ pwd
/home/cleanroom/chk.hs
❯ cat cabal.project
packages: ./
❯ cat cabal.project.local
store-dir: /home/cleanroom/chk.hs/.cabal
remote-repo-cache: /home/cleanroom/chk.hs/.cabal/packages
```
And where no cache currently exists, either in the default or configured location:
```sh
❯ stat ~/.cabal/packages
stat: cannot statx '/home/cleanroom/.cabal/packages': No such file or directory
❯ stat .cabal/packages
stat: cannot statx '.cabal/packages': No such file or directory
```
A package update will populate the configured cache location:
```sh
❯ cabal update hackage.haskell.org
Downloading the latest package list from hackage.haskell.org
Package list of hackage.haskell.org has been updated.
The index-state is set to 2023-02-07T12:42:11Z.
❯ find .cabal/packages
.cabal/packages
.cabal/packages/hackage.haskell.org
.cabal/packages/hackage.haskell.org/hackage-security-lock
.cabal/packages/hackage.haskell.org/root.json
.cabal/packages/hackage.haskell.org/01-index.timestamp
.cabal/packages/hackage.haskell.org/timestamp.json
.cabal/packages/hackage.haskell.org/snapshot.json
.cabal/packages/hackage.haskell.org/mirrors.json
.cabal/packages/hackage.haskell.org/01-index.tar.gz
.cabal/packages/hackage.haskell.org/01-index.tar
.cabal/packages/hackage.haskell.org/01-index.tar.idx
.cabal/packages/hackage.haskell.org/01-index.cache
```
**Expected behavior**
The implemented behavior is what I would expect from the documentation that is currently struck out.
**System information**
- NixOS 22.11
- cabal 3.8.1.0
- ghc 8.10.7
| 1.0 | non_infrastructure | 0
285,128 | 24,644,569,354 | IssuesEvent | 2022-10-17 14:02:03 | apache/dolphinscheduler | https://api.github.com/repos/apache/dolphinscheduler | closed | [Migrate][Test] Block explicit import of jUnit 4 library | improvement backend test | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
* As we have migrated all unit tests from jUnit 4 -> jUnit5 and removed related jUnit 4 library imports, we need to add a `Spotless` step to check and block such imports like `org.junit.xxx` in the future. Contributors are supposed to import `org.junit.jupiter.xxx` instead.
* This issue is part of #12301.
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| 1.0 | non_infrastructure | 0
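The rule this issue asks Spotless to enforce — block `org.junit.*` imports while allowing the jUnit 5 packages — can be sketched as a standalone check. This is an illustrative Python script, not the actual Spotless configuration; the allowed-prefix list (`jupiter`, `platform`) is an assumption:

```python
import re

# Allow jUnit 5 packages; flag any other org.junit import (jUnit 4 style).
BLOCKED = re.compile(
    r"^\s*import\s+(?:static\s+)?org\.junit\.(?!jupiter\.|platform\.)")

def blocked_imports(java_source):
    """Return the import lines a Spotless-style rule would reject."""
    return [line for line in java_source.splitlines() if BLOCKED.match(line)]

sample = """\
import org.junit.Test;
import org.junit.jupiter.api.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.jupiter.api.Assertions.assertEquals;
"""
print(blocked_imports(sample))
# prints ['import org.junit.Test;', 'import static org.junit.Assert.assertEquals;']
```

Contributors would then replace each flagged line with its `org.junit.jupiter` equivalent, matching the migration described above.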
30,502 | 24,875,704,412 | IssuesEvent | 2022-10-27 18:53:43 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Review .NET 5 usages | area-Infrastructure in-pr | .NET 5 has reached its end of life today. We should review its remaining usages in the runtime repo and rev them to .NET 6 or 7 (preferably using a variable like `$(NetCoreAppCurrent)` where possible).
```sh
# net 5 in file contents
% git grep -Eli 'net(coreapp)?5'
docs/design/coreclr/jit/viewing-jit-dumps.md
docs/workflow/building/libraries/cross-building.md
docs/workflow/debugging/coreclr/debugging.md
docs/workflow/testing/coreclr/running-aspnet-benchmarks-with-crossgen2.md
docs/workflow/testing/host/testing.md
eng/Signing.props
eng/common/SetupNugetSources.ps1
eng/common/SetupNugetSources.sh
eng/testing/performance/performance-setup.ps1
eng/testing/performance/performance-setup.sh
src/coreclr/tools/r2rtest/Commands/CompileNugetCommand.cs
src/installer/managed/Microsoft.NET.HostModel/Bundle/Manifest.cs
src/installer/managed/Microsoft.NET.HostModel/Bundle/TargetInfo.cs
src/installer/tests/Assets/TestProjects/PortableTestApp/PortableTestApp.csproj
src/installer/tests/Assets/TestProjects/StandaloneTestApp/StandaloneTestApp.csproj
src/installer/tests/HostActivation.Tests/FrameworkResolution/MultipleHives.cs
src/installer/tests/TestUtils/Constants.cs
src/libraries/Common/tests/StaticTestGenerator/Program.cs
src/libraries/Common/tests/StaticTestGenerator/README.md
src/libraries/Microsoft.Extensions.DependencyInjection.Abstractions/src/AsyncServiceScope.cs
src/libraries/Microsoft.Extensions.DependencyModel/tests/DependencyContextJsonReaderTest.cs
src/libraries/Microsoft.Extensions.FileProviders.Composite/tests/MockFileProvider.cs
src/libraries/Microsoft.Extensions.Hosting/tests/FunctionalTests/IntegrationTesting/src/Common/Tfm.cs
src/libraries/Microsoft.Extensions.Hosting/tests/FunctionalTests/ShutdownTests.cs
src/libraries/System.Diagnostics.DiagnosticSource/tests/ActivityTests.cs
src/libraries/System.Formats.Cbor/src/CompatibilitySuppressions.xml
src/libraries/System.Private.CoreLib/src/System.Private.CoreLib.Shared.projitems
src/libraries/System.Private.CoreLib/src/System/Random.Net5CompatImpl.cs
src/libraries/System.Private.CoreLib/src/System/Random.cs
src/libraries/System.Runtime.InteropServices/tests/LibraryImportGenerator.UnitTests/AdditionalAttributesOnStub.cs
src/libraries/System.Runtime.InteropServices/tests/LibraryImportGenerator.UnitTests/Compiles.cs
src/libraries/System.Runtime.InteropServices/tests/LibraryImportGenerator.UnitTests/TestUtils.cs
src/libraries/System.Security.Permissions/src/CompatibilitySuppressions.xml
src/libraries/System.Text.Encoding.CodePages/src/Data/Tools/EncodingDataGenerator.csproj
src/libraries/System.Text.Json/src/System/Text/Json/Serialization/JsonConverterOfT.WriteCore.cs
src/libraries/testPackages/testPackages.proj
src/native/corehost/bundle/header.h
src/tests/BuildWasmApps/Wasm.Build.Tests/BlazorWasmTests.cs
src/tests/BuildWasmApps/testassets/Blazor_net50/Blazor_net50.csproj
src/tests/BuildWasmApps/testassets/Blazor_net50/Program.cs
src/tests/BuildWasmApps/testassets/Blazor_net50/Shared/NavMenu.razor
src/tests/BuildWasmApps/testassets/Blazor_net50/_Imports.razor
src/workloads/workloads.csproj
# net 5 in file names
$ git ls-files | grep -Ei 'net(coreapp)?5'
src/libraries/System.Private.CoreLib/src/System/Random.Net5CompatImpl.cs
src/tests/BuildWasmApps/testassets/Blazor_net50/App.razor
src/tests/BuildWasmApps/testassets/Blazor_net50/Blazor_net50.csproj
src/tests/BuildWasmApps/testassets/Blazor_net50/Pages/Counter.razor
src/tests/BuildWasmApps/testassets/Blazor_net50/Pages/FetchData.razor
src/tests/BuildWasmApps/testassets/Blazor_net50/Pages/Index.razor
src/tests/BuildWasmApps/testassets/Blazor_net50/Program.cs
src/tests/BuildWasmApps/testassets/Blazor_net50/Shared/MainLayout.razor
src/tests/BuildWasmApps/testassets/Blazor_net50/Shared/MainLayout.razor.css
src/tests/BuildWasmApps/testassets/Blazor_net50/Shared/NavMenu.razor
src/tests/BuildWasmApps/testassets/Blazor_net50/Shared/NavMenu.razor.css
src/tests/BuildWasmApps/testassets/Blazor_net50/Shared/SurveyPrompt.razor
src/tests/BuildWasmApps/testassets/Blazor_net50/_Imports.razor
```
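The `git grep`/`ls-files` sweep above can be approximated outside git with a short script. This is a sketch, not tooling from the dotnet/runtime repo: the regex mirrors the issue's `git grep -Eli 'net(coreapp)?5'` pattern, and the in-memory file map is an invented stand-in for walking a real checkout.

```python
import re

# Case-insensitive match for .NET 5 monikers such as "net5.0" or
# "netcoreapp5.0", mirroring the `git grep -Eli 'net(coreapp)?5'` sweep.
NET5_PATTERN = re.compile(r"net(coreapp)?5", re.IGNORECASE)

def flag_net5_references(paths_to_contents):
    """Return paths whose name or contents still mention .NET 5.

    `paths_to_contents` maps a repo-relative path to that file's text; a
    dict is used here purely for illustration.
    """
    return sorted(
        path
        for path, text in paths_to_contents.items()
        if NET5_PATTERN.search(path) or NET5_PATTERN.search(text)
    )
```

A real cleanup would then rev the flagged TFMs by hand, since some hits (e.g. `Random.Net5CompatImpl.cs`) are intentional compatibility code rather than stale references.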
26,321 | 19,988,244,650 | IssuesEvent | 2022-01-31 00:21:15 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | Directed graphs (ie nutrient, rotation manager) don't respect user's font settings | bug interface/infrastructure | Title. This is probably due to our usage of `Cairo.Context.ShowText()`. We should probably be using the parent widget to create pango text layouts.
12,876 | 8,029,900,210 | IssuesEvent | 2018-07-27 17:39:26 | bitshares/bitshares-core | https://api.github.com/repos/bitshares/bitshares-core | closed | Improve performance of `database::update_expired_feeds()` | 2d Developing 3c Enhancement 4b Normal 6 Performance 9c Large performance | According to profiling data mentioned in #1083, `update_expired_feeds()` plays a significant role while replaying.
>---------------------- first 27 M blocks ----------------------------
> 764718ms th_a db_management.cpp:124 reindex ] Done reindexing, elapsed time: 5063.73962899999969522 sec
```
Flat profile:
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls s/call s/call name
9.62 215.94 215.94 448458421 0.00 0.00 graphene::chain::generic_index<graphene::chain::account_object, ... >::find(graphene::db::object_id_type) const
8.47 406.11 190.17 20731743387 0.00 0.00 graphene::chain::operator>(graphene::chain::price const&, graphene::chain::price const&)
7.18 567.18 161.07 390785433 0.00 0.00 graphene::chain::database::adjust_balance(graphene::db::object_id<(unsigned char)1, (unsigned char)2, graphene::chain::account_object>, graphene::chain::asset)
4.68 672.30 105.12 307448392 0.00 0.00 graphene::chain::generic_index<graphene::chain::account_statistics_object, ... >::find(graphene::db::object_id_type) const
4.64 776.50 104.20 22967573183 0.00 0.00 graphene::chain::operator<(graphene::chain::price const&, graphene::chain::price const&)
4.39 874.95 98.45 sha256_block_data_order_avx
3.92 962.82 87.87 3183637020 0.00 0.00 graphene::chain::generic_index<graphene::chain::asset_bitasset_data_object, ... >::find(graphene::db::object_id_type) const
3.62 1044.04 81.22 386682199 0.00 0.00 graphene::chain::generic_index<graphene::chain::account_balance_object, ... >::modify(graphene::db::object const&, std::function<void (graphene::db::object&)> const&)
3.32 1118.50 74.46 805850110 0.00 0.00 graphene::chain::generic_index<graphene::chain::asset_object, ... >::find(graphene::db::object_id_type) const
2.85 1182.43 63.93 245765371 0.00 0.00 graphene::chain::generic_index<graphene::chain::account_statistics_object, ... >::modify(graphene::db::object const&, std::function<void (graphene::db::object&)> const&)
2.76 1244.32 61.89 27000000 0.00 0.00 graphene::chain::database::update_expired_feeds()
2.16 1292.90 48.58 72953692 0.00 0.00 graphene::chain::generic_index<graphene::chain::limit_order_object, ... >::create(std::function<void (graphene::db::object&)> const&)
```
* The 11th entry, `database::update_expired_feeds()`, is the function itself;
* the 7th entry `index<asset_bitasset_data_object>::find()` is caused by it here: https://github.com/bitshares/bitshares-core/blob/06aee789a4e2045d899e02b096d5a1a908be2865/libraries/chain/db_update.cpp#L483
* the 9th entry `index<asset_object>::find()` is caused by it due to an unnecessary call here: https://github.com/bitshares/bitshares-core/blob/06aee789a4e2045d899e02b096d5a1a908be2865/libraries/chain/db_update.cpp#L494
These 3 entries sum up to 10% of replay time.
The current code iterates through every `asset_object` that is a bit asset or a prediction market on each new block, fetches its `asset_bitasset_data_object`, and then checks whether the feed has expired. However, with more than `500` bit assets / prediction markets on the chain now, it is inefficient to iterate through them all on every block.
Another issue is related to https://github.com/cryptonomex/graphene/issues/615. For the first `5,000,000` blocks or so, `update_median_feeds()` is called for almost every bit asset on every block. Although the bug has been fixed with a hard fork, the buggy code is still there for processing blocks before the hard fork, thus wasting time on every sync/replay.
To optimize, things need to be done:
- [x] update median feed and check call orders when feed expired
- [x] add a `by_expiration` index in `asset_bitasset_index`, only iterate through and process expired ones
- [x] refactor pre-HF615 code with better performance
- [x] update CER in asset_object when
* median CER changed, or
* CER in asset_object got updated (E.G. by `asset_update_operation`)
This is a sub-task of #982.
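The `by_expiration` index proposed in the checklist above can be sketched as a toy structure. This is a Python stand-in for what is really a `boost::multi_index` ordered index in C++ (the names and asset ids here are illustrative): instead of scanning every bitasset on every block, only the entries whose feed lifetime has elapsed are touched.

```python
import heapq

class FeedExpirationIndex:
    """Toy stand-in for a `by_expiration` ordered index: pop only the
    entries whose feeds have expired, rather than scanning all bitassets
    on every block."""

    def __init__(self):
        self._heap = []  # (expiration_time, asset_id) pairs, min-heap on time

    def track(self, asset_id, expiration_time):
        heapq.heappush(self._heap, (expiration_time, asset_id))

    def pop_expired(self, now):
        """Return asset ids whose feeds expired at or before `now`."""
        expired = []
        while self._heap and self._heap[0][0] <= now:
            _, asset_id = heapq.heappop(self._heap)
            expired.append(asset_id)
        return expired
```

A faithful version would also re-key an asset when a fresh feed arrives before the old one expires; this sketch ignores that and only shows why per-block work drops from O(all bitassets) to O(expired ones).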
28,084 | 22,918,410,226 | IssuesEvent | 2022-07-17 09:52:52 | UBCSailbot/.github | https://api.github.com/repos/UBCSailbot/.github | opened | Create contributing guidelines | infrastructure | ### Purpose
Document contributing guidelines
### Changes
Create new file in the repository's root directory, CONTRIBUTING.md
### Relevant Resources
- https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/setting-guidelines-for-repository-contributors
455,099 | 13,111,605,767 | IssuesEvent | 2020-08-04 23:31:29 | whitetrashyt/r34discordbot | https://api.github.com/repos/whitetrashyt/r34discordbot | closed | Full rewrite | Priority 1 issue | since the code we're using is heavily unoptimized, we're going to rewrite the entire script from the ground up in an attempt to strip out some of the less optimized and buggy code.
This will be a semi-regular thing when it comes to constantly changing code; I'm constantly learning new Python stuff, so I can't just act like I won't do this often.
Since this will be a major change, we're going to be swapping from 1.<minor version> to 2.<minor version>
Hopefully with these changes, the script should run faster.
I expect no compat issues with the bot after this update, but of course, nothing ever goes as expected
601,828 | 18,435,101,528 | IssuesEvent | 2021-10-14 12:12:15 | GlennHS/Scrum-Helper | https://api.github.com/repos/GlennHS/Scrum-Helper | opened | Highs/Lows Tracker | enhancement low priority | **Is your feature request related to a problem? Please describe.**
When I'm working sometimes I'll think of something I want to mention in my Highs and Lows but forget. Would be nice to have a widget that handles this
**Describe the solution you'd like**
A small widget, possibly hideable on a fixed sidenav that contains a textarea for high/lows
| 1.0 | Highs/Lows Tracker - **Is your feature request related to a problem? Please describe.**
When I'm working sometimes I'll think of something I want to mention in my Highs and Lows but forget. Would be nice to have a widget that handles this
**Describe the solution you'd like**
A small widget, possibly hideable on a fixed sidenav that contains a textarea for high/lows
| non_infrastructure | highs lows tracker is your feature request related to a problem please describe when i m working sometimes i ll think of something i want to mention in my highs and lows but forget would be nice to have a widget that handles this describe the solution you d like a small widget possibly hideable on a fixed sidenav that contains a textarea for high lows | 0 |
25,344 | 6,653,683,696 | IssuesEvent | 2017-09-29 09:25:30 | openbmc/openbmc-test-automation | https://api.github.com/repos/openbmc/openbmc-test-automation | closed | Code update: Remaining upload image of BMC/ PNOR test cases | Epic Func:CodeUpdate Test | - [ ] BMC/PNOR: Upload image with incorrect MANIFEST
- [ ] BMC/PNOR: Upload image with incorrect Image
110,402 | 13,906,782,183 | IssuesEvent | 2020-10-20 11:44:59 | httpwg/httpbis-issues | https://api.github.com/repos/httpwg/httpbis-issues | closed | Cache validators in 206 responses (Trac #18) | Migrated from Trac design p5-range |
```text
#!html
<p>
In <a href='http://www.apps.ietf.org/rfc/rfc2616.html#sec-10.2.7'>Section
10.2.7</a> the spec implies that it may be ok to use a weak cache
validator in a 206 response. The correct language is more
restrictive.
</p>
<blockquote>
<p class='error'>
If the 206 response is the result of an If-Range request <span class='diff'>that used a
strong cache validator (see section 13.3.3)</span>, the response SHOULD NOT
include other entity-headers. <span class='diff'>If the response is the result of an
If-Range request that used a weak validator, the response MUST NOT
include other entity-headers; this prevents inconsistencies between
cached entity-bodies and updated headers.</span> Otherwise, the response
MUST include all of the entity-headers that would have been returned
with a 200 (OK) response to the same request.
</p>
</blockquote>
<p>should be:</p>
<blockquote>
<p class='correct'>
If the 206 response is the result of an If-Range request, the
response SHOULD NOT include other entity-headers. Otherwise, the
response MUST include all of the entity-headers that would have been
returned with a 200 (OK) response to the same request.
</p>
</blockquote>
```
Migrated from https://trac.ietf.org/ticket/18
```json
{
"status": "closed",
"changetime": "2008-08-09T13:01:53",
"_ts": "1218286913000000",
"description": "{{{\n#!html\n<p>\n In <a href='http://www.apps.ietf.org/rfc/rfc2616.html#sec-10.2.7'>Section\n 10.2.7</a> the spec implies that it may be ok to use a weak cache\n validator in a 206 response. The correct language is more\n restrictive.\n</p>\n\n<blockquote>\n<p class='error'>\n If the 206 response is the result of an If-Range request <span class='diff'>that used a\n strong cache validator (see section 13.3.3)</span>, the response SHOULD NOT\n include other entity-headers. <span class='diff'>If the response is the result of an\n If-Range request that used a weak validator, the response MUST NOT\n include other entity-headers; this prevents inconsistencies between\n cached entity-bodies and updated headers.</span> Otherwise, the response\n MUST include all of the entity-headers that would have been returned\n with a 200 (OK) response to the same request.\n</p>\n</blockquote>\n\n<p>should be:</p>\n\n<blockquote>\n<p class='correct'>\n If the 206 response is the result of an If-Range request, the\n response SHOULD NOT include other entity-headers. Otherwise, the\n response MUST include all of the entity-headers that would have been\n returned with a 200 (OK) response to the same request.\n</p> \n</blockquote> \n}}}",
"reporter": "mnot@pobox.com",
"cc": "",
"resolution": "fixed",
"time": "2007-12-20T00:04:50",
"component": "p5-range",
"summary": "Cache validators in 206 responses",
"priority": "",
"keywords": "",
"milestone": "01",
"owner": "",
"type": "design",
"severity": ""
}
```
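The corrected rule quoted in the ticket reduces to a simple predicate — If-Range is only meaningful with strong validators, which is why the weak-validator branch was removed. A minimal sketch of the spec language (not production HTTP code):

```python
def include_full_entity_headers(status_code, from_if_range):
    """Corrected RFC 2616 rule: a 206 (Partial Content) produced by an
    If-Range request SHOULD NOT carry the other entity-headers (the client
    already has them cached); any other response MUST include all
    entity-headers that a 200 (OK) to the same request would have carried."""
    return not (status_code == 206 and from_if_range)
```

So a server assembling a 206 for an If-Range request would send only what is needed to interpret the range (e.g. Content-Range), deferring the remaining entity-headers to the client's cached 200 response.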
"component": "p5-range",
"summary": "Cache validators in 206 responses",
"priority": "",
"keywords": "",
"milestone": "01",
"owner": "",
"type": "design",
"severity": ""
}
```
| non_infrastructure | cache validators in responses trac text html in a href the spec implies that it may be ok to use a weak cache validator in a response the correct language is more restrictive if the response is the result of an if range request that used a strong cache validator see section the response should not include other entity headers if the response is the result of an if range request that used a weak validator the response must not include other entity headers this prevents inconsistencies between cached entity bodies and updated headers otherwise the response must include all of the entity headers that would have been returned with a ok response to the same request should be if the response is the result of an if range request the response should not include other entity headers otherwise the response must include all of the entity headers that would have been returned with a ok response to the same request migrated from json status closed changetime ts description n html n n in the spec implies that it may be ok to use a weak cache n validator in a response the correct language is more n restrictive n n n n n if the response is the result of an if range request that used a n strong cache validator see section the response should not n include other entity headers if the response is the result of an n if range request that used a weak validator the response must not n include other entity headers this prevents inconsistencies between n cached entity bodies and updated headers otherwise the response n must include all of the entity headers that would have been returned n with a ok response to the same request n n n n should be n n n n if the response is the result of an if range request the n response should not include other entity headers otherwise the n response must include all of the entity headers that would have been n returned with a ok response to the same request n n n reporter mnot pobox com cc resolution fixed time component range 
summary cache validators in responses priority keywords milestone owner type design severity | 0 |
777,948 | 27,298,591,829 | IssuesEvent | 2023-02-23 22:48:31 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | reopened | [DocDB]Rolling restart got failed and universe is in error state | kind/bug area/docdb priority/medium | Jira Link: [DB-4088](https://yugabyte.atlassian.net/browse/DB-4088)
### Description
Version: `2.17.1.0-b122`
**Steps**:
1. Create universe with
`enable_automatic_tablet_splitting=False` and `ysql_num_shards_per_tserver=1` at both master and tserver
2. Start a workload and wait until 110GB uncompressed SST and 10 GB SST Size
3. Then enable tablet splitting with the GFLAGs below and do a rolling restart
**Master**:
{"tablet_split_high_phase_shard_count_per_node": 10000,
"tablet_split_high_phase_size_threshold_bytes": 10485760,
"tablet_split_low_phase_size_threshold_bytes": 1048576,
"tablet_split_low_phase_shard_count_per_node": 16,
"db_write_buffer_size":102400}
**Tserver**:
{"ysql_num_shards_per_tserver":1,
"db_write_buffer_size":102400}
**Actual**:
1. Rolling restart step failed with `
java.lang.RuntimeException: Follower lag timeout reached: ip=a.b.c.d, port=9000, followerLagMs=193766.000000.`
2. 1 Node got stuck with `Update G Flags` state
3. Observed `E1031 16:50:54.061192 17629 tablet_peer.cc:1554] Peer 25478d132b8b4df5829d5cdac7e6f420 is a VOTER Not changing its role after remote bootstrap` in tserver.ERROR log
| 1.0 | [DocDB]Rolling restart got failed and universe is in error state - Jira Link: [DB-4088](https://yugabyte.atlassian.net/browse/DB-4088)
### Description
Version: `2.17.1.0-b122`
**Steps**:
1. Create universe with
`enable_automatic_tablet_splitting=False` and `ysql_num_shards_per_tserver=1` at both master and tserver
2. Start a workload and wait until 110GB uncompressed SST and 10 GB SST Size
3. Then enable tablet splitting with the GFLAGs below and do a rolling restart
**Master**:
{"tablet_split_high_phase_shard_count_per_node": 10000,
"tablet_split_high_phase_size_threshold_bytes": 10485760,
"tablet_split_low_phase_size_threshold_bytes": 1048576,
"tablet_split_low_phase_shard_count_per_node": 16,
"db_write_buffer_size":102400}
**Tserver**:
{"ysql_num_shards_per_tserver":1,
"db_write_buffer_size":102400}
**Actual**:
1. Rolling restart step failed with `
java.lang.RuntimeException: Follower lag timeout reached: ip=a.b.c.d, port=9000, followerLagMs=193766.000000.`
2. 1 Node got stuck with `Update G Flags` state
3. Observed `E1031 16:50:54.061192 17629 tablet_peer.cc:1554] Peer 25478d132b8b4df5829d5cdac7e6f420 is a VOTER Not changing its role after remote bootstrap` in tserver.ERROR log
| non_infrastructure | rolling restart got failed and universe is in error state jira link description version steps create universe with enable automatic tablet splitting false and ysql num shards per tserver at both master and tserver start a workload and wait until uncompressed sst and gb sst size then enable tablet splitting with below gflags and do rolling restart master tablet split high phase shard count per node tablet split high phase size threshold bytes tablet split low phase size threshold bytes tablet split low phase shard count per node db write buffer size tserver ysql num shards per tserver db write buffer size actual rolling restart step got failed with
java lang runtimeexception follower lag timeout reached ip a b c d port followerlagms node got stuck with update g flags state observed tablet peer cc peer is a voter not changing its role after remote bootstrap in tserver error log | 0 |
19,702 | 13,397,523,352 | IssuesEvent | 2020-09-03 11:44:46 | skypyproject/skypy | https://api.github.com/repos/skypyproject/skypy | opened | config files: load existing tables from files | examples infrastructure module: pipeline | It would be useful to have syntax to load tables from existing files (not necessarily produced by SkyPy, so with arbitrary extension names for FITS or paths for HDF5). I think loading can somehow be achieved already using the `.init` table constructor, so we should explore. And if that method is not very comfortable to use, we should investigate a more specific table loader syntax. In either case, an example should be provided. | 1.0 | config files: load existing tables from files - It would be useful to have syntax to load tables from existing files (not necessarily produced by SkyPy, so with arbitrary extension names for FITS or paths for HDF5). I think loading can somehow be achieved already using the `.init` table constructor, so we should explore. And if that method is not very comfortable to use, we should investigate a more specific table loader syntax. In either case, an example should be provided. | infrastructure | config files load existing tables from files it would be useful to have syntax to load tables from existing files not necessarily produced by skypy so with arbitrary extension names for fits or paths for i think loading can somehow be achieved already using the init table constructor so we should explore and if that method is not very comfortable to use we should investigate a more specific table loader syntax in either case an example should be provided | 1 |
248,496 | 7,931,892,983 | IssuesEvent | 2018-07-07 06:58:11 | less/less.js | https://api.github.com/repos/less/less.js | closed | Add real-world Less libs to tests | good first issue medium priority up-for-grabs | While Less has hundreds of tests, sometimes real-world Less libraries do things that are unexpected, which breaks them with new releases - https://github.com/ant-design/ant-design/issues/7850#issuecomment-399904195
It wouldn't be too much work to pull a few popular Less libs (via NPM) and add them to the testing suite, especially since `@import` can pull from NPM by default (in Node) in 3.0+. | 1.0 | Add real-world Less libs to tests - While Less has hundreds of tests, sometimes real-world Less libraries do things that are unexpected, which breaks them with new releases - https://github.com/ant-design/ant-design/issues/7850#issuecomment-399904195
It wouldn't be too much work to pull a few popular Less libs (via NPM) and add them to the testing suite, especially since `@import` can pull from NPM by default (in Node) in 3.0+. | non_infrastructure | add real world less libs to tests while less has hundreds of tests sometimes real world less libraries do things that are unexpected which breaks them with new releases it wouldn t be too much work to pull a few popular less libs via npm and add them to the testing suite especially since import can pull from npm by default in node in | 0 |
52,186 | 7,752,094,208 | IssuesEvent | 2018-05-30 19:09:40 | w0rp/ale | https://api.github.com/repos/w0rp/ale | closed | Explicit mention of "FindProjectRoot" in documentation | documentation | I've recently spent some time trying to understand why mypy was being run from my home directory, which caused mypy to ignore 'mypy.ini' setting. If one follows the code the answer is easy - ale#python#FindProjectRootIni function at https://github.com/w0rp/ale/blob/8a4cf923a8a3017fa683bd27d699d9b14720cd66/autoload/ale/python.vim does not look for 'mypy.ini' file.
While this may or may not be the intended behavior, I realized that there's no explicit mention of the whole concept of "searching for root directory" in the documentation. This means every potential new user who wants to run mypy linter will have to read vimscript sources to understand what's going on. Should this behaviour be mentioned somewhere in the documentation?
Also, while speaking of FindProjectRoot, should the function also look for 'mypy.ini' file? | 1.0 | Explicit mention of "FindProjectRoot" in documentation - I've recently spent some time trying to understand why mypy was being run from my home directory, which caused mypy to ignore 'mypy.ini' setting. If one follows the code the answer is easy - ale#python#FindProjectRootIni function at https://github.com/w0rp/ale/blob/8a4cf923a8a3017fa683bd27d699d9b14720cd66/autoload/ale/python.vim does not look for 'mypy.ini' file.
While this may or may not be the intended behavior, I realized that there's no explicit mention of the whole concept of "searching for root directory" in the documentation. This means every potential new user who wants to run mypy linter will have to read vimscript sources to understand what's going on. Should this behaviour be mentioned somewhere in the documentation?
Also, while speaking of FindProjectRoot, should the function also look for 'mypy.ini' file? | non_infrastructure | explicit mention of findprojectroot in documentation i ve recently spent some time trying to understand why was mypy being run from my home directory which caused mypy to ignore mypy ini setting if one follow the code the answer is easy ale python findprojectrootini function at does not look for mypy ini file while this may or may not be the intended behavior i realized that there s no explicit mention of the whole concept of searching for root directory in the documentation this means every potential new user who wants to run mypy linter will have to read vimscript sources to understand what s going on should this behaviour be mentioned somewhere in the documentation also while speaking of findprojectroot should the function also look for mypy ini file | 0 |
30,726 | 25,017,697,598 | IssuesEvent | 2022-11-03 20:25:26 | google/site-kit-wp | https://api.github.com/repos/google/site-kit-wp | closed | Introduce cyclomatic complexity lint rule to warn about overly nested code. | P2 QA: Eng Good First Issue Rollover Type: Infrastructure | ## Feature Description
Anecdotally it feels like our code can get a bit overly complex and deeply nested at times. It would be interesting to introduce the [complexity](https://eslint.org/docs/rules/complexity) linting rule to warn if the code is nested beyond a certain depth, which could help inform potential refactors to simplify matters.
<!-- Please describe clear and concisely which problem the feature would solve or which publisher needs it would address. -->
---------------
_Do not alter or remove anything below. The following sections will be managed by moderators only._
## Acceptance criteria
### Implement the rule
* The ESLint [complexity](https://eslint.org/docs/latest/rules/complexity) rule should be introduced to our linting rules.
* The complexity threshold should be kept at the default of 20. The level for the rule should be `error`.
* Any functions which are currently over the threshold should have the rule disabled to avoid linting errors.
### Create the next issues
Once we've got the initial rule in place, the intention is to follow up to a) refactor functions to avoid disabling the rule, and b) iterate further, to reduce the threshold from its default of 20.
* An issue should be created for each of the functions that had the rule disabled to refactor it to a complexity within the current threshold.
* A follow-up issue should be created to reduce the complexity threshold, once again disabling the rule for functions above the threshold and creating further follow-up issues to refactor them.
## Implementation Brief
* Check out the [attached PR](https://github.com/google/site-kit-wp/pull/5954) with the lint rule updated locally
In `.eslintrc.json`:
* Add the `complexity` rule to the rules object, and set it to throw an error, with the threshold set to 20.
* Run `npm lint` to get a full list of code currently violating the lint rule
* Create issues for each of the violations
* Go over the flagged files, and ignore the rule for each occurrence.
* Remember to specify the `complexity` rule in the ignore-comments so that other errors still get flagged (`eslint-disable-next-line complexity` instead of just `eslint-disable-next-line` ).
### Test Coverage
* No tests need to be created or updated for this issue.
## QA Brief
### QA:Eng
- Verify that running the `npm run lint:js` script doesn't return any errors.
- Remove one of the `// eslint-disable-next-line complexity` lines locally and run `npm run lint:js`; you should see an error.
## Changelog entry
* N/A
| 1.0 | Introduce cyclomatic complexity lint rule to warn about overly nested code. - ## Feature Description
Anecdotally it feels like our code can get a bit overly complex and deeply nested at times. It would be interesting to introduce the [complexity](https://eslint.org/docs/rules/complexity) linting rule to warn if the code is nested beyond a certain depth, which could help inform potential refactors to simplify matters.
<!-- Please describe clear and concisely which problem the feature would solve or which publisher needs it would address. -->
---------------
_Do not alter or remove anything below. The following sections will be managed by moderators only._
## Acceptance criteria
### Implement the rule
* The ESLint [complexity](https://eslint.org/docs/latest/rules/complexity) rule should be introduced to our linting rules.
* The complexity threshold should be kept at the default of 20. The level for the rule should be `error`.
* Any functions which are currently over the threshold should have the rule disabled to avoid linting errors.
### Create the next issues
Once we've got the initial rule in place, the intention is to follow up to a) refactor functions to avoid disabling the rule, and b) iterate further, to reduce the threshold from its default of 20.
* An issue should be created for each of the functions that had the rule disabled to refactor it to a complexity within the current threshold.
* A follow-up issue should be created to reduce the complexity threshold, once again disabling the rule for functions above the threshold and creating further follow-up issues to refactor them.
## Implementation Brief
* Check out the [attached PR](https://github.com/google/site-kit-wp/pull/5954) with the lint rule updated locally
In `.eslintrc.json`:
* Add the `complexity` rule to the rules object, and set it to throw an error, with the threshold set to 20.
* Run `npm lint` to get a full list of code currently violating the lint rule
* Create issues for each of the violations
* Go over the flagged files, and ignore the rule for each occurrence.
* Remember to specify the `complexity` rule in the ignore-comments so that other errors still get flagged (`eslint-disable-next-line complexity` instead of just `eslint-disable-next-line` ).
### Test Coverage
* No tests need to be created or updated for this issue.
## QA Brief
### QA:Eng
- Verify that running the `npm run lint:js` script doesn't return any errors.
- Remove one of the `// eslint-disable-next-line complexity` lines locally and run `npm run lint:js`; you should see an error.
## Changelog entry
* N/A
| infrastructure | introduce cyclomatic complexity lint rule to warn about overly nested code feature description anecdotally it feels like our code can get a bit overly complex and deeply nested at times it would be interesting to introduce the linting rule to warn if the code is nested beyond a certain depth which could help inform potential refactors to simplify matters do not alter or remove anything below the following sections will be managed by moderators only acceptance criteria implement the rule the eslint rule should be introduced to our linting rules the complexity threshold should be kept at the default of the level for the rule should be error any functions which are currently over the threshold should have the rule disabled to avoid linting errors create the next issues once we ve got the initial rule in place the intention is to follow up to a refactor functions to avoid disabling the rule and b iterate further to reduce the threshold from its default of an issue should be created for each of the functions that had the rule disabled to refactor it to a complexity within the current threshold an followup issue should be created to reduce the complexity threshold once again disabling the rule for functions above the threshold and creating further follow up issues to refactor them implementation brief check out the with the lint rule updated locally in eslintrc json add the complexity rule to the rules object and set it to throw an error with the threshold set to run npm lint to get a full list of code currently violating the lint rule create issues for each of the violations go over the flagged files and ignore the rule for each occurence remember to specify the complexity rule in the ignore comments so that other errors still get flagged eslint disable next line complexity instead of just eslint disable next line test coverage no tests need to be created or updated for this issue qa brief qa eng verify that running the npm run lint js script doesn t 
return any errors remove one of the eslint disable next line complexity lines locally and run npm run lint js you should see an error changelog entry n a | 1 |
753,415 | 26,347,009,771 | IssuesEvent | 2023-01-10 23:15:57 | iterative/vscode-dvc | https://api.github.com/repos/iterative/vscode-dvc | closed | Sort panel - allow changing order of applied sorting operations | enhancement priority-p2 A: table | Would be nice to enable DnD or add up/down symbols. | 1.0 | Sort panel - allow changing order of applied sorting operations - Would be nice to enable DnD or add up/down symbols. | non_infrastructure | sort panel allow changing order of applied sorting operations would be nice to enable dnd or add up down symbols | 0 |
29,880 | 24,368,468,836 | IssuesEvent | 2022-10-03 17:05:00 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | The Roslyn directory artifacts\bin weights 11.8 GB ? | Area-Infrastructure untriaged | I compiled Roslyn v6.0.1 and the directory `artifacts\bin` weighs 11.8 GB.
The 3.29 MB Roslyn DLL `Microsoft.CodeAnalysis.dll` is duplicated 117 times for a total footprint of 377 MB.
The 24 MB DLL `Microsoft.CodeAnalysis.Test.Utilities.dll` is duplicated 75 times for a footprint of 1.75 GB.
This massive duplication consumes significant hard-drive space and must have a (slight?) impact on Roslyn build duration.
Is there a well identified reason to not have only a few DEBUG binary directories like `artifacts\bin\Debug\net6.0`, `artifacts\bin\Debug\net472,` `artifacts\bin\Debug\netstandard2.0`... ?
The [windirstat](https://windirstat.net/) view:




| 1.0 | The Roslyn directory artifacts\bin weights 11.8 GB ? - I compiled Roslyn v6.0.1 and the directory `artifacts\bin` weighs 11.8 GB.
The 3.29 MB Roslyn DLL `Microsoft.CodeAnalysis.dll` is duplicated 117 times for a total footprint of 377 MB.
The 24 MB DLL `Microsoft.CodeAnalysis.Test.Utilities.dll` is duplicated 75 times for a footprint of 1.75 GB.
This massive duplication consumes significant hard-drive space and must have a (slight?) impact on Roslyn build duration.
Is there a well identified reason to not have only a few DEBUG binary directories like `artifacts\bin\Debug\net6.0`, `artifacts\bin\Debug\net472,` `artifacts\bin\Debug\netstandard2.0`... ?
The [windirstat](https://windirstat.net/) view:




| infrastructure | the roslyn directory artifacts bin weights gb i compiled roslyn and the directory artifacts bin weights gb the roslyn dll microsoft codeanalysis dll is duplicated times for a total footprint of mb the dll microsoft codeanalysis test utilities dll is duplicated times for a footprint of gb this massive duplication consumes significant hard drive space and must have a slight impact on roslyn build duration is there a well identified reason to not have only a few debug binary directories like artifacts bin debug artifacts bin debug artifacts bin debug the view | 1 |
38,378 | 4,954,024,068 | IssuesEvent | 2016-12-01 16:30:25 | map-egypt/map-egypt.github.io | https://api.github.com/repos/map-egypt/map-egypt.github.io | opened | Homepage - spacing under "Overview of Agricultural Projects" title | design tweaks | Decrease space here to match the comp. | 1.0 | Homepage - spacing under "Overview of Agricultural Projects" title - Decrease space here to match the comp. | non_infrastructure | homepage spacing under overview of agricultural projects title decrease space here to match the comp | 0 |
19,311 | 13,212,699,151 | IssuesEvent | 2020-08-16 08:39:22 | wix/wix-style-react | https://api.github.com/repos/wix/wix-style-react | closed | auto generated testkits refactor (to be ignored in git) | Infrastructure | Even though they are created automatically, we get conflicts in PRs of new components like in here:

Proposed solution:
1. Create the files automatically on each build and make them override the old ones.
2. Remove it from the components generation.
3. Git ignore on these files.
@argshook
| 1.0 | auto generated testkits refactor (to be ignored in git) - Even though they are created automatically, we get conflicts in PRs of new components like in here:

Proposed solution:
1. Create the files automatically on each build and make them override the old ones.
2. Remove it from the components generation.
3. Git ignore on these files.
@argshook
| infrastructure | auto generated testkits refactor to be ignored in git even though they are created automatically we get conflicts in prs of new components like in here proposed solution create the files automatically on each build and make them override the old ones remove it from the components generation git ignore on these files argshook | 1 |
247,340 | 7,917,353,242 | IssuesEvent | 2018-07-04 09:34:47 | telerik/kendo-ui-core | https://api.github.com/repos/telerik/kendo-ui-core | closed | Incorrect date conversion with date formatting | Bug C: Spreadsheet Kendo2 Priority 5 SEV: Medium | ### Bug report
[https://demos.telerik.com/kendo-ui/spreadsheet/index](https://demos.telerik.com/kendo-ui/spreadsheet/index)
Regression since: **2017.3.913**
1. Clear the content
2. Set Date Format via the Custom Format tool
3. Insert 6/1/2018
### Current behavior
1/1/2018 is displayed
### Expected/desired behavior
6/1/2018 to be displayed
### Environment
* **Kendo UI version:** 2018.2.516
* **Browser:** all
| 1.0 | Incorrect date conversion with date formatting - ### Bug report
[https://demos.telerik.com/kendo-ui/spreadsheet/index](https://demos.telerik.com/kendo-ui/spreadsheet/index)
Regression since: **2017.3.913**
1. Clear the content
2. Set Date Format via the Custom Format tool
3. Insert 6/1/2018
### Current behavior
1/1/2018 is displayed
### Expected/desired behavior
6/1/2018 to be displayed
### Environment
* **Kendo UI version:** 2018.2.516
* **Browser:** all
| non_infrastructure | incorrect date conversion with date formatting bug report regression since clear the content set date format via the custom format tool insert current behavior is displayed expected desired behavior to be displayed environment kendo ui version browser all | 0 |
54,648 | 13,797,468,747 | IssuesEvent | 2020-10-09 22:18:30 | dkfans/keeperfx | https://api.github.com/repos/dkfans/keeperfx | closed | inconsistent creature pickup behavior | Priority-Medium Status-Done Type-Defect | Originally reported on Google Code with ID 222
```
In KeeperFX 0.4.4 and earlier the creature pickup behavior isn't consistent. (On the
creature menu, clicking on creatures will pick them up, or focus on them)
I've attached a file which describes expected and actual behavior in all scenarios.
Shift-Leftclick on creature icon, Rightclick on creature icon, and Shift/CTRL+Right
click on creature number do not behave as expected.
```
Reported by `Loobinex` on 2014-01-18 00:35:03
<hr>
- _Attachment: [Creaturepickup.xlsx](https://storage.googleapis.com/google-code-attachments/keeperfx/issue-222/comment-0/Creaturepickup.xlsx)_
| 1.0 | inconsistent creature pickup behavior - Originally reported on Google Code with ID 222
```
In KeeperFX 0.4.4 and earlier the creature pickup behavior isn't consistent. (On the
creature menu, clicking on creatures will pick them up, or focus on them)
I've attached a file which describes expected and actual behavior in all scenarios.
Shift-Leftclick on creature icon, Rightclick on creature icon, and Shift/CTRL+Right
click on creature number do not behave as expected.
```
Reported by `Loobinex` on 2014-01-18 00:35:03
<hr>
- _Attachment: [Creaturepickup.xlsx](https://storage.googleapis.com/google-code-attachments/keeperfx/issue-222/comment-0/Creaturepickup.xlsx)_
| non_infrastructure | inconsistent creature pickup behavior originally reported on google code with id in keeperfx and earlier the creature pickup behavior isn t consistent on the creature menu clicking on creatures with pick them up or focus on them i ve attached a file which describes expected and actual behavior in all scenarios shift leftclick on creature icon rickclick on creature icon and shift ctrl right click on creature number do not behave as expected reported by loobinex on attachment | 0 |
129,445 | 5,097,131,306 | IssuesEvent | 2017-01-03 20:28:20 | Terracotta-OSS/terracotta-core | https://api.github.com/repos/Terracotta-OSS/terracotta-core | closed | Rename modules to something more appropriate for current project | cleanup low priority | terms like dso, L1, L2, etc. have no meaning or relevance in the current platform. Rename modules, packages and terms to reflect something more appropriate.
| 1.0 | Rename modules to something more appropriate for current project - terms like dso, L1, L2, etc. have no meaning or relevance in the current platform. Rename modules, packages and terms to reflect something more appropriate.
| non_infrastructure | rename modules to something more appropriate for current project terms like dso etc have no meaning or relevance in the current platform rename modules packages and terms to reject something more appropriate | 0 |
226,957 | 18,045,975,227 | IssuesEvent | 2021-09-18 22:42:34 | logicmoo/logicmoo_workspace | https://api.github.com/repos/logicmoo/logicmoo_workspace | opened | logicmoo.pfc.test.sanity_base.MT_07 JUnit | Test_9999 logicmoo.pfc.test.sanity_base unit_test MT_07 | (cd /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base ; timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif mt_07.pl)
GH_MASTER_ISSUE_FINFO=
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3AMT_07
GITLAB: https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://gitlab.logicmoo.org/gitlab/logicmoo/logicmoo_workspace/-/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/mt_07.pl
Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/MT_07/logicmoo_pfc_test_sanity_base_MT_07_JUnit/
This Build: https://jenkins.logicmoo.org/job/logicmoo_workspace/68/testReport/logicmoo.pfc.test.sanity_base/MT_07/logicmoo_pfc_test_sanity_base_MT_07_JUnit/
GITHUB: https://github.com/logicmoo/logicmoo_workspace/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://github.com/logicmoo/logicmoo_workspace/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/mt_07.pl
```
%
running('/var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/mt_07.pl'),
%~ this_test_might_need( :-( use_module( library(logicmoo_plarkc))))
%~ this_test_might_need( :-( expects_dialect(pfc)))
%:- add_import_module(mt_01,baseKB,end).
:- set_defaultAssertMt(code1).
% mtProlog(code1).
% mtHybrid(code1).
%~ pfc_iri : include_module_file(code1:library('pfclib/system_each_module.pfc'),code1).
/*~
%~ pfc_iri:include_module_file(code1:library('pfclib/system_each_module.pfc'),code1)
~*/
% mtProlog(code1).
% mtHybrid(code1).
:- expects_dialect(pfc).
mtHybrid(kb2).
No source location!?
%~ message_hook_type(error)
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/mt_07.pl:24
%~ message_hook(
%~ error(
%~ permission_error(redefine,imported_procedure,baseKB:mtHybrid/1),
%~ context(system:'$record_clause'/3,Context_Kw)),
%~ error,
%~ [ 'No permission to ~w ~w `~p\'' - [ redefine,
%~ imported_procedure,
%~ baseKB : mtHybrid/1]])
/*~
No permission to redefine imported_procedure `baseKB:(mtHybrid/1)'
ERROR: No permission to redefine imported_procedure `baseKB:(mtHybrid/1)'
~*/
mtHybrid(kb3).
No source location!?
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/mt_07.pl:25
%~ message_hook_type(error)
%~ message_hook(
%~ error(
%~ permission_error(redefine,imported_procedure,baseKB:mtHybrid/1),
%~ context(system:'$record_clause'/3,Context_Kw)),
%~ error,
%~ [ 'No permission to ~w ~w `~p\'' - [ redefine,
%~ imported_procedure,
%~ baseKB : mtHybrid/1]])
/*~
No permission to redefine imported_procedure `baseKB:(mtHybrid/1)'
ERROR: No permission to redefine imported_procedure `baseKB:(mtHybrid/1)'
~*/
:- listing(mtProlog/1).
%~ skipped( listing( mtProlog/1))
:- listing(mtHybrid/1).
% code1: (a <- b).
%~ skipped( listing( mtHybrid/1))
% code1: (a <- b).
code1: (a:- printAll('$current_source_module'(_M))).
No source location!?
kb2: (b).
No source location!?
baseKB:genlMt(kb2,code1).
baseKB:genlMt(code1,baseKB).
kb2: (:- a).
No source location!?
baseKB:genlMt(kb3,kb2).
kb3:predicateConventionMt(c,code1).
kb3: (a==>c).
% to make sure a does not get accdently defined in kb2 or kb3
/*~
code1:'$current_source_module'(baseKB).
/* found 1 for code1:'$current_source_module'(_21840).
*/
~*/
% to make sure a does not get accdently defined in kb2 or kb3
:- mpred_must((clause(kb3:a,_,Ref), clause_property(Ref,module(kb3)))).
%~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/mt_07.pl#L51
%~ failed_mpred_test( clause(kb3:a,Kw,Ref),clause_property(Ref,module(kb3)))
%~ FILE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/mt_07.pl#L51
%~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/mt_07.pl#L51
%~ FILE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/mt_07.pl#L51
%~ DUMP_BREAK/0
%~ message_hook_type(error)
%~ message_hook( initialization_exception(abort),
%~ error,
%~ [ 'Prolog initialisation failed:', nl,'Unknown message: ~p'-[abort]])
%~ unused(save_junit_results)
```
totalTime=3
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3AMT_07
GITLAB: https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://gitlab.logicmoo.org/gitlab/logicmoo/logicmoo_workspace/-/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/mt_07.pl
Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/MT_07/logicmoo_pfc_test_sanity_base_MT_07_JUnit/
This Build: https://jenkins.logicmoo.org/job/logicmoo_workspace/68/testReport/logicmoo.pfc.test.sanity_base/MT_07/logicmoo_pfc_test_sanity_base_MT_07_JUnit/
GITHUB: https://github.com/logicmoo/logicmoo_workspace/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://github.com/logicmoo/logicmoo_workspace/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/mt_07.pl
FAILED: /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-junit-minor -k mt_07.pl (returned 1)
| 3.0 | logicmoo.pfc.test.sanity_base.MT_07 JUnit - (cd /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base ; timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif mt_07.pl)
GH_MASTER_ISSUE_FINFO=
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3AMT_07
GITLAB: https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://gitlab.logicmoo.org/gitlab/logicmoo/logicmoo_workspace/-/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/mt_07.pl
Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/MT_07/logicmoo_pfc_test_sanity_base_MT_07_JUnit/
This Build: https://jenkins.logicmoo.org/job/logicmoo_workspace/68/testReport/logicmoo.pfc.test.sanity_base/MT_07/logicmoo_pfc_test_sanity_base_MT_07_JUnit/
GITHUB: https://github.com/logicmoo/logicmoo_workspace/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://github.com/logicmoo/logicmoo_workspace/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/mt_07.pl
```
%
running('/var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/mt_07.pl'),
%~ this_test_might_need( :-( use_module( library(logicmoo_plarkc))))
%~ this_test_might_need( :-( expects_dialect(pfc)))
%:- add_import_module(mt_01,baseKB,end).
:- set_defaultAssertMt(code1).
% mtProlog(code1).
% mtHybrid(code1).
%~ pfc_iri : include_module_file(code1:library('pfclib/system_each_module.pfc'),code1).
/*~
%~ pfc_iri:include_module_file(code1:library('pfclib/system_each_module.pfc'),code1)
~*/
% mtProlog(code1).
% mtHybrid(code1).
:- expects_dialect(pfc).
mtHybrid(kb2).
No source location!?
%~ message_hook_type(error)
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/mt_07.pl:24
%~ message_hook(
%~ error(
%~ permission_error(redefine,imported_procedure,baseKB:mtHybrid/1),
%~ context(system:'$record_clause'/3,Context_Kw)),
%~ error,
%~ [ 'No permission to ~w ~w `~p\'' - [ redefine,
%~ imported_procedure,
%~ baseKB : mtHybrid/1]])
/*~
No permission to redefine imported_procedure `baseKB:(mtHybrid/1)'
ERROR: No permission to redefine imported_procedure `baseKB:(mtHybrid/1)'
~*/
mtHybrid(kb3).
No source location!?
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/mt_07.pl:25
%~ message_hook_type(error)
%~ message_hook(
%~ error(
%~ permission_error(redefine,imported_procedure,baseKB:mtHybrid/1),
%~ context(system:'$record_clause'/3,Context_Kw)),
%~ error,
%~ [ 'No permission to ~w ~w `~p\'' - [ redefine,
%~ imported_procedure,
%~ baseKB : mtHybrid/1]])
/*~
No permission to redefine imported_procedure `baseKB:(mtHybrid/1)'
ERROR: No permission to redefine imported_procedure `baseKB:(mtHybrid/1)'
~*/
:- listing(mtProlog/1).
%~ skipped( listing( mtProlog/1))
:- listing(mtHybrid/1).
% code1: (a <- b).
%~ skipped( listing( mtHybrid/1))
% code1: (a <- b).
code1: (a:- printAll('$current_source_module'(_M))).
No source location!?
kb2: (b).
No source location!?
baseKB:genlMt(kb2,code1).
baseKB:genlMt(code1,baseKB).
kb2: (:- a).
No source location!?
baseKB:genlMt(kb3,kb2).
kb3:predicateConventionMt(c,code1).
kb3: (a==>c).
% to make sure a does not get accdently defined in kb2 or kb3
/*~
code1:'$current_source_module'(baseKB).
/* found 1 for code1:'$current_source_module'(_21840).
*/
~*/
% to make sure a does not get accdently defined in kb2 or kb3
:- mpred_must((clause(kb3:a,_,Ref), clause_property(Ref,module(kb3)))).
%~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/mt_07.pl#L51
%~ failed_mpred_test( clause(kb3:a,Kw,Ref),clause_property(Ref,module(kb3)))
%~ FILE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/mt_07.pl#L51
%~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/mt_07.pl#L51
%~ FILE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/mt_07.pl#L51
%~ DUMP_BREAK/0
%~ message_hook_type(error)
%~ message_hook( initialization_exception(abort),
%~ error,
%~ [ 'Prolog initialisation failed:', nl,'Unknown message: ~p'-[abort]])
%~ unused(save_junit_results)
```
totalTime=3
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3AMT_07
GITLAB: https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://gitlab.logicmoo.org/gitlab/logicmoo/logicmoo_workspace/-/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/mt_07.pl
Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/MT_07/logicmoo_pfc_test_sanity_base_MT_07_JUnit/
This Build: https://jenkins.logicmoo.org/job/logicmoo_workspace/68/testReport/logicmoo.pfc.test.sanity_base/MT_07/logicmoo_pfc_test_sanity_base_MT_07_JUnit/
GITHUB: https://github.com/logicmoo/logicmoo_workspace/commit/1629eba4a2a1da0e1b731d156198a7168dafae44
https://github.com/logicmoo/logicmoo_workspace/blob/1629eba4a2a1da0e1b731d156198a7168dafae44/packs_sys/pfc/t/sanity_base/mt_07.pl
FAILED: /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-junit-minor -k mt_07.pl (returned 1)
| non_infrastructure | logicmoo pfc test sanity base mt junit cd var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base timeout foreground preserve status s sigkill k lmoo clif mt pl gh master issue finfo issue search gitlab latest this build github running var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base mt pl this test might need use module library logicmoo plarkc this test might need expects dialect pfc add import module mt basekb end set defaultassertmt mtprolog mthybrid pfc iri include module file library pfclib system each module pfc pfc iri include module file library pfclib system each module pfc mtprolog mthybrid expects dialect pfc mthybrid no source location message hook type error var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base mt pl message hook error permission error redefine imported procedure basekb mthybrid context system record clause context kw error no permission to w w p redefine imported procedure basekb mthybrid no permission to redefine imported procedure basekb mthybrid error no permission to redefine imported procedure basekb mthybrid mthybrid no source location var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base mt pl message hook type error message hook error permission error redefine imported procedure basekb mthybrid context system record clause context kw error no permission to w w p redefine imported procedure basekb mthybrid no permission to redefine imported procedure basekb mthybrid error no permission to redefine imported procedure basekb mthybrid listing mtprolog skipped listing mtprolog listing mthybrid a b skipped listing mthybrid a b a printall current source module m no source location b no source location basekb genlmt basekb genlmt basekb a no source location basekb genlmt predicateconventionmt c a c to make sure a does not get accdently defined in or current source module basekb found for current source module to make sure a does not 
get accdently defined in or mpred must clause a ref clause property ref module file failed mpred test clause a kw ref clause property ref module file file file dump break message hook type error message hook initialization exception abort error unused save junit results totaltime issue search gitlab latest this build github failed var lib jenkins workspace logicmoo workspace bin lmoo junit minor k mt pl returned | 0 |
591,744 | 17,860,287,905 | IssuesEvent | 2021-09-05 20:56:39 | exercism/exercism | https://api.github.com/repos/exercism/exercism | closed | Mentoring request broken for an exercise | priority/urgent type/bug | This is related to C track's gigasecond exercise. Prior to v3, I had
pushed my solution and requested mentoring on it. After the migration,
it says that it's waiting for a mentor but when I try to view the
mentoring request it returns "500 error occurred" page.
It also doesn't take up any mentoring slots. | 1.0 | Mentoring request broken for an exercise - This is related to C track's gigasecond exercise. Prior to v3, I had
pushed my solution and requested mentoring on it. After the migration,
it says that it's waiting for a mentor but when I try to view the
mentoring request it returns "500 error occurred" page.
It also doesn't take up any mentoring slots. | non_infrastructure | mentoring request broken for an exercise this is related to c track s gigasecond exercise prior to i had pushed my solution and requested mentoring on it after the migration it says that it s waiting for a mentor but when i try to view the mentoring request it returns error occurred page it also doesn t take up any mentoring slots | 0 |
190,477 | 6,818,864,474 | IssuesEvent | 2017-11-07 08:00:55 | De7vID/klingon-assistant | https://api.github.com/repos/De7vID/klingon-assistant | opened | detect if app notifications are disabled and turn off KWOTD automatically | enhancement Priority-Low | Slight improvement: make the app's KWOTD preference depend on the app's system notification permission. | 1.0 | detect if app notifications are disabled and turn off KWOTD automatically - Slight improvement: make the app's KWOTD preference depend on the app's system notification permission. | non_infrastructure | detect if app notifications are disabled and turn off kwotd automatically slight improvement make the app s kwotd preference depend on the app s system notification permission | 0 |
442,351 | 12,744,197,064 | IssuesEvent | 2020-06-26 12:01:21 | knative/docs | https://api.github.com/repos/knative/docs | closed | Need a section to explain the DNS interference for DNS challenge of Auto TLS | kind/networking lifecycle/rotten priority/2 | <!-- For a feature request about a change to Knative, please open the issue in the corresponding repo. -->
**Describe the change you'd like to see**
Per https://github.com/knative/serving/issues/4569#issuecomment-578448201, DNS interference could be a limitation for using Auto TLS DNS challenge.
We need to explicitly document this in the Auto TLS doc.
| 1.0 | Need a section to explain the DNS interference for DNS challenge of Auto TLS - <!-- For a feature request about a change to Knative, please open the issue in the corresponding repo. -->
**Describe the change you'd like to see**
Per https://github.com/knative/serving/issues/4569#issuecomment-578448201, DNS interference could be a limitation for using Auto TLS DNS challenge.
We need to explicitly document this in the Auto TLS doc.
| non_infrastructure | need a section to explain the dns interference for dns challenge of auto tls describe the change you d like to see per dns interference could be a limitation for using auto tls dns challenge we need to explicitly document this in the auto tls doc | 0 |
261,791 | 22,773,941,448 | IssuesEvent | 2022-07-08 12:47:32 | MPMG-DCC-UFMG/F01 | https://api.github.com/repos/MPMG-DCC-UFMG/F01 | opened | Teste de generalizacao para a tag Informações institucionais - Link de acesso - Bocaina de Minas | generalization test development | DoD: Realizar o teste de Generalização do validador da tag Informações institucionais - Link de acesso para o Município de Bocaina de Minas. | 1.0 | Teste de generalizacao para a tag Informações institucionais - Link de acesso - Bocaina de Minas - DoD: Realizar o teste de Generalização do validador da tag Informações institucionais - Link de acesso para o Município de Bocaina de Minas. | non_infrastructure | teste de generalizacao para a tag informações institucionais link de acesso bocaina de minas dod realizar o teste de generalização do validador da tag informações institucionais link de acesso para o município de bocaina de minas | 0 |
14,573 | 10,961,797,140 | IssuesEvent | 2019-11-27 16:01:33 | projet-m2-siris-unistra/smart-park | https://api.github.com/repos/projet-m2-siris-unistra/smart-park | closed | Déploiement de l'infrastructure de base | area/infrastructure priority/medium | Déploiement d'un cluster Kubernetes de test avec les outils de base | 1.0 | Déploiement de l'infrastructure de base - Déploiement d'un cluster Kubernetes de test avec les outils de base | infrastructure | déploiement de l infrastructure de base déploiement d un cluster kubernetes de test avec les outils de base | 1 |
15,311 | 11,455,665,273 | IssuesEvent | 2020-02-06 19:34:18 | SolarArbiter/solarforecastarbiter-core | https://api.github.com/repos/SolarArbiter/solarforecastarbiter-core | opened | test on many python and package versions | enhancement infrastructure testing | for example, test on various platforms (mac, linux, windows), python versions, and minimum and latest dependency versions | 1.0 | test on many python and package versions - for example, test on various platforms (mac, linux, windows), python versions, and minimum and latest dependency versions | infrastructure | test on many python and package versions for example test on various platforms mac linux windows python versions and minimum and latest dependency versions | 1 |
194,277 | 22,261,930,869 | IssuesEvent | 2022-06-10 01:52:12 | Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492 | https://api.github.com/repos/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492 | reopened | CVE-2021-38300 (High) detected in linuxlinux-4.19.241 | security vulnerability | ## CVE-2021-38300 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.88</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492/commit/8d2169763c8858bce8d07fbb569f01ef9b30383b">8d2169763c8858bce8d07fbb569f01ef9b30383b</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/arch/mips/net/bpf_jit.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/arch/mips/net/bpf_jit.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
arch/mips/net/bpf_jit.c in the Linux kernel before 5.4.10 can generate undesirable machine code when transforming unprivileged cBPF programs, allowing execution of arbitrary code within the kernel context. This occurs because conditional branches can exceed the 128 KB limit of the MIPS architecture.
<p>Publish Date: 2021-09-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-38300>CVE-2021-38300</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-38300">https://www.linuxkernelcves.com/cves/CVE-2021-38300</a></p>
<p>Release Date: 2021-09-20</p>
<p>Fix Resolution: v4.14.251,v4.19.211,v5.4.153,v5.10.71,v5.14.10,v5.15-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-38300 (High) detected in linuxlinux-4.19.241 - ## CVE-2021-38300 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.88</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492/commit/8d2169763c8858bce8d07fbb569f01ef9b30383b">8d2169763c8858bce8d07fbb569f01ef9b30383b</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/arch/mips/net/bpf_jit.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/linux-4.19.72/arch/mips/net/bpf_jit.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
arch/mips/net/bpf_jit.c in the Linux kernel before 5.4.10 can generate undesirable machine code when transforming unprivileged cBPF programs, allowing execution of arbitrary code within the kernel context. This occurs because conditional branches can exceed the 128 KB limit of the MIPS architecture.
<p>Publish Date: 2021-09-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-38300>CVE-2021-38300</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-38300">https://www.linuxkernelcves.com/cves/CVE-2021-38300</a></p>
<p>Release Date: 2021-09-20</p>
<p>Fix Resolution: v4.14.251,v4.19.211,v5.4.153,v5.10.71,v5.14.10,v5.15-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files linux arch mips net bpf jit c linux arch mips net bpf jit c vulnerability details arch mips net bpf jit c in the linux kernel before can generate undesirable machine code when transforming unprivileged cbpf programs allowing execution of arbitrary code within the kernel context this occurs because conditional branches can exceed the kb limit of the mips architecture publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
34,156 | 28,371,221,000 | IssuesEvent | 2023-04-12 17:07:47 | microsoft/commercial-marketplace-offer-deploy | https://api.github.com/repos/microsoft/commercial-marketplace-offer-deploy | closed | Infra - MODM Dockerfile | infrastructure milestone-3 | Create a docker file that builds, sets environment variables, and starts the following:
### processes
- apiserver
- operator
## Conditions
- Both should share the same `.env`
- database file must be mounted from a volume path that supports azure storage mount, not current default of `./`
- env value that must be set: DB_PATH
- `DB_PATH` will be read into AppConfig already so it just needs to be set
| 1.0 | Infra - MODM Dockerfile - Create a docker file that builds, sets environment variables, and starts the following:
### processes
- apiserver
- operator
## Conditions
- Both should share the same `.env`
- database file must be mounted from a volume path that supports azure storage mount, not current default of `./`
- env value that must be set: DB_PATH
- `DB_PATH` will be read into AppConfig already so it just needs to be set
| infrastructure | infra modm dockerfile create a docker file that builds sets environment variables and starts the following processes apiserver operator conditions both should share the same env database file must be mounted from a volume path that supports azure storage mount not current default of env value that must be set db path db path will be read into appconfig already so it just needs to be set | 1 |
13,326 | 10,210,543,791 | IssuesEvent | 2019-08-14 14:59:50 | ethersphere/swarm | https://api.github.com/repos/ethersphere/swarm | opened | Kubernetes: Test bzzeth protocol | infrastructure | To test the integration between swarm / trinity.
- Create a helm chart for trinity
- Check integration/configurartion needs between swarm and trinity
- Provide examples on how to run swarm with trinity enabled.
More infos:
https://notes.ethereum.org/k1yEcw1gSo-iCNmEOmpUUg | 1.0 | Kubernetes: Test bzzeth protocol - To test the integration between swarm / trinity.
- Create a helm chart for trinity
- Check integration/configurartion needs between swarm and trinity
- Provide examples on how to run swarm with trinity enabled.
More infos:
https://notes.ethereum.org/k1yEcw1gSo-iCNmEOmpUUg | infrastructure | kubernetes test bzzeth protocol to test the integration between swarm trinity create a helm chart for trinity check integration configurartion needs between swarm and trinity provide examples on how to run swarm with trinity enabled more infos | 1 |
254,151 | 21,765,648,579 | IssuesEvent | 2022-05-13 01:18:50 | backend-br/vagas | https://api.github.com/repos/backend-br/vagas | closed | [Remoto] Sênior QA - Node.JS y Bash (Vaga Internacional) | Testes automatizados Git Inglês Fluente Stale | Responsabilidades:
Você será um dos dois primeiros membros da equipe com foco na garantia de qualidade, trabalhando ao lado de um engenheiro de teste de software. Isso significa que você terá uma oportunidade extraordinária de moldar nosso programa de teste e controle de qualidade e contribuir para o nosso sucesso.
Requisitos técnicos obrigatórios:
● Inglês fluente.
● Mínimo de 5 anos com testes de software.
● Deve ser fluente em Node.JS e Bash - mínimo de 4 anos.
● Selenium ou Cypress, conhecimento em um deles, mínimo 4 anos.
● Katalon ou TestIM - mínimo de 1 ano.
● Testes automatizados, Git, Jenkins, Appium.
Candidaturas: https://forms.gle/AvrQXo7sPVUrU2Nb8 | 1.0 | [Remoto] Sênior QA - Node.JS y Bash (Vaga Internacional) - Responsabilidades:
Você será um dos dois primeiros membros da equipe com foco na garantia de qualidade, trabalhando ao lado de um engenheiro de teste de software. Isso significa que você terá uma oportunidade extraordinária de moldar nosso programa de teste e controle de qualidade e contribuir para o nosso sucesso.
Requisitos técnicos obrigatórios:
● Inglês fluente.
● Mínimo de 5 anos com testes de software.
● Deve ser fluente em Node.JS e Bash - mínimo de 4 anos.
● Selenium ou Cypress, conhecimento em um deles, mínimo 4 anos.
● Katalon ou TestIM - mínimo de 1 ano.
● Testes automatizados, Git, Jenkins, Appium.
Candidaturas: https://forms.gle/AvrQXo7sPVUrU2Nb8 | non_infrastructure | sênior qa node js y bash vaga internacional responsabilidades você será um dos dois primeiros membros da equipe com foco na garantia de qualidade trabalhando ao lado de um engenheiro de teste de software isso significa que você terá uma oportunidade extraordinária de moldar nosso programa de teste e controle de qualidade e contribuir para o nosso sucesso requisitos técnicos obrigatórios ● inglês fluente ● mínimo de anos com testes de software ● deve ser fluente em node js e bash mínimo de anos ● selenium ou cypress conhecimento em um deles mínimo anos ● katalon ou testim mínimo de ano ● testes automatizados git jenkins appium candidaturas | 0 |
17,468 | 12,393,416,070 | IssuesEvent | 2020-05-20 15:23:21 | cds-snc/c19-benefits-node | https://api.github.com/repos/cds-snc/c19-benefits-node | opened | Generate semver labels on merging to master | infrastructure | - [ ] Generate Labels
- [ ] Tag Container with new label
- [ ] Inject version into container as environment variable | 1.0 | Generate semver labels on merging to master - - [ ] Generate Labels
- [ ] Tag Container with new label
- [ ] Inject version into container as environment variable | infrastructure | generate semver labels on merging to master generate labels tag container with new label inject version into container as environment variable | 1 |
122,890 | 4,846,762,516 | IssuesEvent | 2016-11-10 12:56:43 | cdnjs/cdnjs | https://api.github.com/repos/cdnjs/cdnjs | closed | [Request] Add tufte-css | High Priority in progress Library - Request to Add/Update | - [x] Before opening a issue ticket, please check if there is/was already an issue on the same topic.
- [x] @edwardtufte @daveliepmann
---
**Library name:** tufte-css
**Git repository url:** https://github.com/edwardtufte/tufte-css
**npm package url(optional):**
**License(s):** MIT
**Official homepage:** https://edwardtufte.github.io/tufte-css/
**Wanna say something? Leave message here:** I hope this suggestion is appropriate. Might need to include the css as well as the font files. Thanks!
#
Notes from cdnjs maintainer:
You are welcome to add a library via sending pull request,
it'll be faster then just opening a request issue,
and please don't forget to read the guidelines for contributing, thanks!!
| 1.0 | [Request] Add tufte-css - - [x] Before opening a issue ticket, please check if there is/was already an issue on the same topic.
- [x] @edwardtufte @daveliepmann
---
**Library name:** tufte-css
**Git repository url:** https://github.com/edwardtufte/tufte-css
**npm package url(optional):**
**License(s):** MIT
**Official homepage:** https://edwardtufte.github.io/tufte-css/
**Wanna say something? Leave message here:** I hope this suggestion is appropriate. Might need to include the css as well as the font files. Thanks!
#
Notes from cdnjs maintainer:
You are welcome to add a library via sending pull request,
it'll be faster then just opening a request issue,
and please don't forget to read the guidelines for contributing, thanks!!
| non_infrastructure | add tufte css before opening a issue ticket please check if there is was already an issue on the same topic edwardtufte daveliepmann library name tufte css git repository url npm package url optional license s mit official homepage wanna say something leave message here i hope this suggestion is appropriate might need to include the css as well as the font files thanks notes from cdnjs maintainer you are welcome to add a library via sending pull request it ll be faster then just opening a request issue and please don t forget to read the guidelines for contributing thanks | 0 |
19,848 | 13,502,796,343 | IssuesEvent | 2020-09-13 10:21:11 | LearningByExample/kotlin-event-driven-petstore | https://api.github.com/repos/LearningByExample/kotlin-event-driven-petstore | opened | [FEATURE] Install Kafka operator in Kubernetes | domain:pet feature infrastructure | **Describe the feature**
We have to install and configure Kafka in Kubernetes.
| 1.0 | [FEATURE] Install Kafka operator in Kubernetes - **Describe the feature**
We have to install and configure Kafka in Kubernetes.
| infrastructure | install kafka operator in kubernets describe the feature we have to install and configure a kafka in kubernetes | 1 |
30,961 | 25,200,050,813 | IssuesEvent | 2022-11-13 01:40:20 | MissouriMRR/SUAS-2023 | https://api.github.com/repos/MissouriMRR/SUAS-2023 | opened | Detect Kill-Switch | enhancement good first issue flight infrastructure | # Detect Kill-Switch
## Problem
Detect when the kill-switch is engaged by the remote. This occurs during a manual takeover. The program should end after the kill-switch is engaged to prevent additional commands from being sent to the drone.
## Solution
Detect when the kill-switch is engaged using `pymavlink`. When it is engaged, exit the program.
| 1.0 | Detect Kill-Switch - # Detect Kill-Switch
## Problem
Detect when the kill-switch is engaged by the remote. This occurs during a manual takeover. The program should end after the kill-switch is engaged to prevent additional commands from being sent to the drone.
## Solution
Detect when the kill-switch is engaged using `pymavlink`. When it is engaged, exit the program.
| infrastructure | detect kill switch detect kill switch problem detect when the kill switch is engaged by the remote this occurs during a manual takeover the program should end after the kill switch is engaged to prevent additional commands from being sent to the drone solution detect when the kill switch is engaged using pymavlink when it is engaged exit the program | 1 |
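A minimal sketch of the check this row describes; the RC channel number, the PWM threshold, and the pymavlink polling shown in the comments are all assumptions, since the issue names the library but gives no specifics:

```python
# Hypothetical kill-switch check. RC switch positions arrive as PWM values
# (roughly 1000-2000 us); treat a reading at or above a threshold as
# "engaged". The channel number and threshold below are illustrative.
KILL_SWITCH_CHANNEL = 7
ENGAGED_THRESHOLD = 1500

def kill_switch_engaged(rc_values, channel=KILL_SWITCH_CHANNEL,
                        threshold=ENGAGED_THRESHOLD):
    """Return True when the given 1-based RC channel reads at/above threshold."""
    if channel > len(rc_values):
        return False  # channel not present in this message
    return rc_values[channel - 1] >= threshold

# With pymavlink, the flight loop would poll RC_CHANNELS messages, e.g.
#   msg = connection.recv_match(type="RC_CHANNELS", blocking=True)
# build rc_values from msg.chan1_raw, msg.chan2_raw, ... and exit the
# program as soon as kill_switch_engaged(rc_values) returns True.

print(kill_switch_engaged([1100] * 8))                 # switch low
print(kill_switch_engaged([1100] * 6 + [1900, 1100]))  # channel 7 high
```

If the verdict should survive a few dropped messages, the same predicate can be required to hold for several consecutive readings before the program exits.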
29,879 | 11,782,210,978 | IssuesEvent | 2020-03-17 01:05:23 | yaeljacobs67/librenms | https://api.github.com/repos/yaeljacobs67/librenms | opened | CVE-2020-7598 (High) detected in minimist-0.0.10.tgz | security vulnerability | ## CVE-2020-7598 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimist-0.0.10.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz</a></p>
<p>Path to dependency file: /librenms/lib/Leaflet.markercluster/package.json</p>
<p>Path to vulnerable library: /tmp/git/librenms/lib/Leaflet.markercluster/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- karma-0.8.8.tgz (Root Library)
- http-proxy-0.10.4.tgz
- optimist-0.6.1.tgz
- :x: **minimist-0.0.10.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload.
<p>Publish Date: 2020-03-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598>CVE-2020-7598</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94">https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94</a></p>
<p>Release Date: 2020-03-11</p>
<p>Fix Resolution: minimist - 0.2.1,1.2.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"minimist","packageVersion":"0.0.10","isTransitiveDependency":true,"dependencyTree":"karma:0.8.8;http-proxy:0.10.4;optimist:0.6.1;minimist:0.0.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"minimist - 0.2.1,1.2.2"}],"vulnerabilityIdentifier":"CVE-2020-7598","vulnerabilityDetails":"minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a \"constructor\" or \"__proto__\" payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-7598 (High) detected in minimist-0.0.10.tgz - ## CVE-2020-7598 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimist-0.0.10.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz</a></p>
<p>Path to dependency file: /librenms/lib/Leaflet.markercluster/package.json</p>
<p>Path to vulnerable library: /tmp/git/librenms/lib/Leaflet.markercluster/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- karma-0.8.8.tgz (Root Library)
- http-proxy-0.10.4.tgz
- optimist-0.6.1.tgz
- :x: **minimist-0.0.10.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload.
<p>Publish Date: 2020-03-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598>CVE-2020-7598</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94">https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94</a></p>
<p>Release Date: 2020-03-11</p>
<p>Fix Resolution: minimist - 0.2.1,1.2.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"minimist","packageVersion":"0.0.10","isTransitiveDependency":true,"dependencyTree":"karma:0.8.8;http-proxy:0.10.4;optimist:0.6.1;minimist:0.0.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"minimist - 0.2.1,1.2.2"}],"vulnerabilityIdentifier":"CVE-2020-7598","vulnerabilityDetails":"minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a \"constructor\" or \"__proto__\" payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_infrastructure | cve high detected in minimist tgz cve high severity vulnerability vulnerable library minimist tgz parse argument options library home page a href path to dependency file librenms lib leaflet markercluster package json path to vulnerable library tmp git librenms lib leaflet markercluster node modules minimist package json dependency hierarchy karma tgz root library http proxy tgz optimist tgz x minimist tgz vulnerable library vulnerability details minimist before could be tricked into adding or modifying properties of object prototype using a constructor or proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution minimist isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve 
vulnerabilitydetails minimist before could be tricked into adding or modifying properties of object prototype using a constructor or proto payload vulnerabilityurl | 0 |
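The vulnerability class in this row, prototype pollution through a `__proto__` key in a JavaScript argument parser, has a rough Python analogue: a naive parser that writes attacker-controlled key names onto shared state pollutes every other user of that state. This is an illustrative sketch of the mechanism only, not minimist's actual code:

```python
class Options:
    """Shared defaults that every parse result falls back to."""
    verbose = False

def naive_parse(argv):
    # UNSAFE on purpose: writes attacker-controlled key names onto the
    # shared class, mirroring how pre-1.2.2 minimist let "__proto__" keys
    # reach Object.prototype in JavaScript.
    opts = Options()
    for arg in argv:
        if arg.startswith("--") and "=" in arg:
            key, value = arg[2:].split("=", 1)
            setattr(type(opts), key, value)  # pollutes shared state
    return opts

naive_parse(["--verbose=yes"])
unrelated = Options()       # a completely different object...
print(unrelated.verbose)    # ...now sees "yes": the pollution leaked
```

The upstream fix follows the same shape as minimist 1.2.2's: assign onto the result object only (`setattr(opts, ...)` here) and reject special key names such as `__proto__` and `constructor`.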
17,869 | 12,675,549,826 | IssuesEvent | 2020-06-19 02:05:46 | raiden-network/light-client | https://api.github.com/repos/raiden-network/light-client | opened | Get dApp to work with relative paths on index.html | dApp 📱 infrastructure 🚧 | ## Description
As part of #1511 , I've developed an nginx docker setup which is able to load the artifacts from a CircleCI build and serve them in a local path. The idea is that upon accessing e.g. http://localhost:8888/1745/ , one can see the latest build for #1745. But this needs the webApp to work with relative URLs (in this case, `/1745/`) for assets and routes/redirects.
We need to have a relative `publicPath` in `vue.config.js`, like `publicPath: './'`. This way, the assets for the dApp is downloaded from the path relative to the one from which the `index.html` file was loaded (ie. `js/...` instead of `/js/...`). This should also do it for `/staging/`. Last bit is fixing redirection to root `/`, which keeps happening when accessing the index file directly on the relative path.
## Acceptance criteria
- dApp's assets are fetched relative to the path index.html was loaded
- Routes and redirections honor relative path of index file
## Tasks
- [ ]
| 1.0 | Get dApp to work with relative paths on index.html - ## Description
As part of #1511 , I've developed an nginx docker setup which is able to load the artifacts from a CircleCI build and serve them in a local path. The idea is that upon accessing e.g. http://localhost:8888/1745/ , one can see the latest build for #1745. But this needs the webApp to work with relative URLs (in this case, `/1745/`) for assets and routes/redirects.
We need to have a relative `publicPath` in `vue.config.js`, like `publicPath: './'`. This way, the assets for the dApp is downloaded from the path relative to the one from which the `index.html` file was loaded (ie. `js/...` instead of `/js/...`). This should also do it for `/staging/`. Last bit is fixing redirection to root `/`, which keeps happening when accessing the index file directly on the relative path.
## Acceptance criteria
- dApp's assets are fetched relative to the path index.html was loaded
- Routes and redirections honor relative path of index file
## Tasks
- [ ]
| infrastructure | get dapp to work with relative paths on index html description as part of i ve developed an nginx docker setup which is able to load the artifacts from a circleci build and serve them in a local path the idea is that by and upon acessing e g one can see the latest build for the but this needs the webapp to work with relative urls in this case for assets and routes redirects we need to have a relative publicpath in vue config js like publicpath this way the assets for the dapp is downloaded from the path relative to the one from which the index html file was loaded ie js instead of js this should also do it for staging last bit is fixing redirection to root which keeps happening when accessing the index file directly on the relative path acceptance criteria dapp s assets are fetched relative to the path index html was loaded routes and redirections honor relative path of index file tasks | 1 |
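The effect of the relative `publicPath` discussed in the row above can be shown with plain URL resolution: a relative asset reference resolves against the directory the index page was loaded from, while a root-relative one discards the `/1745/` prefix:

```python
from urllib.parse import urljoin

page = "http://localhost:8888/1745/index.html"

# Root-relative reference (publicPath "/"): the /1745/ build prefix is lost.
print(urljoin(page, "/js/app.js"))   # http://localhost:8888/js/app.js

# Relative reference (publicPath "./"): resolves under the build's own path.
print(urljoin(page, "js/app.js"))    # http://localhost:8888/1745/js/app.js
```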
179,592 | 14,706,297,517 | IssuesEvent | 2021-01-04 19:36:32 | shaarli/Shaarli | https://api.github.com/repos/shaarli/Shaarli | opened | Missing php extension ldap in documentation | documentation | https://shaarli.readthedocs.io/en/master/Server-configuration/
LDAP integration could also be added to the features list https://github.com/shaarli/Shaarli/blob/master/doc/md/index.md | 1.0 | Missing php extension ldap in documentation - https://shaarli.readthedocs.io/en/master/Server-configuration/
LDAP integration could also be added to the features list https://github.com/shaarli/Shaarli/blob/master/doc/md/index.md | non_infrastructure | missing php extension ldap in documentation ldap integration could also be added to the features list | 0 |
19,223 | 13,207,532,626 | IssuesEvent | 2020-08-14 23:28:47 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | opened | SVN mails should have the committer in the reply-to field (Trac #673) | Incomplete Migration Migrated from Trac enhancement infrastructure | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/673">https://code.icecube.wisc.edu/projects/icecube/ticket/673</a>, reported by jvansanten and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-02-11T19:07:52",
"_ts": "1423681672687980",
"description": "It's occasionally necessary to yell at the author of a bad commit. While the email address can sometimes be inferred from the username in the commit log, it would be much easier to be able to just hit \"Reply.\"",
"reporter": "jvansanten",
"cc": "",
"resolution": "wontfix",
"time": "2012-03-21T00:56:52",
"component": "infrastructure",
"summary": "SVN mails should have the committer in the reply-to field",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "nega",
"type": "enhancement"
}
```
</p>
</details>
| 1.0 | SVN mails should have the committer in the reply-to field (Trac #673) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/673">https://code.icecube.wisc.edu/projects/icecube/ticket/673</a>, reported by jvansanten and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-02-11T19:07:52",
"_ts": "1423681672687980",
"description": "It's occasionally necessary to yell at the author of a bad commit. While the email address can sometimes be inferred from the username in the commit log, it would be much easier to be able to just hit \"Reply.\"",
"reporter": "jvansanten",
"cc": "",
"resolution": "wontfix",
"time": "2012-03-21T00:56:52",
"component": "infrastructure",
"summary": "SVN mails should have the committer in the reply-to field",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "nega",
"type": "enhancement"
}
```
</p>
</details>
| infrastructure | svn mails should have the committer in the reply to field trac migrated from json status closed changetime ts description it s occasionally necessary to yell at the author of a bad commit while the email address can sometimes be inferred from the username in the commit log it would be much easier to be able to just hit reply reporter jvansanten cc resolution wontfix time component infrastructure summary svn mails should have the committer in the reply to field priority normal keywords milestone owner nega type enhancement | 1 |
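For reference, a sketch of how a notification script could set the requested Reply-To header with Python's standard email library. The addresses, subject, and the username-to-address lookup are made up for illustration:

```python
from email.message import EmailMessage

def commit_mail(committer_address):
    # committer_address would come from a username -> address lookup;
    # everything below uses made-up example values.
    msg = EmailMessage()
    msg["From"] = "svn-robot@example.org"
    msg["To"] = "commit-list@example.org"
    msg["Subject"] = "r12345 - trunk"
    msg["Reply-To"] = committer_address   # replies go to the committer
    msg.set_content("Commit log message goes here.")
    return msg

mail = commit_mail("committer@example.org")
print(mail["Reply-To"])   # committer@example.org
```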
21,021 | 14,281,745,387 | IssuesEvent | 2020-11-23 08:34:05 | thinktecture/relayserver | https://api.github.com/repos/thinktecture/relayserver | opened | Introduce an IOriginRepository | enhancement infrastructure | Track the following:
- OriginId (Primary Key)
- StartTime
- ShutdownTime (nullable)
- HeartbeatTime (initial StartTime, final ShutdownTime)
| 1.0 | Introduce an IOriginRepository - Track the following:
- OriginId (Primary Key)
- StartTime
- ShutdownTime (nullable)
- HeartbeatTime (initial StartTime, final ShutdownTime)
| infrastructure | introduce an ioriginrepository track the following originid primary key starttime shutdowntime nullable hearbeattime initial starttime final shutdowntime | 1 |
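A sketch of the lifecycle those four fields imply. The field names mirror the issue; the methods and the Python rendering are assumptions (the project itself is C#):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Origin:
    origin_id: str                             # OriginId (primary key)
    start_time: datetime                       # StartTime
    shutdown_time: Optional[datetime] = None   # ShutdownTime (nullable)
    heartbeat_time: Optional[datetime] = None  # HeartbeatTime

    def __post_init__(self):
        if self.heartbeat_time is None:
            self.heartbeat_time = self.start_time  # initially StartTime

    def heartbeat(self, now):
        self.heartbeat_time = now

    def shutdown(self, now):
        self.shutdown_time = now
        self.heartbeat_time = now              # finally ShutdownTime

start = datetime(2020, 11, 23, 8, 0)
origin = Origin("origin-1", start)
origin.heartbeat(start + timedelta(seconds=30))
origin.shutdown(start + timedelta(minutes=5))
print(origin.heartbeat_time == origin.shutdown_time)   # True
```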
416,305 | 12,142,443,207 | IssuesEvent | 2020-04-24 01:41:12 | NetchX/Netch | https://api.github.com/repos/NetchX/Netch | closed | Hoped-for future features, hoping the software keeps getting better | Priority: Low Status: In Progress Type: Enhancement | 1. For game acceleration, provide a manual ping of the selected node both before and after starting, to test whether the node is working. (Emphasis on manual: it should be possible to ping without disconnecting.)
2. Real-time display of the connection speed.
3. Rule files can be edited and deleted.
| 1.0 | Hoped-for future features, hoping the software keeps getting better - 1. For game acceleration, provide a manual ping of the selected node both before and after starting, to test whether the node is working. (Emphasis on manual: it should be possible to ping without disconnecting.)
2. Real-time display of the connection speed.
3. Rule files can be edited and deleted.
| non_infrastructure | hoped for future features hoping the software keeps getting better for game acceleration provide a manual ping of the selected node both before and after starting to test whether the node is working emphasis on manual it should be possible to ping without disconnecting real time display of the connection speed rule files can be edited and deleted | 0 |
11,200 | 8,997,950,354 | IssuesEvent | 2019-02-02 17:16:50 | Microsoft/visualfsharp | https://api.github.com/repos/Microsoft/visualfsharp | closed | Templates for dev16 | Area-Infrastructure discussion | ### Updated set of templates
Here is a flat list of templates we should ship in VS 2019:
* .NET Core console app
* .NET Core library
* ASP.NET Core web app (pivots to Empty app and Web API)
* Tutorial
* .NET Framework console app (using .NET SDK)
* .NET Framework library (using .NET SDK)
* .NET Standard library
* MSTest test project
* xUnit test project
I'm not sure what to make of a "Scripting project". Unless it serves a specific purpose, why have one?
Additionally, Xamarin should continue to deliver the existing F# templates.
### Original text
In #4977, we cleaned up templates a bit to more easily distinguish between .NET Framework and .NET Core templates. However, I'd like to document what should be delivered for dev16 (Visual Studio 2019).
This is what I have in mind for now, and what I know is **definitely** capable of being accomplished from a technical and policy perspective:
* Console App (.NET Core)
* Console App (.NET Framework)
* Class Library (.NET Standard)
* Class Library (.NET Framework)
* Tutorial
* ASP.NET Core Web Application
* MSTest Test Project (.NET Core)
* xUnit Test Project (.NET Core)
I'm curious what others have in mind.
### Suggested by @KevinRansom:
We should have a TypeProvider template, allowing users to generate a correctly specified TP nuget package.
### Suggested by @dsyme:
I would like to see a "Scripting project".
* drop the user into the pit of success for F# data scripting
* consider referencing FSharp.Data and perhaps others
* consider having pre-baked snippets to make web requests, crack JSON, read/search/enumerate files and so on
* have examples of #r, #load, #r "nuget: Foo.Bar 4.3.5.2"
* both .NET Core and .NET Framework variations
Separately we could consider a "Math scripting project" referencing
* FSharp.Charting (but it is Windows-specific)
* MathNet.Numerics
* ML.NET
or some updated variation on FsLab components | 1.0 | Templates for dev16 - ### Updated set of templates
Here is a flat list of templates we should ship in VS 2019:
* .NET Core console app
* .NET Core library
* ASP.NET Core web app (pivots to Empty app and Web API)
* Tutorial
* .NET Framework console app (using .NET SDK)
* .NET Framework library (using .NET SDK)
* .NET Standard library
* MSTest test project
* xUnit test project
I'm not sure what to make of a "Scripting project". Unless it serves a specific purpose, why have one?
Additionally, Xamarin should continue to deliver the existing F# templates.
### Original text
In #4977, we cleaned up templates a bit to more easily distinguish between .NET Framework and .NET Core templates. However, I'd like to document what should be delivered for dev16 (Visual Studio 2019).
This is what I have in mind for now, and what I know is **definitely** capable of being accomplished from a technical and policy perspective:
* Console App (.NET Core)
* Console App (.NET Framework)
* Class Library (.NET Standard)
* Class Library (.NET Framework)
* Tutorial
* ASP.NET Core Web Application
* MSTest Test Project (.NET Core)
* xUnit Test Project (.NET Core)
I'm curious what others have in mind.
### Suggested by @KevinRansom:
We should have a TypeProvider template, allowing users to generate a correctly specified TP nuget package.
### Suggested by @dsyme:
I would like to see a "Scripting project".
* drop the user into the pit of success for F# data scripting
* consider referencing FSharp.Data and perhaps others
* consider having pre-baked snippets to make web requests, crack JSON, read/search/enumerate files and so on
* have examples of #r, #load, #r "nuget: Foo.Bar 4.3.5.2"
* both .NET Core and .NET Framework variations
Separately we could consider a "Math scripting project" referencing
* FSharp.Charting (but it is Windows-specific)
* MathNet.Numerics
* ML.NET
or some updated variation on FsLab components | infrastructure | templates for updated set of templates here is a flat list of templates we should ship in vs net core console app net core library asp net core web app pivots to empty app and web api tutorial net framework console app using net sdk net framework library using net sdk net standard library mstest test project xunit test project i m not sure what to make of a scripting project unless it serves a specific purpose why have one additionally xamarin should continue to deliver the existing f templates original text in we cleaned up templates a bit to more easily distinguish between net framework and net core templates however i d like to document what should be delivered for visual studio this is what i have in mind for now and what i know is definitely capable of being accomplished from a technical and policy perspective console app net core console app net framework class library net standard class library net framework tutorial asp net core web application mstest test project net core xunit test project net core i m curious what others have in mind suggested by kevinransom we should have a typeprovider template allowing users to generate a correctly specified tp nuget package suggested by dsyme i would like to see a scripting project drop the user into the pit of success for f data scripting consider referencing fsharp data and perhaps others consider having pre baked snippets to make web requests crack json read search enumerate files and so on have examples of r load r nuget foo bar both net core and net framework variations separately we could consider a math scripting project referencing fsharp charting but it is windows specific mathnet numerics ml net or some updated variation on fslab components | 1 |
778,356 | 27,312,570,265 | IssuesEvent | 2023-02-24 13:22:57 | gamefreedomgit/Maelstrom | https://api.github.com/repos/gamefreedomgit/Maelstrom | reopened | [Core][Dungeon] Need before greed not aligned to class weapon/item usage | Dungeon Loot Core Priority: Critical Status: Needs Confirmation | Dungeon loot can be greeded / needed by any class despite classes not being able to use it. Image below of how this should work for each class

| 1.0 | [Core][Dungeon] Need before greed not aligned to class weapon/item usage - Dungeon loot can be greeded / needed by any class despite classes not being able to use it. Image below of how this should work for each class

| non_infrastructure | need before greed not aligned to class weapon item usage dungeon loot can be greeded needed by any class despite classes not being able to use it image below of how this should work for each class | 0 |
17,153 | 12,237,137,023 | IssuesEvent | 2020-05-04 17:30:35 | patternfly/patternfly-org | https://api.github.com/repos/patternfly/patternfly-org | closed | Side-nav colors not matching page background | infrastructure | In the current branch, when links in the side nav are 'active', 'hovering', or 'selecting', their background color matched the background color of the page. Since the changes, the color has changed to be a shade or so darker. I have a fix for this locally; below you can see the before and after.
## Before changes:

## After changes:
 | 1.0 | Side-nav colors not matching page background - In the current branch when 'active', 'hovering', 'selecting' links in the side nav the color background matched the color background of the page. Since the changes the color has change to be a shade or so darker. I have a fix for this locally, Below you can see the before and after.
## Before changes:

## After changes:
 | infrastructure | side nav colors not matching page background in the current branch when active hovering selecting links in the side nav the color background matched the color background of the page since the changes the color has change to be a shade or so darker i have a fix for this locally below you can see the before and after before changes after changes | 1 |
847 | 4,506,610,199 | IssuesEvent | 2016-09-02 05:08:09 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | zypper module: notify crashes because of changed dict | bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
zypper
notify
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0 (devel acd69bcc77) last updated 2016/08/31 16:06:22 (GMT +200)
lib/ansible/modules/core: (detached HEAD 5310bab12f) last updated 2016/08/31 16:06:30 (GMT +200)
lib/ansible/modules/extras: (detached HEAD 2ef4a34eee) last updated 2016/08/31 16:06:30 (GMT +200)
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
SLES 12
##### SUMMARY
<!--- Explain the problem briefly -->
The package module calls zypper with a list of packages/programs to install. The operation completes without errors. In my opinion, notify then executes and raises an error because package or zypper returned "changed" = {} (a dict) instead of "changed" = false (a boolean).
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
The following will raise an exeption even if everything is fine and unchanged.
<!--- Paste example playbooks or commands between quotes below -->
```
#example code - apache2-utils is installed when apache is installed, so both are fine
#from role/myrole/tasks/main.yml
- name: install apache and apache modules
package: name={{ item }} state=latest
with_items:
- "apache2"
- "apache2-utils"
notify:
- "restart apache"
#frome /role/myrole/handlers/main.yml
- name: restart apache
service: name=apache2 state=restarted
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
During the first run, when the packages aren't present, I expected that the handler would be called. Instead => exception
During the second run, when the packages are present, I expected that the handler wouldn't be called. Instead => exception
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
Running zypper
Using module file /home/****/ansible/lib/ansible/modules/extras/packaging/os/zypper.py
<node> ESTABLISH SSH CONNECTION FOR USER: None
<node> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/****/.ansible/cp/ansible-ssh-%h-%p-%r node '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279 `" && echo ansible-tmp-1472731543.66-70288465940279="` echo $HOME/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279 `" ) && sleep 0'"'"''
<node> PUT /tmp/tmpd7stK7 TO /home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/zypper.py
<node> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/****/.ansible/cp/ansible-ssh-%h-%p-%r '[node]'
<node> ESTABLISH SSH CONNECTION FOR USER: None
<node> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/****/.ansible/cp/ansible-ssh-%h-%p-%r node '/bin/sh -c '"'"'chmod u+x /home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/ /home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/zypper.py && sleep 0'"'"''
<node> ESTABLISH SSH CONNECTION FOR USER: None
<node> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/****/.ansible/cp/ansible-ssh-%h-%p-%r -tt node '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=MYSECRET] password: " -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-MYSECRET; /usr/bin/python /home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/zypper.py; rm -rf "/home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
ok: [node] => (item=[u'apache2', u'apache2-utils']) => {
"changed": {},
"cmd": [
"/usr/bin/zypper",
"--quiet",
"--non-interactive",
"--xmlout",
"install",
"--type",
"package",
"--auto-agree-with-licenses",
"--no-recommends",
"--",
"apache2-utils",
"apache2"
],
"invocation": {
"module_args": {
"disable_gpg_check": false,
"disable_recommends": true,
"force": false,
"name": [
"apache2",
"apache2-utils"
],
"oldpackage": false,
"state": "latest",
"type": "package",
"update_cache": false
}
},
"item": [
"apache2",
"apache2-utils"
],
"name": [
"apache2",
"apache2-utils"
],
"rc": 0,
"state": "latest",
"update_cache": false
}
ERROR! Unexpected Exception: unsupported operand type(s) for |=: 'bool' and 'dict'
the full traceback was:
Traceback (most recent call last):
File "/home/****/ansible/bin/ansible-playbook", line 97, in <module>
exit_code = cli.run()
File "/home/****/ansible/lib/ansible/cli/playbook.py", line 154, in run
results = pbex.run()
File "/home/****/ansible/lib/ansible/executor/playbook_executor.py", line 147, in run
result = self._tqm.run(play=play)
File "/home/****/ansible/lib/ansible/executor/task_queue_manager.py", line 281, in run
play_return = strategy.run(iterator, play_context)
File "/home/****/ansible/lib/ansible/plugins/strategy/linear.py", line 269, in run
results += self._wait_on_pending_results(iterator)
File "/home/****/ansible/lib/ansible/plugins/strategy/__init__.py", line 514, in _wait_on_pending_results
results = self._process_pending_results(iterator)
File "/home/****/ansible/lib/ansible/plugins/strategy/__init__.py", line 370, in _process_pending_results
if task_result.is_changed():
File "/home/****/ansible/lib/ansible/executor/task_result.py", line 40, in is_changed
return self._check_key('changed')
File "/home/****/ansible/lib/ansible/executor/task_result.py", line 69, in _check_key
flag |= res.get(key, False)
TypeError: unsupported operand type(s) for |=: 'bool' and 'dict'
AND WITHOUT -vvvv
TASK [apache-proxy : install apache and apache modules] ************************
ok: [sazvl0021.saz.bosch-si.com] => (item=[u'apache2', u'apache2-utils'])
ERROR! Unexpected Exception: unsupported operand type(s) for |=: 'bool' and 'dict'
```
| True | zypper module: notify crashes because of changed dict - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
zypper
notify
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0 (devel acd69bcc77) last updated 2016/08/31 16:06:22 (GMT +200)
lib/ansible/modules/core: (detached HEAD 5310bab12f) last updated 2016/08/31 16:06:30 (GMT +200)
lib/ansible/modules/extras: (detached HEAD 2ef4a34eee) last updated 2016/08/31 16:06:30 (GMT +200)
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
SLES 12
##### SUMMARY
<!--- Explain the problem briefly -->
The package module calls zypper with a list of packages/programs to install. The operation completes without errors. In my opinion, the notify handling then raises an error because package or zypper returned `"changed": {}` instead of `"changed": false`.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
The following will raise an exception even if everything is fine and unchanged.
<!--- Paste example playbooks or commands between quotes below -->
```
#example code - apache2-utils is installed when apache is installed so both are fine
#from role/myrole/tasks/main.yml
- name: install apache and apache modules
package: name={{ item }} state=latest
with_items:
- "apache2"
- "apache2-utils"
notify:
- "restart apache"
#from /role/myrole/handlers/main.yml
- name: restart apache
service: name=apache2 state=restarted
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
During the first run, when the packages aren't present, I expected the handler to be called. Instead => exception
During the second run, when the packages are present, I expected the handler not to be called. Instead => exception
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
Running zypper
Using module file /home/****/ansible/lib/ansible/modules/extras/packaging/os/zypper.py
<node> ESTABLISH SSH CONNECTION FOR USER: None
<node> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/****/.ansible/cp/ansible-ssh-%h-%p-%r node '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279 `" && echo ansible-tmp-1472731543.66-70288465940279="` echo $HOME/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279 `" ) && sleep 0'"'"''
<node> PUT /tmp/tmpd7stK7 TO /home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/zypper.py
<node> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/****/.ansible/cp/ansible-ssh-%h-%p-%r '[node]'
<node> ESTABLISH SSH CONNECTION FOR USER: None
<node> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/****/.ansible/cp/ansible-ssh-%h-%p-%r node '/bin/sh -c '"'"'chmod u+x /home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/ /home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/zypper.py && sleep 0'"'"''
<node> ESTABLISH SSH CONNECTION FOR USER: None
<node> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/****/.ansible/cp/ansible-ssh-%h-%p-%r -tt node '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=MYSECRET] password: " -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-MYSECRET; /usr/bin/python /home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/zypper.py; rm -rf "/home/****/.ansible/tmp/ansible-tmp-1472731543.66-70288465940279/" > /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
ok: [node] => (item=[u'apache2', u'apache2-utils']) => {
"changed": {},
"cmd": [
"/usr/bin/zypper",
"--quiet",
"--non-interactive",
"--xmlout",
"install",
"--type",
"package",
"--auto-agree-with-licenses",
"--no-recommends",
"--",
"apache2-utils",
"apache2"
],
"invocation": {
"module_args": {
"disable_gpg_check": false,
"disable_recommends": true,
"force": false,
"name": [
"apache2",
"apache2-utils"
],
"oldpackage": false,
"state": "latest",
"type": "package",
"update_cache": false
}
},
"item": [
"apache2",
"apache2-utils"
],
"name": [
"apache2",
"apache2-utils"
],
"rc": 0,
"state": "latest",
"update_cache": false
}
ERROR! Unexpected Exception: unsupported operand type(s) for |=: 'bool' and 'dict'
the full traceback was:
Traceback (most recent call last):
File "/home/****/ansible/bin/ansible-playbook", line 97, in <module>
exit_code = cli.run()
File "/home/****/ansible/lib/ansible/cli/playbook.py", line 154, in run
results = pbex.run()
File "/home/****/ansible/lib/ansible/executor/playbook_executor.py", line 147, in run
result = self._tqm.run(play=play)
File "/home/****/ansible/lib/ansible/executor/task_queue_manager.py", line 281, in run
play_return = strategy.run(iterator, play_context)
File "/home/****/ansible/lib/ansible/plugins/strategy/linear.py", line 269, in run
results += self._wait_on_pending_results(iterator)
File "/home/****/ansible/lib/ansible/plugins/strategy/__init__.py", line 514, in _wait_on_pending_results
results = self._process_pending_results(iterator)
File "/home/****/ansible/lib/ansible/plugins/strategy/__init__.py", line 370, in _process_pending_results
if task_result.is_changed():
File "/home/****/ansible/lib/ansible/executor/task_result.py", line 40, in is_changed
return self._check_key('changed')
File "/home/****/ansible/lib/ansible/executor/task_result.py", line 69, in _check_key
flag |= res.get(key, False)
TypeError: unsupported operand type(s) for |=: 'bool' and 'dict'
AND WITHOUT -vvvv
TASK [apache-proxy : install apache and apache modules] ************************
ok: [sazvl0021.saz.bosch-si.com] => (item=[u'apache2', u'apache2-utils'])
ERROR! Unexpected Exception: unsupported operand type(s) for |=: 'bool' and 'dict'
```
| non_infrastructure | zypper module notify crashes because of changed dict issue type bug report component name zypper notify ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific sles summary the package module calls zypper with a list of packages programs to install the operation completes without errors in my opinion notify executes now and raises an error because package or zypper returned changed instead of changed false steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used the following will raise an exeption even if everything is fine and unchanged example code utils is installed when apache is installed so both is fine from role myrole tasks main yml name install apache and apache modules package name item state latest with items utils notify restart apache frome role myrole handlers main yml name restart apache service name state restarted expected results during the first run when the packages aren t present i expected that the handler is called instead exception during the second run when the packages are present i expectet that the handler isn t called instead exception actual results running zypper using module file home ansible lib ansible modules extras packaging os zypper py establish ssh connection for user none ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp 
ansible ssh h p r node bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home ansible tmp ansible tmp zypper py ssh exec sftp b vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh h p r establish ssh connection for user none ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh h p r node bin sh c chmod u x home ansible tmp ansible tmp home ansible tmp ansible tmp zypper py sleep establish ssh connection for user none ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o connecttimeout o controlpath home ansible cp ansible ssh h p r tt node bin sh c sudo h s p password u root bin sh c echo become success mysecret usr bin python home ansible tmp ansible tmp zypper py rm rf home ansible tmp ansible tmp dev null sleep ok item changed cmd usr bin zypper quiet non interactive xmlout install type package auto agree with licenses no recommends utils invocation module args disable gpg check false disable recommends true force false name utils oldpackage false state latest type package update cache false item utils name utils rc state latest update cache false error unexpected exception unsupported operand type s for bool and dict the full traceback was traceback most recent call last file home ansible bin ansible playbook line in exit code cli run file home ansible lib ansible cli playbook py line in run results pbex run file home ansible lib ansible executor playbook executor 
py line in run result self tqm run play play file home ansible lib ansible executor task queue manager py line in run play return strategy run iterator play context file home ansible lib ansible plugins strategy linear py line in run results self wait on pending results iterator file home ansible lib ansible plugins strategy init py line in wait on pending results results self process pending results iterator file home ansible lib ansible plugins strategy init py line in process pending results if task result is changed file home ansible lib ansible executor task result py line in is changed return self check key changed file home ansible lib ansible executor task result py line in check key flag res get key false typeerror unsupported operand type s for bool and dict and without vvvv task ok item error unexpected exception unsupported operand type s for bool and dict | 0 |
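The TypeError in the traceback above is mechanical: Python refuses `|=` between a bool and a dict, which is exactly what `_check_key` hits when a task result carries `"changed": {}`. A minimal, self-contained sketch (not the actual Ansible code path or fix) that reproduces the crash and shows one defensive coercion:

```python
# Reproduce the crash from the traceback: |= between a bool and a dict.
res = {"changed": {}}   # zypper handed back a dict where a boolean was expected

error = ""
flag = False
try:
    flag |= res.get("changed", False)   # raises TypeError, as in _check_key()
except TypeError as exc:
    error = str(exc)

# Defensive variant: coerce to bool first; an empty dict then counts as "not changed".
flag = False
flag |= bool(res.get("changed", False))
```

With the coercion, a falsy `{}` simply leaves the flag unset instead of crashing the strategy loop.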
12,571 | 20,254,869,737 | IssuesEvent | 2022-02-14 21:53:50 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | opened | Renovate not picking up submodule | type:bug status:requirements priority-5-triage | ### How are you running Renovate?
WhiteSource Renovate hosted app on github.com
### If you're self-hosting Renovate, tell us what version of Renovate you run.
_No response_
### Please select which platform you are using if self-hosting.
github.com
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
I've added
```
"git-submodules": {
"enabled": true
}
```
to my renovate.json, but it's not picking up submodules. I get this error message, but kuberay has a master branch:
```
DEBUG: Dependency third-party/kuberay has unsupported value master
```
`.gitmodules` looks like this:
```
[submodule "third-party/kuberay"]
path = third-party/kuberay
url = https://github.com/ray-project/kuberay.git
```
### Relevant debug logs
```
"git-submodules": [
{
"packageFile": ".gitmodules",
"deps": [
{
"depName": "third-party/kuberay",
"lookupName": "https://github.com/ray-project/kuberay.git",
"currentValue": "master",
"currentDigest": "76c3d76933c9c05f0b919ca469a67688806f5c5f",
"depIndex": 0,
"updates": [
{
"updateType": "digest",
"newValue": "master",
"newDigest": "de21b1f2d360f8c6a5b02dc3df91e4214c5cc802",
"branchName": "renovate/third-party-kuberay-digest"
}
],
"warnings": [],
"versioning": "git"
}
],
"datasource": "git-refs"
}
],
```
### Have you created a minimal reproduction repository?
I have linked to a minimal reproduction repository in the bug description | 1.0 | Renovate not picking up submodule - ### How are you running Renovate?
WhiteSource Renovate hosted app on github.com
### If you're self-hosting Renovate, tell us what version of Renovate you run.
_No response_
### Please select which platform you are using if self-hosting.
github.com
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
I've added
```
"git-submodules": {
"enabled": true
}
```
to my renovate.json, but it's not picking up submodules. I get this error message, but kuberay has a master branch:
```
DEBUG: Dependency third-party/kuberay has unsupported value master
```
`.gitmodules` looks like this:
```
[submodule "third-party/kuberay"]
path = third-party/kuberay
url = https://github.com/ray-project/kuberay.git
```
### Relevant debug logs
```
"git-submodules": [
{
"packageFile": ".gitmodules",
"deps": [
{
"depName": "third-party/kuberay",
"lookupName": "https://github.com/ray-project/kuberay.git",
"currentValue": "master",
"currentDigest": "76c3d76933c9c05f0b919ca469a67688806f5c5f",
"depIndex": 0,
"updates": [
{
"updateType": "digest",
"newValue": "master",
"newDigest": "de21b1f2d360f8c6a5b02dc3df91e4214c5cc802",
"branchName": "renovate/third-party-kuberay-digest"
}
],
"warnings": [],
"versioning": "git"
}
],
"datasource": "git-refs"
}
],
```
### Have you created a minimal reproduction repository?
I have linked to a minimal reproduction repository in the bug description | non_infrastructure | renovate not picking up submodule how are you running renovate whitesource renovate hosted app on github com if you re self hosting renovate tell us what version of renovate you run no response please select which platform you are using if self hosting github com if you re self hosting renovate tell us what version of the platform you run no response was this something which used to work for you and then stopped i never saw this working describe the bug i ve added git submodules enabled true to my renovate json but it s not picking up submodules i get this error message but kuberay has a master branch debug dependency third party kuberay has unsupported value master gitmodules looks like this path third party kuberay url relevant debug logs git submodules packagefile gitmodules deps depname third party kuberay lookupname currentvalue master currentdigest depindex updates updatetype digest newvalue master newdigest branchname renovate third party kuberay digest warnings versioning git datasource git refs have you created a minimal reproduction repository i have linked to a minimal reproduction repository in the bug description | 0 |
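For reference, the `.gitmodules` file quoted above is plain INI, which is why `path` and `url` map one-to-one onto `depName` and `lookupName` in the debug log. A sketch of that mapping using Python's stdlib configparser (illustrative only — Renovate's own git-submodules manager parses this differently):

```python
import configparser
from io import StringIO

# Same shape as the .gitmodules in the report (indentation is cosmetic in INI).
gitmodules = """\
[submodule "third-party/kuberay"]
    path = third-party/kuberay
    url = https://github.com/ray-project/kuberay.git
"""

parser = configparser.ConfigParser()
parser.read_file(StringIO(gitmodules))

section = 'submodule "third-party/kuberay"'
path = parser[section]["path"]   # becomes depName in the debug log
url = parser[section]["url"]     # becomes lookupName in the debug log
```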
32,038 | 26,373,283,093 | IssuesEvent | 2023-01-11 22:49:37 | iree-org/iree | https://api.github.com/repos/iree-org/iree | closed | Make IREE dev assets installable | enhancement ➕ infrastructure backlog | I've been asked to work on Rust bindings for the IREE runtime api, and for now the work should be in a separate repository building against an "installed" build of IREE.
I tried `cmake --install iree` after building, but it fails. Is this supposed to work or am I doing something incorrectly?
| 1.0 | Make IREE dev assets installable - I've been asked to work on Rust bindings for the IREE runtime api, and for now the work should be in a separate repository building against an "installed" build of IREE.
I tried `cmake --install iree` after building, but it fails. Is this supposed to work or am I doing something incorrectly?
| infrastructure | make iree dev assets installable i ve been asked to work on rust bindings for the iree runtime api and for now the work should be in a separate repository building against an installed build of iree i tried cmake install iree after building but it fails is this supposed to work or am i doing something incorrectly | 1 |
57,339 | 3,081,508,586 | IssuesEvent | 2015-08-22 20:01:52 | manpaz/python-escpos | https://api.github.com/repos/manpaz/python-escpos | closed | New to printing | auto-migrated Priority-Medium Type-Other | ```
I'll preface this by saying that I'm quite new at this and I do not think there
is an issue with the module. Any help would be amazing! :)
I've isolated the error to the escpos.text() function. Here is my sample code:
from escpos import *
""" Seiko Epson Corp. Receipt Printer M129 Definitions (EPSON TM-T88IV) """
Epson = printer.Usb(0x04b8,0x0202)
print("print before text")
Epson.text("Hello World")
print("Print after text")
Epson.cut()
In my console, I see that "print before text" is printing out, but "print after
text" is not. Would you happen to have a sample application I might be able to
try out? or any recommendations?
I am using an Epson TM-T88iv. Here is the output from lsusb:
Bus 001 Device 043: ID 04b8:0202 Seiko Epson Corp. Receipt Printer M129C
Thanks!
Joseph
```
Original issue reported on code.google.com by `joseph.t...@gmail.com` on 23 Mar 2014 at 2:30 | 1.0 | New to printing - ```
I'll preface this by saying that I'm quite new at this and I do not think there
is an issue with the module. Any help would be amazing! :)
I've isolated the error to the escpos.text() function. Here is my sample code:
from escpos import *
""" Seiko Epson Corp. Receipt Printer M129 Definitions (EPSON TM-T88IV) """
Epson = printer.Usb(0x04b8,0x0202)
print("print before text")
Epson.text("Hello World")
print("Print after text")
Epson.cut()
In my console, I see that "print before text" is printing out, but "print after
text" is not. Would you happen to have a sample application I might be able to
try out? or any recommendations?
I am using an Epson TM-T88iv. Here is the output from lsusb:
Bus 001 Device 043: ID 04b8:0202 Seiko Epson Corp. Receipt Printer M129C
Thanks!
Joseph
```
Original issue reported on code.google.com by `joseph.t...@gmail.com` on 23 Mar 2014 at 2:30 | non_infrastructure | new to printing i ll preface this by saying that i m quite new at this and i do not think there is an issue with the module any help would be amazing i ve isolate the error to the escpos text function here is my sample code from escpos import seiko epson corp receipt printer definitions epson tm epson printer usb print print before text epson text hello world print print after text epson cut in my console i see that print before text is printing out but print after text is not would you happen to have a sample application i might be able to try out or any recommendations i am using an epson tm here is the output from lsusb bus device id seiko epson corp receipt printer thanks joseph original issue reported on code google com by joseph t gmail com on mar at | 0 |
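python-escpos ultimately writes raw ESC/POS bytes to the USB device, and one common gotcha with `text()` is that many receipt printers only flush a line once they see a newline. A hardware-free sketch of the byte stream the calls above roughly correspond to — the `GS V 0` cut value follows the common ESC/POS convention and should be verified against the Epson command reference:

```python
# Build the raw bytes roughly corresponding to text("Hello World") + cut().
# Printers often buffer a partial line, so text without "\n" may sit unprinted.
def escpos_bytes(text: str) -> bytes:
    payload = text.encode("ascii")
    if not payload.endswith(b"\n"):
        payload += b"\n"          # ensure the line buffer is flushed
    full_cut = b"\x1d\x56\x00"    # GS V 0: conventional full-cut command
    return payload + full_cut

stream = escpos_bytes("Hello World")
```

Inspecting the stream this way separates "wrong bytes" problems from USB permission or endpoint problems.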
29,174 | 5,575,373,706 | IssuesEvent | 2017-03-28 01:40:06 | prettydiff/prettydiff | https://api.github.com/repos/prettydiff/prettydiff | closed | Missing support for Java diamond | Defect Parsing Pending Release | ```
var a;
OrderedPair<String, Integer> p1 = new OrderedPair<>("Even", 8);
OrderedPair<String, String> p2 = new OrderedPair<>("hello", "world");
OrderedPair<String, Box<Integer>> p = new OrderedPair<>("primes", new Box<Integer>(...));
``` | 1.0 | Missing support for Java diamond - ```
var a;
OrderedPair<String, Integer> p1 = new OrderedPair<>("Even", 8);
OrderedPair<String, String> p2 = new OrderedPair<>("hello", "world");
OrderedPair<String, Box<Integer>> p = new OrderedPair<>("primes", new Box<Integer>(...));
``` | non_infrastructure | missing support for java diamond var a orderedpair new orderedpair even orderedpair new orderedpair hello world orderedpair p new orderedpair primes new box | 0 |
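The `<>` in `new OrderedPair<>(...)` is Java 7's diamond operator: an empty type-argument list whose types are inferred from the left-hand side. A parser that only accepts `<T, U>` with at least one type name will choke on it. A small sketch of the distinction (a hypothetical helper, not Pretty Diff's actual parser):

```python
import re

# "new Foo<>(" is a diamond instantiation: an empty type-argument list.
DIAMOND = re.compile(r"new\s+\w+\s*<>\s*\(")

line = 'OrderedPair<String, Integer> p1 = new OrderedPair<>("Even", 8);'
has_diamond = bool(DIAMOND.search(line))
```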
329,880 | 28,312,458,396 | IssuesEvent | 2023-04-10 16:36:40 | symbol/symbol | https://api.github.com/repos/symbol/symbol | opened | [client/catapult] HappyBlockchainIntegrityTests hangs | Change: Test Status: WIP Type: Bug catapult | ### System Information
- **Client Version:** any
- **Operating System:** Windows | Linux
- **Installed With:** Bootstrap | Binaries | From Source
- **VPS Provider:** AllNodes | Stir Network | Vultr | AWS | Other | None
- **Are you fully synchronized?** No | Yes
- **Did you try to restart your node?** No | Yes
#### Description
HappyBlockchainIntegrityTests hangs occasionally during test runs in Jenkins. Not able to reproduce locally.
Disabling the test for now until I have time to debug it.
#### Steps to Reproduce
Currently unable to repro locally
#### Logs
Jenkins logs
```
0:53:04 test_1 | [*] <find_glob> . tsanlog* False
20:53:04 test_1 | [*] <find_glob> . ubsanlog* False
20:53:04 test_1 | [*] /usr/catapult/tests/tests.catapult.int.node.stress
20:53:04 test_1 | --gtest_output=xml:/catapult-data/logs/tests.catapult.int.node.stress.xml
20:53:04 test_1 | /usr/catapult/tests/../lib
20:53:04 test_1 | [==========] Running 70 tests from 7 test suites.
20:53:04 test_1 | [----------] Global test environment set-up.
20:53:04 test_1 | [----------] 8 tests from HappyBlockchainIntegrityTests
23:40:46 Cancelling nested steps due to timeout
23:40:46 Sending interrupt signal to process
23:40:48 Stopping catapult-client-build-catapult-project_test_1 ...
23:40:48 Stopping catapult-client-build-catapult-project_db_1 ...
23:40:51 Terminated
23:40:51 script returned exit code 143
```
https://jenkins.symboldev.com/job/Symbol/job/server-pipelines/job/catapult-client-build-catapult-project/7423/console | 1.0 | [client/catapult] HappyBlockchainIntegrityTests hangs - ### System Information
- **Client Version:** any
- **Operating System:** Windows | Linux
- **Installed With:** Bootstrap | Binaries | From Source
- **VPS Provider:** AllNodes | Stir Network | Vultr | AWS | Other | None
- **Are you fully synchronized?** No | Yes
- **Did you try to restart your node?** No | Yes
#### Description
HappyBlockchainIntegrityTests hangs occasionally during test runs in Jenkins. Not able to reproduce locally.
Disabling the test for now until I have time to debug it.
#### Steps to Reproduce
Currently unable to repro locally
#### Logs
Jenkins logs
```
0:53:04 test_1 | [*] <find_glob> . tsanlog* False
20:53:04 test_1 | [*] <find_glob> . ubsanlog* False
20:53:04 test_1 | [*] /usr/catapult/tests/tests.catapult.int.node.stress
20:53:04 test_1 | --gtest_output=xml:/catapult-data/logs/tests.catapult.int.node.stress.xml
20:53:04 test_1 | /usr/catapult/tests/../lib
20:53:04 test_1 | [==========] Running 70 tests from 7 test suites.
20:53:04 test_1 | [----------] Global test environment set-up.
20:53:04 test_1 | [----------] 8 tests from HappyBlockchainIntegrityTests
23:40:46 Cancelling nested steps due to timeout
23:40:46 Sending interrupt signal to process
23:40:48 Stopping catapult-client-build-catapult-project_test_1 ...
23:40:48 Stopping catapult-client-build-catapult-project_db_1 ...
23:40:51 Terminated
23:40:51 script returned exit code 143
```
https://jenkins.symboldev.com/job/Symbol/job/server-pipelines/job/catapult-client-build-catapult-project/7423/console | non_infrastructure | happyblockchainintegritytests hangs system information client version any operating system windows linux installed with bootstrap binaries from source vps provider allnodes stir network vultr aws other none are you fully synchronized no yes did you try to restart your node no yes description happyblockchainintegritytests hangs occasionally during test runs in jenkins not able to reproduce locally disabling the test for now until i have time to debug it steps to reproduce currently unable to repro locally logs jenkins logs test tsanlog false test ubsanlog false test usr catapult tests tests catapult int node stress test gtest output xml catapult data logs tests catapult int node stress xml test usr catapult tests lib test running tests from test suites test global test environment set up test tests from happyblockchainintegritytests cancelling nested steps due to timeout sending interrupt signal to process stopping catapult client build catapult project test stopping catapult client build catapult project db terminated script returned exit code | 0 |
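While the hang itself is being debugged, wrapping the test binary in a hard per-run timeout at least turns a silent Jenkins stall into a diagnosable failure. A sketch using Python's subprocess timeout (the Jenkins pipeline's own timeout mechanism is separate from this):

```python
import subprocess

def run_with_timeout(cmd, seconds):
    """Run cmd, killing it if it exceeds the deadline.

    Returns (timed_out, returncode); returncode is None when the process was killed.
    """
    try:
        proc = subprocess.run(cmd, timeout=seconds)
        return False, proc.returncode
    except subprocess.TimeoutExpired:
        return True, None

# "sleep 2" stands in for the hanging gtest binary.
timed_out, rc = run_with_timeout(["sleep", "2"], seconds=0.2)
ok_timed_out, ok_rc = run_with_timeout(["true"], seconds=5)
```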
198,503 | 6,973,704,203 | IssuesEvent | 2017-12-11 21:28:37 | ampproject/amphtml | https://api.github.com/repos/ampproject/amphtml | closed | Support ${IntersectRect} and ${intersectRatio} substitution | P1: High Priority | decided to divide #11987 into two issues.
Request for more information from the visible event: `${intersectionRect}` `${intersectionRatio}`
We can add ${intersectionRect} and ${intersectionRatio} info to each visible event. Note that only the info from the last InOb entry will be stored and substituted.
The `${IntersectionRect}` and `${intersectionRatio}` variables come from the latest intersectEntry and may not always make sense. It's the vendor/publisher's responsibility to use them correctly.
`${intersectionRect}` value will be JSON.stringify(intersectionRect)
`${intersectionRatio}` value will be the actual value calculated by inOb, not the aligned value.
| 1.0 | Support ${IntersectRect} and ${intersectRatio} substitution - decided to divide #11987 into two issues.
Request for more information from the visible event: `${intersectionRect}` `${intersectionRatio}`
We can add ${intersectionRect} and ${intersectionRatio} info to each visible event. Note that only the info from the last InOb entry will be stored and substituted.
The `${IntersectionRect}` and `${intersectionRatio}` variables come from the latest intersectEntry and may not always make sense. It's the vendor/publisher's responsibility to use them correctly.
`${intersectionRect}` value will be JSON.stringify(intersectionRect)
`${intersectionRatio}` value will be the actual value calculated by inOb, not the aligned value.
| non_infrastructure | support intersectrect and intersectratio substitution decided to divide to two issues request to more information from the visible event intersectionrect intersectionratio we can add intersectionrect and intersectionratio info from each visible event note that only the info from last inob entry will be stored and substituted the intersectionrect and intersectionratio variable come from latest intersectentry and may not always make sense it s vendor publisher responsibility to use them correctly intersectionrect value will be json stringify intersectionrect intersectionratio value will be the actually value calculated by inob not the aligned value | 0 |
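The substitution semantics described above — `${intersectionRect}` expands to the JSON-serialized rect, `${intersectionRatio}` to the raw ratio — can be sketched as a simple string replacement. This is illustrative only, not amp-analytics' actual variable expander:

```python
import json

def expand(url_template: str, entry: dict) -> str:
    # JSON.stringify(intersectionRect) roughly == json.dumps with compact separators
    rect_json = json.dumps(entry["intersectionRect"], separators=(",", ":"))
    ratio = str(entry["intersectionRatio"])   # raw inOb ratio, not an aligned value
    return (url_template
            .replace("${intersectionRect}", rect_json)
            .replace("${intersectionRatio}", ratio))

entry = {"intersectionRect": {"top": 0, "left": 0, "width": 300, "height": 250},
         "intersectionRatio": 0.5}
pinged = expand("https://example.com/p?rect=${intersectionRect}&ratio=${intersectionRatio}", entry)
```

A real ping would also percent-encode the expanded values; that step is omitted here.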
31,869 | 26,208,117,340 | IssuesEvent | 2023-01-04 01:55:13 | briangormanly/agora | https://api.github.com/repos/briangormanly/agora | closed | Email server has to be configured on new server | Infrastructure Review Launch | Email needs to be configured to relay on the server.
And zoho needs to be set up for freeagora.com or .org?
| 1.0 | Email server has to be configured on new server - Email needs to be configured to relay on the server.
And zoho needs to be set up for freeagora.com or .org?
| infrastructure | email server has to be configured on new server email needs to be configured to relay on the server and zoho needs to be set up for freeagora com or org | 1 |
10,394 | 2,622,150,021 | IssuesEvent | 2015-03-04 00:05:45 | byzhang/lh-vim | https://api.github.com/repos/byzhang/lh-vim | closed | lh-map-tools: Customizable Surround() behavior | auto-migrated Priority-Medium Type-Defect | ```
I often like this kind of behavior:
Select a few lines in visual mode, hit '{', which surrounds the code
with braces and also adds two newlines (one before and one after the
selected lines). Additionally, the selected lines should be shifted to
the right (as if you would do '>' in visual mode). And possibly put
the cursor before the opening '{' :)
```
Original issue reported on code.google.com by `ngaloppo@gmail.com` on 26 Feb 2008 at 8:56 | 1.0 | lh-map-tools: Customizable Surround() behavior - ```
I often like this kind of behavior:
Select a few lines in visual mode, hit '{', which surrounds the code
with braces and also adds two newlines (one before and one after the
selected lines). Additionally, the selected lines should be shifted to
the right (as if you would do '>' in visual mode). And possibly put
the cursor before the opening '{' :)
```
Original issue reported on code.google.com by `ngaloppo@gmail.com` on 26 Feb 2008 at 8:56 | non_infrastructure | lh map tools customizable surround behavior i often like this kind of behavior select a few lines in visual mode hit which surrounds the code with braces but also two newlines one before and one after the selected lines additionally the selected lines should be shifted to the right as if you would do in visual mode and possibly put the cursor before the opening original issue reported on code google com by ngaloppo gmail com on feb at | 0 |
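The requested behavior — wrap the selected lines in braces on their own lines and shift the block right, as `>` would — is easy to state precisely in code. A sketch, assuming one indent level equals a 'shiftwidth' of 4 spaces:

```python
def surround_with_braces(lines, shiftwidth=4):
    """Wrap lines in { ... } and indent the body one level, as the request describes."""
    pad = " " * shiftwidth
    body = [pad + line for line in lines]
    return ["{"] + body + ["}"]

result = surround_with_braces(["int a = 1;", "int b = 2;"])
```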
2,463 | 3,691,049,532 | IssuesEvent | 2016-02-25 22:19:45 | MozillaFoundation/plan | https://api.github.com/repos/MozillaFoundation/plan | opened | Complete set-up for Mozilla Learning Discourse instance | betterfaster MLN Infrastructure Needs Dev p2 | #### DARCI
D:
A: @simonwex
R: @simonwex @ldecoursy
C: Abigail
I: @chrislarry33, others who oversee MoFo Discourse instances
#### Overview
We re-launched the Mozilla Learning Discourse instance a few months ago, however a few pieces of work remain, and we have some new guidance from legal that requires a few minor dev adjustments.
[ ] Add footer with links to TOU and Privacy Policy
[ ] Move to new domain (currently on Webmaker)
[ ] Update s3 server configuration (notification in admin settings)
[ ] Need help answering a few q's from legal (see link below)
#### Links
* Ticket to add footer with links to TOU and Privacy Policy https://github.com/MozillaFoundation/mofo-devops/issues/255
* Ticket to move off of webmaker domain https://github.com/MozillaFoundation/mofo-devops/issues/253
* Notes from legal https://docs.google.com/document/d/1wBlEpJUMU36_Snebcmx4Gr7X3iuPk9pn0xEr3AOICoE/edit
| 1.0 | Complete set-up for Mozilla Learning Discourse instance - #### DARCI
D:
A: @simonwex
R: @simonwex @ldecoursy
C: Abigail
I: @chrislarry33, others who oversee MoFo Discourse instances
#### Overview
We re-launched the Mozilla Learning Discourse instance a few months ago; however, a few pieces of work remain, and we have some new guidance from legal that requires a few minor dev adjustments.
[ ] Add footer with links to TOU and Privacy Policy
[ ] Move to new domain (currently on Webmaker)
[ ] Update s3 server configuration (notification in admin settings)
[ ] Need help answering a few q's from legal (see link below)
#### Links
* Ticket to add footer with links to TOU and Privacy Policy https://github.com/MozillaFoundation/mofo-devops/issues/255
* Ticket to move off of webmaker domain https://github.com/MozillaFoundation/mofo-devops/issues/253
* Notes from legal https://docs.google.com/document/d/1wBlEpJUMU36_Snebcmx4Gr7X3iuPk9pn0xEr3AOICoE/edit
| infrastructure | complete set up for mozilla learning discourse instance darci d a simonwex r simonwex ldecoursy c abigail i others who oversee mofo discourse instances overview we re launched the mozilla learning discourse instance a few months ago however a few pieces of work remain and we have some new guidance from legal that requires a few minor dev adjustments add footer with links to tou and privacy policy move to new domain currently on webmaker update server configuration notification in admin settings need help answering a few q s from legal see link below links ticket to add footer with links to tou and privacy policy ticket to move off of webmaker domain notes from legal | 1 |
153,706 | 24,174,280,637 | IssuesEvent | 2022-09-22 22:43:41 | etsy/open-api | https://api.github.com/repos/etsy/open-api | closed | [ENDPOINT] Proposed design change for <uploadListingImage> | API Design | **Current Endpoint Design**
I'm using "**uploadListingImage**" (https://openapi.etsy.com/v3/application/shops/{shop_id}/listings/{listing_id}/images) API for uploading new images to Listing and its working as expected.
**Proposed Endpoint Design Change**
Please add the same feature for Videos also or if there's any API for that, please let me know. I tried the "**uploadListingFile**" API, but it's applicable for Digital products.
**Why are you proposing this change?**
Along with **Images**, I also want to **upload Videos** to my listing via API.
| 1.0 | [ENDPOINT] Proposed design change for <uploadListingImage> - **Current Endpoint Design**
I'm using "**uploadListingImage**" (https://openapi.etsy.com/v3/application/shops/{shop_id}/listings/{listing_id}/images) API for uploading new images to Listing and its working as expected.
**Proposed Endpoint Design Change**
Please add the same feature for Videos also or if there's any API for that, please let me know. I tried the "**uploadListingFile**" API, but it's applicable for Digital products.
**Why are you proposing this change?**
Along with **Images**, I also want to **upload Videos** to my listing via API.
| non_infrastructure | proposed design change for current endpoint design i m using uploadlistingimage api for uploading new images to listing and its working as expected proposed endpoint design change please add the same feature for videos also or if there s any api for that please let me know i tried the uploadlistingfile api but it s applicable for digital products why are you proposing this change along with images i also want to upload videos to my listing via api | 0 |
330,127 | 28,350,539,769 | IssuesEvent | 2023-04-12 02:00:21 | ymart1n/paper-reading | https://api.github.com/repos/ymart1n/paper-reading | closed | Automated Test-Case Generation for Solidity Smart Contracts: the AGSolT Approach and its Evaluation | area/blockchain area/testing status/done | Paper: https://arxiv.org/abs/2102.08864
Code: https://github.com/AGSolT/SolAR
Terminology:
- **Oracle**: In the context of software testing and automated test case generation, an oracle refers to a mechanism or a source of truth that can determine whether the output produced by the system under test is correct or not. An oracle can be a human expert, an existing system with known correct behavior, a set of specifications or requirements, or any other mechanism that can determine whether the output produced by the system is correct or not. The use of an oracle is crucial in automated test case generation, as it allows the generated test cases to be evaluated against a standard of correctness and helps ensure the reliability and correctness of the system being tested.
- **Pareto front**: A Pareto front is a set of solutions in multi-objective optimization that are not dominated by any other solution in terms of all objectives being optimized. In other words, it represents the trade-off between multiple objectives where improving one objective can only be done by sacrificing the performance of another objective. | 1.0 | Automated Test-Case Generation for Solidity Smart Contracts: the AGSolT Approach and its Evaluation - Paper: https://arxiv.org/abs/2102.08864
Code: https://github.com/AGSolT/SolAR
Terminology:
- **Oracle**: In the context of software testing and automated test case generation, an oracle refers to a mechanism or a source of truth that can determine whether the output produced by the system under test is correct or not. An oracle can be a human expert, an existing system with known correct behavior, a set of specifications or requirements, or any other mechanism that can determine whether the output produced by the system is correct or not. The use of an oracle is crucial in automated test case generation, as it allows the generated test cases to be evaluated against a standard of correctness and helps ensure the reliability and correctness of the system being tested.
- **Pareto front**: A Pareto front is a set of solutions in multi-objective optimization that are not dominated by any other solution in terms of all objectives being optimized. In other words, it represents the trade-off between multiple objectives where improving one objective can only be done by sacrificing the performance of another objective. | non_infrastructure | automated test case generation for solidity smart contracts the agsolt approach and its evaluation paper code terminology oracle in the context of software testing and automated test case generation an oracle refers to a mechanism or a source of truth that can determine whether the output produced by the system under test is correct or not an oracle can be a human expert an existing system with known correct behavior a set of specifications or requirements or any other mechanism that can determine whether the output produced by the system is correct or not the use of an oracle is crucial in automated test case generation as it allows the generated test cases to be evaluated against a standard of correctness and helps ensure the reliability and correctness of the system being tested pareto front a pareto front is a set of solutions in multi objective optimization that are not dominated by any other solution in terms of all objectives being optimized in other words it represents the trade off between multiple objectives where improving one objective can only be done by sacrificing the performance of another objective | 0 |
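The Pareto-front definition above can be made concrete with a small sketch (maximisation convention; the function names and point values are illustrative, not taken from the paper):

```python
def dominates(a, b):
    # a dominates b if it is at least as good on every objective and
    # strictly better on at least one (maximisation convention).
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))


def pareto_front(points):
    # Keep exactly the points that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]


# (1, 1) and (0, 0) are dominated by (1, 2); (1, 2) and (2, 1) trade off.
front = pareto_front([(1, 2), (2, 1), (0, 0), (1, 1)])
```

The surviving points are the trade-off set the definition describes: improving one coordinate of a front member requires giving up the other.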
159,398 | 12,474,957,042 | IssuesEvent | 2020-05-29 10:37:00 | aliasrobotics/RVD | https://api.github.com/repos/aliasrobotics/RVD | closed | RVD#2021: Use of possibly insecure function - consider using safer ast., /opt/ros_noetic_ws/src/genpy/test/test_genpy_generator.py:177 | bandit bug static analysis testing triage | ```yaml
{
"id": 2021,
"title": "RVD#2021: Use of possibly insecure function - consider using safer ast., /opt/ros_noetic_ws/src/genpy/test/test_genpy_generator.py:177",
"type": "bug",
"description": "HIGH confidence of MEDIUM severity bug. Use of possibly insecure function - consider using safer ast.literal_eval. at /opt/ros_noetic_ws/src/genpy/test/test_genpy_generator.py:177 See links for more info on the bug.",
"cwe": "None",
"cve": "None",
"keywords": [
"bandit",
"bug",
"static analysis",
"testing",
"triage",
"bug"
],
"system": "",
"vendor": null,
"severity": {
"rvss-score": 0,
"rvss-vector": "",
"severity-description": "",
"cvss-score": 0,
"cvss-vector": ""
},
"links": [
"https://github.com/aliasrobotics/RVD/issues/2021",
"https://bandit.readthedocs.io/en/latest/blacklists/blacklist_calls.html#b307-eval"
],
"flaw": {
"phase": "testing",
"specificity": "subject-specific",
"architectural-location": "application-specific",
"application": "N/A",
"subsystem": "N/A",
"package": "N/A",
"languages": "None",
"date-detected": "2020-05-29 (09:15)",
"detected-by": "Alias Robotics",
"detected-by-method": "testing static",
"date-reported": "2020-05-29 (09:15)",
"reported-by": "Alias Robotics",
"reported-by-relationship": "automatic",
"issue": "https://github.com/aliasrobotics/RVD/issues/2021",
"reproducibility": "always",
"trace": "/opt/ros_noetic_ws/src/genpy/test/test_genpy_generator.py:177",
"reproduction": "See artifacts below (if available)",
"reproduction-image": ""
},
"exploitation": {
"description": "",
"exploitation-image": "",
"exploitation-vector": ""
},
"mitigation": {
"description": "",
"pull-request": "",
"date-mitigation": ""
}
}
``` | 1.0 | RVD#2021: Use of possibly insecure function - consider using safer ast., /opt/ros_noetic_ws/src/genpy/test/test_genpy_generator.py:177 - ```yaml
{
"id": 2021,
"title": "RVD#2021: Use of possibly insecure function - consider using safer ast., /opt/ros_noetic_ws/src/genpy/test/test_genpy_generator.py:177",
"type": "bug",
"description": "HIGH confidence of MEDIUM severity bug. Use of possibly insecure function - consider using safer ast.literal_eval. at /opt/ros_noetic_ws/src/genpy/test/test_genpy_generator.py:177 See links for more info on the bug.",
"cwe": "None",
"cve": "None",
"keywords": [
"bandit",
"bug",
"static analysis",
"testing",
"triage",
"bug"
],
"system": "",
"vendor": null,
"severity": {
"rvss-score": 0,
"rvss-vector": "",
"severity-description": "",
"cvss-score": 0,
"cvss-vector": ""
},
"links": [
"https://github.com/aliasrobotics/RVD/issues/2021",
"https://bandit.readthedocs.io/en/latest/blacklists/blacklist_calls.html#b307-eval"
],
"flaw": {
"phase": "testing",
"specificity": "subject-specific",
"architectural-location": "application-specific",
"application": "N/A",
"subsystem": "N/A",
"package": "N/A",
"languages": "None",
"date-detected": "2020-05-29 (09:15)",
"detected-by": "Alias Robotics",
"detected-by-method": "testing static",
"date-reported": "2020-05-29 (09:15)",
"reported-by": "Alias Robotics",
"reported-by-relationship": "automatic",
"issue": "https://github.com/aliasrobotics/RVD/issues/2021",
"reproducibility": "always",
"trace": "/opt/ros_noetic_ws/src/genpy/test/test_genpy_generator.py:177",
"reproduction": "See artifacts below (if available)",
"reproduction-image": ""
},
"exploitation": {
"description": "",
"exploitation-image": "",
"exploitation-vector": ""
},
"mitigation": {
"description": "",
"pull-request": "",
"date-mitigation": ""
}
}
``` | non_infrastructure | rvd use of possibly insecure function consider using safer ast opt ros noetic ws src genpy test test genpy generator py yaml id title rvd use of possibly insecure function consider using safer ast opt ros noetic ws src genpy test test genpy generator py type bug description high confidence of medium severity bug use of possibly insecure function consider using safer ast literal eval at opt ros noetic ws src genpy test test genpy generator py see links for more info on the bug cwe none cve none keywords bandit bug static analysis testing triage bug system vendor null severity rvss score rvss vector severity description cvss score cvss vector links flaw phase testing specificity subject specific architectural location application specific application n a subsystem n a package n a languages none date detected detected by alias robotics detected by method testing static date reported reported by alias robotics reported by relationship automatic issue reproducibility always trace opt ros noetic ws src genpy test test genpy generator py reproduction see artifacts below if available reproduction image exploitation description exploitation image exploitation vector mitigation description pull request date mitigation | 0 |
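The bandit finding above (B307) flags Python's `eval()` on data that may be untrusted; a minimal sketch of the `ast.literal_eval` replacement the report suggests — the sample inputs are illustrative, not taken from the genpy test file:

```python
import ast

untrusted = "{'op': 'add', 'args': [1, 2]}"

# literal_eval parses only Python literals (numbers, strings, tuples,
# lists, dicts, sets, booleans, None), so it cannot execute code.
parsed = ast.literal_eval(untrusted)


def safe_parse(text):
    try:
        return ast.literal_eval(text)
    except (ValueError, SyntaxError):
        return None  # reject anything that is not a pure literal


# A function call such as __import__('os') is rejected rather than run.
rejected = safe_parse("__import__('os').getcwd()")
```

Unlike `eval()`, the rejected input above never executes; it fails during parsing of the literal structure.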
20,077 | 13,648,821,593 | IssuesEvent | 2020-09-26 11:19:01 | mahalde/plahner-backend | https://api.github.com/repos/mahalde/plahner-backend | closed | Infrastructure | Create initial deployment pipeline | feature infrastructure | <!-- Describe your new feature in a coherent text -->
## **Business Value**
A pipeline should be created to automatically build, test, and deploy the backend.
<!-- List all acceptance criteria which would fulfill your feature request -->
## **Acceptance Critera**
- [x] Create a new pipeline with GitHub Actions
- [x] Pipeline should be run as a manual trigger
- [x] Pipeline should build the backend
- [x] Pipeline should test the backend
- [x] Pipeline should run a sonar-scan on the backend
- [x] Pipeline should merge DEV to master
- [x] Pipeline should tag the branch
- [x] Pipeline should build a Docker Container
- [x] Pipeline should push the Docker Container to the registry
- [x] Pipeline should push the Docker Container to the server and run it
| 1.0 | Infrastructure | Create initial deployment pipeline - <!-- Describe your new feature in a coherent text -->
## **Business Value**
A pipeline should be created to automatically build, test, and deploy the backend.
<!-- List all acceptance criteria which would fulfill your feature request -->
## **Acceptance Critera**
- [x] Create a new pipeline with GitHub Actions
- [x] Pipeline should be run as a manual trigger
- [x] Pipeline should build the backend
- [x] Pipeline should test the backend
- [x] Pipeline should run a sonar-scan on the backend
- [x] Pipeline should merge DEV to master
- [x] Pipeline should tag the branch
- [x] Pipeline should build a Docker Container
- [x] Pipeline should push the Docker Container to the registry
- [x] Pipeline should push the Docker Container to the server and run it
| infrastructure | infrastructure create initial deployment pipeline business value a pipeline should be created to automatically build test and deploy the backend acceptance critera create a new pipeline with github actions pipeline should be run as a manual trigger pipeline should build the backend pipeline should test the backend pipeline should run a sonar scan on the backend pipeline should merge dev to master pipeline should tag the branch pipeline should build a docker container pipeline should push the docker container to the registry pipeline should push the docker container to the server and run it | 1 |
32,602 | 26,822,795,476 | IssuesEvent | 2023-02-02 10:39:10 | onebeyond/admin | https://api.github.com/repos/onebeyond/admin | closed | Onebeyond NPM ORG Access | infrastructure | @inigomarquinez can you add [guidesmiths_bot](https://www.npmjs.com/~guidesmiths_bot) and [ulisesgascon](https://www.npmjs.com/~ulisesgascon) as owners of the new NPM org? [Doc related here](https://docs.npmjs.com/adding-members-to-your-organization)
| 1.0 | Onebeyond NPM ORG Access - @inigomarquinez can you add [guidesmiths_bot](https://www.npmjs.com/~guidesmiths_bot) and [ulisesgascon](https://www.npmjs.com/~ulisesgascon) as owners of the new NPM org? [Doc related here](https://docs.npmjs.com/adding-members-to-your-organization)
| infrastructure | onebeyond npm org access inigomarquinez can you add and as owners of the new npm org | 1 |
79,581 | 22,824,752,629 | IssuesEvent | 2022-07-12 07:34:23 | MRtrix3/mrtrix3 | https://api.github.com/repos/MRtrix3/mrtrix3 | opened | ./build: progress indication misbehaves on a system with many cores | bug build scripts | On a system with 128 "cores" (Ubuntu 20.04 LTS, 2 x AMD EPYC 7452 with hyper-threading), I noticed the total number of compilation jobs seems to change between threads:
```
(386/391) [LB] bin/warpinit
(384/396) [LB] bin/warpcorrect
(379/392) [LB] bin/tsfinfo
(380/391) [LB] bin/mrdegibbs
(382/392) [LB] bin/amp2response
(392/391) [LB] bin/dwidenoise
(393/396) [LB] bin/tckmap
(394/392) [LB] bin/mrmetric
(395/391) [LB] bin/mraverageheader
(396/396) [LB] bin/mrmath
(397/397) [LB] bin/mrtransform
``` | 1.0 | ./build: progress indication misbehaves on a system with many cores - On a system with 128 "cores" (Ubuntu 20.04 LTS, 2 x AMD EPYC 7452 with hyper-threading), I noticed the total number of compilation jobs seems to change between threads:
```
(386/391) [LB] bin/warpinit
(384/396) [LB] bin/warpcorrect
(379/392) [LB] bin/tsfinfo
(380/391) [LB] bin/mrdegibbs
(382/392) [LB] bin/amp2response
(392/391) [LB] bin/dwidenoise
(393/396) [LB] bin/tckmap
(394/392) [LB] bin/mrmetric
(395/391) [LB] bin/mraverageheader
(396/396) [LB] bin/mrmath
(397/397) [LB] bin/mrtransform
``` | non_infrastructure | build progress indication misbehaves on a system with many cores on a system with cores ubuntu lts x amd epyc with hyper threading i noticed the total number of compilation jobs seems to change between threads bin warpinit bin warpcorrect bin tsfinfo bin mrdegibbs bin bin dwidenoise bin tckmap bin mrmetric bin mraverageheader bin mrmath bin mrtransform | 0 |
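One plausible cause of output such as `(392/391)` above is the total being read while another thread is still adding jobs. A hedged Python sketch — not the actual `./build` implementation — of keeping the counter and the total consistent under a single lock:

```python
import threading


class Progress:
    """Progress counter that is safe to share between worker threads."""

    def __init__(self):
        self._lock = threading.Lock()
        self.done = 0
        self.total = 0

    def add_jobs(self, n):
        with self._lock:
            self.total += n

    def finish_one(self, name):
        # Reading `total` under the same lock that guards its updates means
        # a snapshot with done > total, like "(392/391)", cannot be printed.
        with self._lock:
            self.done += 1
            return f"({self.done}/{self.total}) [LB] {name}"


progress = Progress()
progress.add_jobs(2)
line = progress.finish_one("bin/mrmath")
```

Whether the real build script races exactly like this is an assumption; the sketch only shows the consistent-snapshot fix.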
429,413 | 12,424,164,069 | IssuesEvent | 2020-05-24 10:08:21 | asoomar/salad-bowl-app | https://api.github.com/repos/asoomar/salad-bowl-app | closed | [1.2] Create CLI for switching between prod and dev | Medium Priority enhancement | The CLI will make it easier to create production builds because it will ensure that the credentials for firebase are swapped with the correct ones. Ideally, this will allow us to switch between production and development mode. Finally there will be another command to see what mode we are currently in. When switching, this will set the firebase configuration and plist file in app.json to the correct version, and it will set the DEV variable in the mode.js file to the appropriate value.
### Commands
- **nyx switch** prod | dev
- This will switch the mode to prod or dev respectively if it currently is not in that mode
- **nyx mode**
- This will state which mode you are currently in | 1.0 | [1.2] Create CLI for switching between prod and dev - The CLI will make it easier to create production builds because it will ensure that the credentials for firebase are swapped with the correct ones. Ideally, this will allow us to switch between production and development mode. Finally there will be another command to see what mode we are currently in. When switching, this will set the firebase configuration and plist file in app.json to the correct version, and it will set the DEV variable in the mode.js file to the appropriate value.
### Commands
- **nyx switch** prod | dev
- This will switch the mode to prod or dev respectively if it currently is not in that mode
- **nyx mode**
- This will state which mode you are currently in | non_infrastructure | create cli for switching between prod and dev the cli will make it easier to create production builds because it will ensure that the credentials for firebase are swapped with the correct ones ideally this will allow us to switch between production and development mode finally there will be another command to see what mode we are currently in when switching this will set the firebase configuration and plist file in app json to the correct version and it will set the dev variable in the mode js file to the appropriate value commands nyx switch prod dev this will switch the mode to prod or dev respectively if it currently is not in that mode nyx mode this will state which mode you are currently in | 0 |
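The two commands described above can be sketched with `argparse`; apart from the `nyx` program name and the `switch`/`mode` subcommands taken from the record, everything here is an illustrative assumption about how such a tool might be wired, not its actual implementation:

```python
import argparse


def build_parser():
    parser = argparse.ArgumentParser(prog="nyx")
    sub = parser.add_subparsers(dest="command", required=True)

    # nyx switch prod | dev
    switch = sub.add_parser("switch", help="switch between prod and dev")
    switch.add_argument("target", choices=["prod", "dev"])

    # nyx mode
    sub.add_parser("mode", help="print the current mode")
    return parser


args = build_parser().parse_args(["switch", "prod"])
```

The `choices=["prod", "dev"]` constraint makes the parser itself reject any other target, which matches the two-mode design in the issue.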
73,170 | 3,408,725,623 | IssuesEvent | 2015-12-04 12:16:49 | bedita/bedita | https://api.github.com/repos/bedita/bedita | opened | Blip.tv is dead. We should remove all references to it | Priority - Normal Topic - Core Type - Task | Since 20th of August blip.tv doesn't exist anymore :disappointed:
We can remove all code related to it. | 1.0 | Blip.tv is dead. We should remove all references to it - Since 20th of August blip.tv doesn't exist anymore :disappointed:
We can remove all code related to it. | non_infrastructure | blip tv is dead we should remove all references to it since of august blip tv doesn t exist anymore disappointed we can remove all code related to it | 0 |
26,146 | 5,226,057,057 | IssuesEvent | 2017-01-27 20:08:35 | golang/go | https://api.github.com/repos/golang/go | closed | doc: broken wiki Images in golang/go/wiki/Mobile | Documentation NeedsFix | I'm reading the Go Mobile docs, and some images are not found
* Deploying app bundle
https://camo.githubusercontent.com/fbc28cc15ba04a995fe26925658557d8be5565fe/68747470733a2f2f676f6f676c6564726976652e636f6d2f686f73742f30427966536a64505673394d5a626b686a6555684d597a52546545452f676f77696b692f676f6d6f62696c652d696f732d6465706c6f792e706e67
* Xcode project layout with hello.framework
https://camo.githubusercontent.com/ca8480d267fa6fba06cd69ddbd7a157083a7dab9/68747470733a2f2f676f6f676c6564726976652e636f6d2f686f73742f30427966536a64505673394d5a626b686a6555684d597a52546545452f676f77696b692f676f6d6f62696c652d696d706f72742d616e64726f696473747564696f2e706e67
* Drag and drop Hello.framework
https://camo.githubusercontent.com/4056086ee93aa86233588f92a7e180374d11807d/68747470733a2f2f676f6f676c6564726976652e636f6d2f686f73742f30427966536a64505673394d5a626b686a6555684d597a52546545452f676f77696b692f676f6d6f62696c652d62696e642d696f73647261672e706e67
* Xcode project layout with Hello.framework
https://camo.githubusercontent.com/158aaea172021b5c38d1cf2fcccfa39336384a77/68747470733a2f2f676f6f676c6564726976652e636f6d2f686f73742f30427966536a64505673394d5a626b686a6555684d597a52546545452f676f77696b692f676f6d6f62696c652d62696e642d696f732e706e67
Should this be reported here? If I know which image should be there I can send a fix, but as I am a beginner looking for how to start to work with go, I'm not familiar with this documentation yet.
| 1.0 | doc: broken wiki Images in golang/go/wiki/Mobile - I'm reading the Go Mobile docs, and some images are not found
* Deploying app bundle
https://camo.githubusercontent.com/fbc28cc15ba04a995fe26925658557d8be5565fe/68747470733a2f2f676f6f676c6564726976652e636f6d2f686f73742f30427966536a64505673394d5a626b686a6555684d597a52546545452f676f77696b692f676f6d6f62696c652d696f732d6465706c6f792e706e67
* Xcode project layout with hello.framework
https://camo.githubusercontent.com/ca8480d267fa6fba06cd69ddbd7a157083a7dab9/68747470733a2f2f676f6f676c6564726976652e636f6d2f686f73742f30427966536a64505673394d5a626b686a6555684d597a52546545452f676f77696b692f676f6d6f62696c652d696d706f72742d616e64726f696473747564696f2e706e67
* Drag and drop Hello.framework
https://camo.githubusercontent.com/4056086ee93aa86233588f92a7e180374d11807d/68747470733a2f2f676f6f676c6564726976652e636f6d2f686f73742f30427966536a64505673394d5a626b686a6555684d597a52546545452f676f77696b692f676f6d6f62696c652d62696e642d696f73647261672e706e67
* Xcode project layout with Hello.framework
https://camo.githubusercontent.com/158aaea172021b5c38d1cf2fcccfa39336384a77/68747470733a2f2f676f6f676c6564726976652e636f6d2f686f73742f30427966536a64505673394d5a626b686a6555684d597a52546545452f676f77696b692f676f6d6f62696c652d62696e642d696f732e706e67
Should this be reported here? If I know which image should be there I can send a fix, but as I am a beginner looking for how to start to work with go, I'm not familiar with this documentation yet.
| non_infrastructure | doc broken wiki images in golang go wiki mobile i m reading the go mobile docs and some images is not found deploying app bundle xcode project layout with hello framework drag and drop hello framework xcode project layout with hello framework should this be reported here if i know which image should be there i can send a fix but as i am a begginer looking for how to start to work with go i m not familiar with this documentation yet | 0 |
72,343 | 15,225,426,869 | IssuesEvent | 2021-02-18 07:17:37 | devikab2b/whites5 | https://api.github.com/repos/devikab2b/whites5 | closed | CVE-2019-0201 (Medium) detected in zookeeper-3.4.6.jar - autoclosed | security vulnerability | ## CVE-2019-0201 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>zookeeper-3.4.6.jar</b></p></summary>
<p></p>
<p>Path to dependency file: whites5/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar</p>
<p>
Dependency Hierarchy:
- spark-core_2.12-2.4.7.jar (Root Library)
- :x: **zookeeper-3.4.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/devikab2b/whites5/commit/b24afaf70d8746f42dcb93a7ef65ad261fda5b7f">b24afaf70d8746f42dcb93a7ef65ad261fda5b7f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue is present in Apache ZooKeeper 1.0.0 to 3.4.13 and 3.5.0-alpha to 3.5.4-beta. ZooKeeper’s getACL() command doesn’t check any permission when it retrieves the ACLs of the requested node and returns all information contained in the ACL Id field as a plaintext string. DigestAuthenticationProvider overloads the Id field with the hash value that is used for user authentication. As a consequence, if Digest Authentication is in use, the unsalted hash value will be disclosed by a getACL() request for unauthenticated or unprivileged users.
<p>Publish Date: 2019-05-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0201>CVE-2019-0201</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://zookeeper.apache.org/security.html">https://zookeeper.apache.org/security.html</a></p>
<p>Release Date: 2019-05-23</p>
<p>Fix Resolution: 3.4.14, 3.5.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-0201 (Medium) detected in zookeeper-3.4.6.jar - autoclosed - ## CVE-2019-0201 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>zookeeper-3.4.6.jar</b></p></summary>
<p></p>
<p>Path to dependency file: whites5/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar</p>
<p>
Dependency Hierarchy:
- spark-core_2.12-2.4.7.jar (Root Library)
- :x: **zookeeper-3.4.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/devikab2b/whites5/commit/b24afaf70d8746f42dcb93a7ef65ad261fda5b7f">b24afaf70d8746f42dcb93a7ef65ad261fda5b7f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue is present in Apache ZooKeeper 1.0.0 to 3.4.13 and 3.5.0-alpha to 3.5.4-beta. ZooKeeper’s getACL() command doesn’t check any permission when it retrieves the ACLs of the requested node and returns all information contained in the ACL Id field as a plaintext string. DigestAuthenticationProvider overloads the Id field with the hash value that is used for user authentication. As a consequence, if Digest Authentication is in use, the unsalted hash value will be disclosed by a getACL() request for unauthenticated or unprivileged users.
<p>Publish Date: 2019-05-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-0201>CVE-2019-0201</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://zookeeper.apache.org/security.html">https://zookeeper.apache.org/security.html</a></p>
<p>Release Date: 2019-05-23</p>
<p>Fix Resolution: 3.4.14, 3.5.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve medium detected in zookeeper jar autoclosed cve medium severity vulnerability vulnerable library zookeeper jar path to dependency file pom xml path to vulnerable library home wss scanner repository org apache zookeeper zookeeper zookeeper jar dependency hierarchy spark core jar root library x zookeeper jar vulnerable library found in head commit a href found in base branch main vulnerability details an issue is present in apache zookeeper to and alpha to beta zookeeper’s getacl command doesn’t check any permission when retrieves the acls of the requested node and returns all information contained in the acl id field as plaintext string digestauthenticationprovider overloads the id field with the hash value that is used for user authentication as a consequence if digest authentication is in use the unsalted hash value will be disclosed by getacl request for unauthenticated or unprivileged users publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
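The CVE above matters because ZooKeeper's digest scheme stores an unsalted hash in the ACL Id field. A sketch of that scheme as commonly described for `DigestAuthenticationProvider` — treat the exact format as an assumption and check the ZooKeeper source before relying on it:

```python
import base64
import hashlib


def zk_digest(user, password):
    # Reported format: "user:base64(sha1('user:password'))" -- no salt,
    # so an ACL Id leaked via getACL() permits offline dictionary attacks.
    digest = hashlib.sha1(f"{user}:{password}".encode()).digest()
    return f"{user}:{base64.b64encode(digest).decode()}"


acl_id = zk_digest("super", "secret")
```

Because the hash depends only on the user/password pair, identical credentials always yield identical Id strings — exactly the property that makes disclosure through `getACL()` dangerous.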
31,285 | 25,520,038,127 | IssuesEvent | 2022-11-28 19:38:06 | 1Copenut/c3-eleventy | https://api.github.com/repos/1Copenut/c3-eleventy | opened | Automate tagging, releases, and changelogs using GitHub Actions | infrastructure | ## Description
Once this goes live, I want a way to track releases and honestly, do it automatically. It should collect changes, create tags, and create releases.
## Resources
* https://www.conventionalcommits.org/en/v1.0.0/
* https://www.infralovers.com/en/articles/2022/08/08/changelog-automation-with-github-actions/
* https://www.youtube.com/watch?v=fcHJZ4pMzBs | 1.0 | Automate tagging, releases, and changelogs using GitHub Actions - ## Description
Once this goes live, I want a way to track releases and, honestly, to do it automatically. It should collect changes, create tags, and create releases.
## Resources
* https://www.conventionalcommits.org/en/v1.0.0/
* https://www.infralovers.com/en/articles/2022/08/08/changelog-automation-with-github-actions/
* https://www.youtube.com/watch?v=fcHJZ4pMzBs | infrastructure | automate tagging releases and changelogs using github actions description once this goes live i want a way to track releases and honestly do it automatically it should collect changes create tags and create releases resources | 1 |
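As background for the Conventional Commits link in the issue above: changelog automation generally works by parsing commit subjects of the form `type(scope)!: description` and grouping them into sections. A rough Python illustration (not the code of any particular GitHub Action; the section names are assumptions):

```python
import re
from collections import defaultdict

# Map Conventional Commits type prefixes to changelog section headings.
# The headings are illustrative; the spec does not mandate section names.
SECTIONS = {"feat": "Features", "fix": "Bug Fixes", "docs": "Documentation"}

# type, optional (scope), optional breaking-change "!", then ": description"
COMMIT_RE = re.compile(r"^(?P<type>[a-z]+)(?:\([^)]*\))?(?P<bang>!)?: (?P<desc>.+)$")

def group_commits(subjects):
    """Group conventional-commit subjects into changelog sections."""
    changelog = defaultdict(list)
    for subject in subjects:
        m = COMMIT_RE.match(subject)
        if not m:
            continue  # not a conventional commit; skip it
        if m.group("bang"):
            section = "BREAKING CHANGES"
        else:
            section = SECTIONS.get(m.group("type"), "Other")
        changelog[section].append(m.group("desc"))
    return dict(changelog)
```

A release workflow would feed this the subjects of commits merged since the last tag and render each section under a version heading.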
429,121 | 12,421,198,736 | IssuesEvent | 2020-05-23 15:43:47 | openshift/origin | https://api.github.com/repos/openshift/origin | closed | oc apply does not work on objects created with a resourceVersion field | component/cli component/kubernetes kind/question lifecycle/rotten priority/P1 | Related comment: https://github.com/openshift/release/pull/901#issuecomment-393963467
When creating an object via `oc apply -f resource.yaml`, subsequent patches to that object fail if the initial object in `resource.yaml` contained a `metadata.resourceVersion` field.
Example:
```yaml
# given the following dc with a metadata.resourceVersion field, I will create it via oc apply
$ cat simpledc.yaml
---
apiVersion: v1
kind: DeploymentConfig
metadata:
name: simple-dc
creationTimestamp: null
resourceVersion: "111"
labels:
name: test-deployment
spec:
replicas: 1
selector:
name: test-deployment
template:
metadata:
labels:
name: test-deployment
spec:
containers:
- image: openshift/origin-ruby-sample
name: helloworld
$ oc apply -f simpledc.yaml
deploymentconfig.apps.openshift.io "simple-dc" created
# modify the original dc, removing the resourceVersion field
$ cat simpledc_modified.yaml
apiVersion: v1
kind: DeploymentConfig
metadata:
name: simple-dc
creationTimestamp: null
labels:
name: test-deployment
spec:
replicas: 2
selector:
name: test-deployment
template:
metadata:
labels:
name: test-deployment
spec:
containers:
- image: openshift/origin-ruby-sample
name: helloworld
# attempt to run oc apply again
$ oc apply -f simpledc_modified.yaml
The deploymentconfigs "simple-dc" is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update
```
If I run the steps above without setting a `metadata.resourceVersion` field when I create the dc via `oc apply`, I do not get the error seen, and the operation succeeds as expected.
Is this a bug, or an expected behavior of apply?
cc @stevekuznetsov @soltysh @liggitt | 1.0 | oc apply does not work on objects created with a resourceVersion field - Related comment: https://github.com/openshift/release/pull/901#issuecomment-393963467
When creating an object via `oc apply -f resource.yaml`, subsequent patches to that object fail if the initial object in `resource.yaml` contained a `metadata.resourceVersion` field.
Example:
```yaml
# given the following dc with a metadata.resourceVersion field, I will create it via oc apply
$ cat simpledc.yaml
---
apiVersion: v1
kind: DeploymentConfig
metadata:
name: simple-dc
creationTimestamp: null
resourceVersion: "111"
labels:
name: test-deployment
spec:
replicas: 1
selector:
name: test-deployment
template:
metadata:
labels:
name: test-deployment
spec:
containers:
- image: openshift/origin-ruby-sample
name: helloworld
$ oc apply -f simpledc.yaml
deploymentconfig.apps.openshift.io "simple-dc" created
# modify the original dc, removing the resourceVersion field
$ cat simpledc_modified.yaml
apiVersion: v1
kind: DeploymentConfig
metadata:
name: simple-dc
creationTimestamp: null
labels:
name: test-deployment
spec:
replicas: 2
selector:
name: test-deployment
template:
metadata:
labels:
name: test-deployment
spec:
containers:
- image: openshift/origin-ruby-sample
name: helloworld
# attempt to run oc apply again
$ oc apply -f simpledc_modified.yaml
The deploymentconfigs "simple-dc" is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update
```
If I run the steps above without setting a `metadata.resourceVersion` field when I create the dc via `oc apply`, I do not get the error seen, and the operation succeeds as expected.
Is this a bug, or an expected behavior of apply?
cc @stevekuznetsov @soltysh @liggitt | non_infrastructure | oc apply does not work on objects created with a resourceversion field related comment when creating an object via oc apply f resource yaml subsequent patches to that object fail if the initial object in resource yaml contained a metadata resourceversion field example yaml given the following dc with a metadata resourceversion field i will create it via oc apply cat simpledc yaml apiversion kind deploymentconfig metadata name simple dc creationtimestamp null resourceversion labels name test deployment spec replicas selector name test deployment template metadata labels name test deployment spec containers image openshift origin ruby sample name helloworld oc apply f simpledc yaml deploymentconfig apps openshift io simple dc created modify the original dc removing the resourceversion field cat simpledc modified yaml apiversion kind deploymentconfig metadata name simple dc creationtimestamp null labels name test deployment spec replicas selector name test deployment template metadata labels name test deployment spec containers image openshift origin ruby sample name helloworld attempt to run oc apply again oc apply f simpledc modified yaml the deploymentconfigs simple dc is invalid metadata resourceversion invalid value must be specified for an update if i run the steps above without setting a metadata resourceversion field when i create the dc via oc apply i do not get the error seen and the operation succeeds as expected is this a bug or an expected behavior of apply cc stevekuznetsov soltysh liggitt | 0 |
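One way to avoid the failure mode described in the issue above is to strip `metadata.resourceVersion` from a manifest before the initial `oc apply`. A minimal sketch, assuming the manifest has already been parsed into a Python dict (for example with PyYAML); the function name is illustrative:

```python
def strip_resource_version(manifest: dict) -> dict:
    """Return a copy of a parsed manifest without metadata.resourceVersion.

    Applying an object whose manifest carries a stale resourceVersion is
    what leads to the 'must be specified for an update' error above.
    """
    cleaned = dict(manifest)
    metadata = dict(cleaned.get("metadata", {}))
    metadata.pop("resourceVersion", None)  # drop it if present; no-op otherwise
    cleaned["metadata"] = metadata
    return cleaned
```

The cleaned dict could then be serialized back to YAML and piped to `oc apply -f -`.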
19,757 | 13,443,390,543 | IssuesEvent | 2020-09-08 08:17:33 | bids-standard/bids-specification | https://api.github.com/repos/bids-standard/bids-specification | closed | [INFRA] Need to adjust github-changelog-generator | help wanted infrastructure | brought up by @franklin-feingold :
> currently our changelog generator makes a new entry for each merged commit on our bids-specification repository. The problem comes up when we are working on a different branch (e.g., `bepXXX`) and merge commits into that branch --> then these commits would show up in the changelog ALTHOUGH we'd actually only like commits merged into `master` to show up in the changelog
there is a setting to ensure that only commits merged into `master` show up, ... we just need to figure out how to apply it.
cc @effigies @rwblair @tsalo | 1.0 | [INFRA] Need to adjust github-changelog-generator - brought up by @franklin-feingold :
> currently our changelog generator makes a new entry for each merged commit on our bids-specification repository. The problem comes up when we are working on a different branch (e.g., `bepXXX`) and merge commits into that branch --> then these commits would show up in the changelog ALTHOUGH we'd actually only like commits merged into `master` to show up in the changelog
there is a setting to ensure that only commits merged into `master` show up, ... we just need to figure out how to apply it.
cc @effigies @rwblair @tsalo | infrastructure | need to adjust github changelog generator brought up by franklin feingold currently our changelog generator makes a new entry for each merged commit on our bids specification repository the problem comes up when we are working on a different branch e g bepxxx and merge commits into that branch then these commits would show up the changelog although we d actually only like commits merged into master to show up in the changelog there is a setting to grant that only commits merged into master show up we just need to figure out how to apply it cc effigies rwblair tsalo | 1 |
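In essence, the fix requested in the issue above is to filter merged pull requests by their base branch before they reach the changelog. Schematically (the dict shape loosely mimics the GitHub API; this is not github-changelog-generator's actual code):

```python
def changelog_entries(pull_requests, release_branch="master"):
    """Keep only PRs that were actually merged into the release branch.

    A PR merged into a feature branch (e.g. `bepXXX`) has a different
    base ref and is excluded; unmerged PRs (merged_at is None) are too.
    """
    return [
        pr["title"]
        for pr in pull_requests
        if pr.get("merged_at") and pr["base"]["ref"] == release_branch
    ]
```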
12,476 | 9,798,247,613 | IssuesEvent | 2019-06-11 11:57:17 | nest/nest-simulator | https://api.github.com/repos/nest/nest-simulator | closed | Module import error in Python Regression test issue-1034.py | C: Infrastructure I: No breaking change P: Pending S: High T: Bug | I came across the following error in a Travis OSX [build](https://travis-ci.org/lekshmideepu/nest-simulator/jobs/520817252#L5891) where Python was OFF.
```
Running test 'regressiontests/issue-1034.py'... Traceback (most recent call last):
File "/Users/travis/build/lekshmideepu/nest-simulator/result/share/doc/nest/regressiontests/issue-1034.py", line 26, in <module>
import nest
ImportError: No module named nest
Failed: missed SLI assertion
```
Obviously this test must not be run in a non-Python environment.
Surprisingly, as you can see [here](https://travis-ci.org/nest/nest-simulator/jobs/517258036#L5461), this test passed in other Travis builds (and has also run successfully in all pull requests against master ever since).
Although this test is technically a regression test, it uses PyNEST and should thus not be located in the SLI testsuite, but belongs in the PyNEST testsuite (which already contains at least one other regression test).
If there is a strong will to keep it in the SLI testsuite, it has to be guarded by something along these lines:
```python
import sys
EXIT_SKIPPED = 200
try:
import nest
except ImportError:
sys.exit(EXIT_SKIPPED)
```
However, as that guard would have to be repeated for each PyNEST test added to the SLI testsuite, it does not seem to be a very sustainable solution. | 1.0 | Module import error in Python Regression test issue-1034.py - I came across the following error in a Travis OSX [build](https://travis-ci.org/lekshmideepu/nest-simulator/jobs/520817252#L5891) where Python was OFF.
```
Running test 'regressiontests/issue-1034.py'... Traceback (most recent call last):
File "/Users/travis/build/lekshmideepu/nest-simulator/result/share/doc/nest/regressiontests/issue-1034.py", line 26, in <module>
import nest
ImportError: No module named nest
Failed: missed SLI assertion
```
Obviously this test must not be run in a non-Python environment.
Surprisingly, as you can see [here](https://travis-ci.org/nest/nest-simulator/jobs/517258036#L5461), this test passed in other Travis builds (and has also run successfully in all pull requests against master ever since).
Although this test is technically a regression test, it uses PyNEST and should thus not be located in the SLI testsuite, but belongs in the PyNEST testsuite (which already contains at least one other regression test).
If there is a strong will to keep it in the SLI testsuite, it has to be guarded by something along these lines:
```python
import sys
EXIT_SKIPPED = 200
try:
import nest
except ImportError:
sys.exit(EXIT_SKIPPED)
```
However, as that would have to be repeated for each PyNEST test that is added to the SLI testsuite it does not seem to be a very sustainable solution. | infrastructure | module import error in python regression test issue py i came across the following error in a travis osx where python was off running test regressiontests issue py traceback most recent call last file users travis build lekshmideepu nest simulator result share doc nest regressiontests issue py line in import nest importerror no module named nest failed missed sli assertion obviously this test must not be run in a non python environment surprisingly as you could see this test was a success in other travis builds and also ran successfully in all pull requests against master ever since although this test is a regression test technically it uses pynest and should thus not be located in the sli testsuite but belongs into the pynest testsuite which already now contains at least one other regression test if there is a strong will to keep it in the sli testsuite it has to be guarded by something along these lines python import sys exit skipped try import nest except importerror sys exit exit skipped however as that would have to be repeated for each pynest test that is added to the sli testsuite it does not seem to be a very sustainable solution | 1 |
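The repetition concern in the issue above could be reduced by centralizing the guard in one shared helper that every PyNEST test imports. A sketch of such a helper (the helper name and module layout are hypothetical; only the `EXIT_SKIPPED = 200` convention comes from the issue itself):

```python
import sys

EXIT_SKIPPED = 200  # exit code the SLI test harness treats as "skipped"

def require_module(name):
    """Import a module by name or exit with the harness's 'skipped' status.

    Instead of repeating the try/except guard, each PyNEST test placed in
    the SLI testsuite would call require_module('nest') once at the top.
    """
    try:
        __import__(name)
    except ImportError:
        sys.exit(EXIT_SKIPPED)
    return sys.modules[name]
```

A test would then start with `nest = require_module('nest')` and never reach PyNEST calls in a non-Python build.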
27,535 | 21,901,518,198 | IssuesEvent | 2022-05-20 13:50:31 | mitodl/ol-infrastructure | https://api.github.com/repos/mitodl/ol-infrastructure | closed | Container Management POC | Infrastructure POC | We are increasingly in need of a smooth way to get containerized workloads into a production environment. To that end, we want to invest in building the infrastructure and tooling necessary to support application and service deployment as containers. We have looked at Nomad in the past due to its easy integration with Vault and Consul which we use extensively, but the weight of the community favors Kubernetes. Given our perennial challenge of testing out new systems quickly, we will likely want to get a proof of concept working on Kubernetes to identify any unknown unknowns. As a first pass we can work on getting the edx-notes-api running in a Kubernetes cluster running on Amazon EKS in a pre-production environment.
We will want to timebox this task to ~1-2 weeks and the outcome should be:
- [ ] A go/no-go decision on whether to stick with Kubernetes
- [ ] A list of additional services/system design requirements
- [ ] A functional integration with Vault and Consul for secrets and service discovery respectively | 1.0 | Container Management POC - We are increasingly in need of a smooth way to get containerized workloads into a production environment. To that end, we want to invest in building the infrastructure and tooling necessary to support application and service deployment as containers. We have looked at Nomad in the past due to its easy integration with Vault and Consul which we use extensively, but the weight of the community favors Kubernetes. Given our perennial challenge of testing out new systems quickly, we will likely want to get a proof of concept working on Kubernetes to identify any unknown unknowns. As a first pass we can work on getting the edx-notes-api running in a Kubernetes cluster running on Amazon EKS in a pre-production environment.
We will want to timebox this task to ~1-2 weeks and the outcome should be:
- [ ] A go/no-go decision on whether to stick with Kubernetes
- [ ] A list of additional services/system design requirements
- [ ] A functional integration with Vault and Consul for secrets and service discovery respectively | infrastructure | container management poc we are increasingly in need of a smooth way to get containerized workloads into a production environment to that end we want to invest in building the infrastructure and tooling necessary to support application and service deployment as containers we have looked at nomad in the past due to its easy integration with vault and consul which we use extensively but the weight of the community favors kubernetes given our perennial challenge of testing out new systems quickly we will likely want to get a proof of concept working on kubernetes to identify any unknown unknowns as a first pass we can work on getting the edx notes api running in a kubernetes cluster running on amazon eks in a pre production environment we will want to timebox this task to weeks and the outcome should be a go no go decision on whether to stick with kubernetes a list of additional services system design requirements a functional integration with vault and consul for secrets and service discovery respectively | 1 |
4,244 | 4,264,867,270 | IssuesEvent | 2016-07-12 08:58:37 | geometalab/OSMNames | https://api.github.com/repos/geometalab/OSMNames | closed | Performance improvements in the exporting process | in progress performance | The export process still has room for improvement, performance-wise. | True | Performance improvements in the exporting process - The export process still has room for improvement, performance-wise. | non_infrastructure | performance improvements in the exporting process the export process still has room for improvement performance wise | 0
76,382 | 3,487,603,031 | IssuesEvent | 2016-01-02 02:54:09 | mlhwang/monsterappetite | https://api.github.com/repos/mlhwang/monsterappetite | closed | Change the wording for the framed message in game | Difficulty - Easy MA Priority - High | Right now if the message says this: "if one eat's 294 more calories, he is at a higher risk for diabetes" --> It sounds like 294 in itself has a significant meaning. And if the consumer had 293 calories more then they are not at a higher risk for something. | 1.0 | Change the wording for the framed message in game - Right now if the message says this: "if one eat's 294 more calories, he is at a higher risk for diabetes" --> It sounds like 294 in itself has a significant meaning. And if the consumer had 293 calories more then they are not at a higher risk for something. | non_infrastructure | change the wording for the framed message in game right now if the message says this if one eat s more calories he is at a higher risk for diabetes it sounds like in itself has a significant meaning and if the consumer had calories more then they are not at a higher risk for something | 0 |
288,766 | 8,851,448,863 | IssuesEvent | 2019-01-08 15:47:18 | TheNLGamerZone/QuestPlugin | https://api.github.com/repos/TheNLGamerZone/QuestPlugin | opened | Implement SQL storage | enhancement low priority | Lower priority due to the fact that it'll take quite some time. Also mongo will be present at launch, so this will not be needed immediately. | 1.0 | Implement SQL storage - Lower priority due to the fact that it'll take quite some time. Also mongo will be present at launch, so this will not be needed immediately. | non_infrastructure | implement sql storage lower priority due to the fact that it ll take quite some time also mongo will be present at launch so this will not be needed immediately | 0 |
46,160 | 9,889,150,591 | IssuesEvent | 2019-06-25 13:13:01 | elastic/kibana | https://api.github.com/repos/elastic/kibana | reopened | Failing test: X-Pack Mocha Tests.x-pack/plugins/code/server/__tests__/lsp_indexer·ts - lsp_indexer unit tests Index continues from a checkpoint | Team:Code failed-test skipped-test | A test failed on a tracked branch
```
{ AssertionError [ERR_ASSERTION]: false == true
at Context.ok (plugins/code/server/__tests__/lsp_indexer.ts:298:12)
at process._tickCallback (internal/process/next_tick.js:68:7)
generatedMessage: true,
name: 'AssertionError [ERR_ASSERTION]',
code: 'ERR_ASSERTION',
actual: 'false',
expected: 'true',
operator: '==' }
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.x/JOB=x-pack-intake,node=immutable/1801/)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Mocha Tests.x-pack/plugins/code/server/__tests__/lsp_indexer·ts","test.name":"lsp_indexer unit tests Index continues from a checkpoint","test.failCount":3}} --> | 1.0 | Failing test: X-Pack Mocha Tests.x-pack/plugins/code/server/__tests__/lsp_indexer·ts - lsp_indexer unit tests Index continues from a checkpoint - A test failed on a tracked branch
```
{ AssertionError [ERR_ASSERTION]: false == true
at Context.ok (plugins/code/server/__tests__/lsp_indexer.ts:298:12)
at process._tickCallback (internal/process/next_tick.js:68:7)
generatedMessage: true,
name: 'AssertionError [ERR_ASSERTION]',
code: 'ERR_ASSERTION',
actual: 'false',
expected: 'true',
operator: '==' }
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.x/JOB=x-pack-intake,node=immutable/1801/)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Mocha Tests.x-pack/plugins/code/server/__tests__/lsp_indexer·ts","test.name":"lsp_indexer unit tests Index continues from a checkpoint","test.failCount":3}} --> | non_infrastructure | failing test x pack mocha tests x pack plugins code server tests lsp indexer·ts lsp indexer unit tests index continues from a checkpoint a test failed on a tracked branch assertionerror false true at context ok plugins code server tests lsp indexer ts at process tickcallback internal process next tick js generatedmessage true name assertionerror code err assertion actual false expected true operator first failure | 0 |
10,114 | 4,007,417,667 | IssuesEvent | 2016-05-12 18:04:27 | Shopify/javascript | https://api.github.com/repos/Shopify/javascript | closed | Remove returns from anonymous `addEventListener` handlers | new-codemod | Vanilla and jQuery handlers treat return values very differently. Removing (completely meaningless) explicit return values from `addEventListener` handlers should discourage conflating of techniques.
Example:
```
document.addEventListener('input', event => {
if (event.target.type === 'hidden') {
event.preventDefault();
return event.stopPropagation();
}
}, true);
``` | 1.0 | Remove returns from anonymous `addEventListener` handlers - Vanilla and jQuery handlers treat return values very differently. Removing (completely meaningless) explicit return values from `addEventListener` handlers should discourage conflating of techniques.
Example:
```
document.addEventListener('input', event => {
if (event.target.type === 'hidden') {
event.preventDefault();
return event.stopPropagation();
}
}, true);
``` | non_infrastructure | remove returns from anonymous addeventlistener handlers vanilla and jquery handlers treat return values very differently removing completely meaningless explicit return values from addeventlistener handlers should discourage conflating of techniques example document addeventlistener input event if event target type hidden event preventdefault return event stoppropagation true | 0 |
13,538 | 10,318,859,332 | IssuesEvent | 2019-08-30 15:55:07 | TransparentHealth/smh_app | https://api.github.com/repos/TransparentHealth/smh_app | closed | Simplify User/Member/Organizations/Resources Relationships | enhancement infrastructure | In order to make the relationships between Users, Organizations, and ResourceRequests more normalized and less confusing,
- [ ] Merge UserProfile and Member model under the UserProfile (no more Member model)
- [ ] Remove .organizations attribute from that relationship – covered by ResourceGrants
- [ ] Change Organization.users to Organization.agents for clarity
(reverse as User.organizations_as_agent)
- [ ] Add Organization.members
(reverse as User.organizations_as_member)
- [ ] Remove ResourceGrants model, make sure records are represented as Organization.members
| 1.0 | Simplify User/Member/Organizations/Resources Relationships - In order to make the relationships between Users, Organizations, and ResourceRequests more normalized and less confusing,
- [ ] Merge UserProfile and Member model under the UserProfile (no more Member model)
- [ ] Remove .organizations attribute from that relationship – covered by ResourceGrants
- [ ] Change Organization.users to Organization.agents for clarity
(reverse as User.organizations_as_agent)
- [ ] Add Organization.members
(reverse as User.organizations_as_member)
- [ ] Remove ResourceGrants model, make sure records are represented as Organization.members
| infrastructure | simplify user member organizations resources relationships in order to make the relationships between users organizations and resourcerequests more normalized and less confusing merge userprofile and member model under the userprofile no more member model remove organizations attribute from that relationship – covered by resourcegrants change organization users to organization agents for clarity reverse as user organizations as agent add organization members reverse as user organizations as member remove resourcegrants model make sure records are represented as organization members | 1 |
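The checklist above can be sketched as plain Python dataclasses to make the target relationships concrete (the real models are Django; only the attribute names come from the checklist, everything else here is assumed):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserProfile:
    """Merged User/Member profile; a separate Member model no longer exists."""
    name: str
    organizations_as_agent: List["Organization"] = field(default_factory=list)
    organizations_as_member: List["Organization"] = field(default_factory=list)

@dataclass
class Organization:
    name: str
    agents: List[UserProfile] = field(default_factory=list)   # was Organization.users
    members: List[UserProfile] = field(default_factory=list)  # replaces ResourceGrants

def add_member(org: "Organization", user: UserProfile) -> None:
    """Keep both sides of the membership relationship in sync."""
    org.members.append(user)
    user.organizations_as_member.append(org)
```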
143,937 | 19,256,906,854 | IssuesEvent | 2021-12-09 12:19:22 | mcaj-git/gatsby2 | https://api.github.com/repos/mcaj-git/gatsby2 | closed | CVE-2020-15366 (Medium) detected in ajv-6.12.2.tgz - autoclosed | security vulnerability | ## CVE-2020-15366 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ajv-6.12.2.tgz</b></p></summary>
<p>Another JSON Schema Validator</p>
<p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-6.12.2.tgz">https://registry.npmjs.org/ajv/-/ajv-6.12.2.tgz</a></p>
<p>Path to dependency file: gatsby2/package.json</p>
<p>Path to vulnerable library: gatsby2/node_modules/ajv/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-2.23.17.tgz (Root Library)
- url-loader-1.1.2.tgz
- schema-utils-1.0.0.tgz
- :x: **ajv-6.12.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mcaj-git/gatsby2/commit/5017ec2dfca3609bd4c292564f66274469cc5b4d">5017ec2dfca3609bd4c292564f66274469cc5b4d</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in ajv.validate() in Ajv (aka Another JSON Schema Validator) 6.12.2. A carefully crafted JSON schema could be provided that allows execution of other code by prototype pollution. (While untrusted schemas are recommended against, the worst case of an untrusted schema should be a denial of service, not execution of code.)
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15366>CVE-2020-15366</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/ajv-validator/ajv/releases/tag/v6.12.3">https://github.com/ajv-validator/ajv/releases/tag/v6.12.3</a></p>
<p>Release Date: 2020-07-15</p>
<p>Fix Resolution: ajv - 6.12.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-15366 (Medium) detected in ajv-6.12.2.tgz - autoclosed - ## CVE-2020-15366 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ajv-6.12.2.tgz</b></p></summary>
<p>Another JSON Schema Validator</p>
<p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-6.12.2.tgz">https://registry.npmjs.org/ajv/-/ajv-6.12.2.tgz</a></p>
<p>Path to dependency file: gatsby2/package.json</p>
<p>Path to vulnerable library: gatsby2/node_modules/ajv/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-2.23.17.tgz (Root Library)
- url-loader-1.1.2.tgz
- schema-utils-1.0.0.tgz
- :x: **ajv-6.12.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mcaj-git/gatsby2/commit/5017ec2dfca3609bd4c292564f66274469cc5b4d">5017ec2dfca3609bd4c292564f66274469cc5b4d</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in ajv.validate() in Ajv (aka Another JSON Schema Validator) 6.12.2. A carefully crafted JSON schema could be provided that allows execution of other code by prototype pollution. (While untrusted schemas are recommended against, the worst case of an untrusted schema should be a denial of service, not execution of code.)
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15366>CVE-2020-15366</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/ajv-validator/ajv/releases/tag/v6.12.3">https://github.com/ajv-validator/ajv/releases/tag/v6.12.3</a></p>
<p>Release Date: 2020-07-15</p>
<p>Fix Resolution: ajv - 6.12.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve medium detected in ajv tgz autoclosed cve medium severity vulnerability vulnerable library ajv tgz another json schema validator library home page a href path to dependency file package json path to vulnerable library node modules ajv package json dependency hierarchy gatsby tgz root library url loader tgz schema utils tgz x ajv tgz vulnerable library found in head commit a href found in base branch main vulnerability details an issue was discovered in ajv validate in ajv aka another json schema validator a carefully crafted json schema could be provided that allows execution of other code by prototype pollution while untrusted schemas are recommended against the worst case of an untrusted schema should be a denial of service not execution of code publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ajv step up your open source security game with whitesource | 0 |
280,996 | 21,315,319,521 | IssuesEvent | 2022-04-16 07:01:32 | j4ck990/pe | https://api.github.com/repos/j4ck990/pe | opened | Inconsistent message(viewtask) | type.DocumentationBug severity.Low | 
The order of tasks from viewtask does not match the order shown by the student groups tab.
<!--session: 1650086616556-abe55de6-f984-418c-aad7-44e60f192af6-->
<!--Version: Web v3.4.2--> | 1.0 | Inconsistent message(viewtask) - 
The order of tasks from viewtask does not match the order shown by the student groups tab.
<!--session: 1650086616556-abe55de6-f984-418c-aad7-44e60f192af6-->
<!--Version: Web v3.4.2--> | non_infrastructure | inconsistent message viewtask the order of tasks from viewtask does not match the order shown by the student groups tab | 0 |
5,145 | 7,923,822,623 | IssuesEvent | 2018-07-05 15:05:19 | SlicerIGT/SlicerIGT | https://api.github.com/repos/SlicerIGT/SlicerIGT | closed | Quaternion Average does not update when input transforms are modified | TransformProcessor bug | Steps to reproduce:
1. Open fresh 3D Slicer
2. Navigate to TransformProcessor module
3. Make sure the mode is Quaternion Average
4. Create a new linear transform and add it to the input list of transforms
5. Create a new linear transform and set it as the output
6. Manually change the input transform, see that the output does not update
| 1.0 | Quaternion Average does not update when input transforms are modified - Steps to reproduce:
1. Open fresh 3D Slicer
2. Navigate to TransformProcessor module
3. Make sure the mode is Quaternion Average
4. Create a new linear transform and add it to the input list of transforms
5. Create a new linear transform and set it as the output
6. Manually change the input transform, see that the output does not update
| non_infrastructure | quaternion average does not update when input transforms are modified steps to reproduce open fresh slicer navigate to transformprocessor module make sure the mode is quaternion average create a new linear transform and add it to the input list of transforms create a new linear transform and set it as the output manually change the input transform see that the output does not update | 0 |
25,971 | 19,521,834,051 | IssuesEvent | 2021-12-29 20:09:36 | bcgov/foi-flow | https://api.github.com/repos/bcgov/foi-flow | opened | Object Storage data migration | Task Data Infrastructure | Title of ticket: Object Storage data migration
#### Description
This task is to copy all the data from LAN to Object Storage
#### Dependencies
No
#### DOD
- [ ] Execute the script to copy files from LAN to TEST VM
- [ ] Check if GeoDrive is moving the files from TEST VM to Object Storage and replaces it with a stub file in TEST VM
- [ ] Compare the files migrated to Object Storage
- [ ] Note all the files that were not copied through the logs
- [ ] Schedule the scripts to run daily once after business hours
| 1.0 | Object Storage data migration - Title of ticket: Object Storage data migration
#### Description
This task is to copy all the data from LAN to Object Storage
#### Dependencies
No
#### DOD
- [ ] Execute the script to copy files from LAN to TEST VM
- [ ] Check if GeoDrive is moving the files from TEST VM to Object Storage and replaces it with a stub file in TEST VM
- [ ] Compare the files migrated to Object Storage
- [ ] Note all the files that were not copied through the logs
- [ ] Schedule the scripts to run daily once after business hours
| infrastructure | object storage data migration title of ticket object storage data migration description this task is to copy all the data from lan to object storage dependencies no dod execute the script to copy files from lan to test vm check if geodrive is moving the files from test vm to object storage and replaces it with a stub file in test vm compare the files migrated to object storage note all the files that were not copied through the logs schedule the scripts to run daily once after business hours | 1 |
5,371 | 5,624,837,922 | IssuesEvent | 2017-04-04 17:58:23 | google/tie | https://api.github.com/repos/google/tie | opened | Suggest adding stdout | infrastructure | I wanted to print something to the screen with print(), but that did not work. I suggest adding stdout to the page, since that could be useful sometime.

| 1.0 | Suggest adding stdout - I wanted to print something to the screen with print(), but that did not work. I suggest adding stdout to the page, since that could be useful sometime.

| infrastructure | suggest adding stdout i wanted to print something to the screen with print but that did not work i suggest adding stdout to the page since that could be useful sometime | 1 |
5,961 | 6,063,863,919 | IssuesEvent | 2017-06-14 13:08:07 | insieme/insieme | https://api.github.com/repos/insieme/insieme | closed | Find a tool to clean up the whole codebase. | infrastructure question | During our big refactorings in the core we already stumbled upon several functions which aren't used by anybody anymore and probably haven't been used in years.
In the whole project there is probably a whole lot of functionality which nobody ever uses. We should find a tool which analyzes our whole codebase and displays some candidates for deletion, to clean up the code.
 | 1.0 | Find a tool to clean up the whole codebase. - During our big refactorings in the core we already stumbled upon several functions which aren't used by anybody anymore and probably haven't been used in years.
In the whole project there is probably a whole lot of functionality which nobody ever uses. We should find a tool which analyzes our whole codebase and displays some candidates for deletion, to clean up the code.
 | infrastructure | find a tool to clean up the whole codebase during our big refactorings in the core we already stumbled upon several functions which aren t used by anybody anymore and probably haven t been used in years in the whole project there is probably a whole lot of functionality which nobody ever uses we should find a tool which analyzes our whole codebase and displays some candidates for deletion to clean up the code | 1
33,541 | 27,562,255,278 | IssuesEvent | 2023-03-07 23:15:32 | microsoft/TypeScript | https://api.github.com/repos/microsoft/TypeScript | opened | Check package size changes in CI | Infrastructure | We used to get this via the LKG task.
After #52226, LKG no longer exists. To ensure that we can still get some idea of when we accidentally regress package size too much, I had to add some hacks to the `smoke` test in CI which builds LKG at main, saves it outside the tree, then switches back to the PR and copies lib back, as though it were there. Then our old package size checks can kick in.
This is a hack and just temporary. What we should actually do is introduce a new CI task which checks this explicitly, then eliminate size checks from our LKG task itself.
The best thing would be to create some failure threshold in CI (right now, 10%?), that fails the build. But, using GitHub's checks API, we could feasibly also output a markdown report into the UI to look at. (This is where I'd like to stick perf results and other changes in the future.)
I've looked at the existing actions in the marketplace, and none really do what we want, in that they all reply via a comment, which will be noisy. Not sure what to do about that, besides writing a whole new action from scratch. Maybe that's fine, because there are other actions I'd like to write too (e.g. the errors delta repo could be an action that runs on every PR, as could DT and perf). | 1.0 | Check package size changes in CI - We used to get this via the LKG task.
After #52226, LKG no longer exists. To ensure that we can still get some idea of when we accidentally regress package size too much, I had to add some hacks to the `smoke` test in CI which builds LKG at main, saves it outside the tree, then switches back to the PR and copies lib back, as though it were there. Then our old package size checks can kick in.
This is a hack and just temporary. What we should actually do is introduce a new CI task which checks this explicitly, then eliminate size checks from our LKG task itself.
The best thing would be to create some failure threshold in CI (right now, 10%?), that fails the build. But, using GitHub's checks API, we could feasibly also output a markdown report into the UI to look at. (This is where I'd like to stick perf results and other changes in the future.)
I've looked at the existing actions in the marketplace, and none really do what we want, in that they all reply via a comment, which will be noisy. Not sure what to do about that, besides writing a whole new action from scratch. Maybe that's fine, because there are other actions I'd like to write too (e.g. the errors delta repo could be an action that runs on every PR, as could DT and perf). | infrastructure | check package size changes in ci we used to get this via the lkg task after lkg no longer exists to ensure that we can still get some idea of when we accidentally regress package size too much i had to add some hacks to the smoke test in ci which builds lkg at main saves it outside the tree then switches back to the pr and copies lib back as though it were there then our old package size checks can kick in this is a hack and just temporary what we should actually do is introduce a new ci task which checks this explicitly then eliminate size checks from our lkg task itself the best thing would be to create some failure threshold in ci right now that fails the build but using github s checks api we could feasibly also output a markdown report into the ui to look at this is where i d like to stick perf results and other changes in the future i ve looked at the existing actions in the marketplace and none really do what we want in that they all reply via a comment which will be noisy not sure what to do about that besides writing a whole new action from scratch maybe that s fine because there are other actions i d like to write too e g the errors delta repo could be an action that runs on every pr as could dt and perf | 1 |
176,220 | 6,557,390,999 | IssuesEvent | 2017-09-06 17:15:08 | stats4sd/SSD-Resources-Demo | https://api.github.com/repos/stats4sd/SSD-Resources-Demo | closed | search page doesn't function unless resource page visited first | Priority-Medium ready Size-Medium Type-bug | persisted resources not defined, need to load if not... | 1.0 | search page doesn't function unless resource page visited first - persisted resources not defined, need to load if not... | non_infrastructure | search page doesn t function unless resource page visited first persisted resources not defined need to load if not | 0 |
7,314 | 6,896,560,076 | IssuesEvent | 2017-11-23 18:36:16 | Daniel-Mietchen/ideas | https://api.github.com/repos/Daniel-Mietchen/ideas | opened | Look into Time Well Spent | infrastructure learning sustainability workflows | > We are building a non-profit organization dedicated to creating a humane future where technology is in harmony with our well-being, our social values, and our democratic principles.
http://www.timewellspent.io/ | 1.0 | Look into Time Well Spent - > We are building a non-profit organization dedicated to creating a humane future where technology is in harmony with our well-being, our social values, and our democratic principles.
http://www.timewellspent.io/ | infrastructure | look into time well spent we are building a non profit organization dedicated to creating a humane future where technology is in harmony with our well being our social values and our democratic principles | 1
150,660 | 5,783,215,192 | IssuesEvent | 2017-04-30 06:38:21 | danrabbit/nimbus | https://api.github.com/repos/danrabbit/nimbus | opened | Don't try to update while we're already updating | Priority: Low Status: Confirmed | There's not currently any mechanism to prevent triggering another update | 1.0 | Don't try to update while we're already updating - There's not currently any mechanism to prevent triggering another update | non_infrastructure | don t try to update while we re already updating there s not currently any mechanism to prevent triggering another update | 0 |
24,214 | 17,014,179,061 | IssuesEvent | 2021-07-02 09:36:18 | kaitai-io/kaitai_struct | https://api.github.com/repos/kaitai-io/kaitai_struct | closed | Add link to xref.html in the format gallery | infrastructure | While digging in the generating code for the [format gallery](https://formats.kaitai.io/), I noticed that every time it generates page https://formats.kaitai.io/xref.html, which is quite a neat summary of all formats included with their licenses and cross-references. It's a pity that there isn't any link leading to it, so nobody can actually get there. So I think it makes sense to add one. Probably on the [format gallery homepage](https://formats.kaitai.io/). | 1.0 | Add link to xref.html in the format gallery - While digging in the generating code for the [format gallery](https://formats.kaitai.io/), I noticed that every time it generates page https://formats.kaitai.io/xref.html, which is quite a neat summary of all formats included with their licenses and cross-references. It's a pity that there isn't any link leading to it, so nobody can actually get there. So I think it makes sense to add one. Probably on the [format gallery homepage](https://formats.kaitai.io/). | infrastructure | add link to xref html in the format gallery while digging in the generating code for the i noticed that every time it generates page which is quite a neat summary of all formats included with their licenses and cross references it s a pity that there isn t any link leading to it so nobody can actually get there so i think it makes sense to add one probably on the | 1 |
20,056 | 13,643,494,513 | IssuesEvent | 2020-09-25 17:13:15 | microsoft/react-native-windows | https://api.github.com/repos/microsoft/react-native-windows | closed | Investigate and resolve warnings | Area: Infrastructure bug help wanted | The new pipeline showed all warnings on the top of PR. It has too many warnings.
we should investigate and resolve the warnings.

 | 1.0 | Investigate and resolve warnings - The new pipeline showed all warnings on the top of PR. It has too many warnings.
we should investigate and resolve the warnings.

 | infrastructure | investigate and resolve warnings the new pipeline showed all warnings on the top of pr it has too many warnings we should investigate and resolve the warnings | 1