Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
145,366 | 13,149,088,722 | IssuesEvent | 2020-08-09 02:28:21 | kubernetes/kops | https://api.github.com/repos/kubernetes/kops | closed | api server documentation generation | area/documentation lifecycle/frozen | We need to clean up the instructions for using the tool. We have an issue open: https://github.com/kubernetes-incubator/reference-docs/issues/32 | 1.0 | api server documentation generation - We need to clean up the instructions for using the tool. We have an issue open: https://github.com/kubernetes-incubator/reference-docs/issues/32 | non_main | api server documentation generation we need to clean up the instructions for using the tool we have an issue open | 0 |
121,678 | 26,012,688,244 | IssuesEvent | 2022-12-21 04:25:22 | UnitTestBot/UTBotJava | https://api.github.com/repos/UnitTestBot/UTBotJava | closed | Useless assignment of enum values to corresponding static fields in generated tests | bug codegen engine release tailings | **Description**
Generated tests explicitly assign enum values to corresponding static fields, which is useless as the set of enum values is fixed, and no new instances can be created.
**To Reproduce**
Generate a test suite for the `Coin.reverse()` method in the following code.
```java
public enum Coin {
HEADS,
TAILS;
public Coin reverse() {
return this == HEADS ? TAILS : HEADS;
}
}
```
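The description notes that the set of enum values is fixed and no new instances can be created. A minimal standalone sketch (the wrapper class and its name are illustrative, not part of the report) confirms that the constants are JVM-managed singletons:

```java
// Sketch: enum constants form a fixed set of singletons, initialized
// once during class initialization, so Coin.HEADS can never refer to
// anything but the single HEADS instance.
public class EnumSingletonSketch {
    enum Coin { HEADS, TAILS }

    public static void main(String[] args) {
        // The constant set is closed at class-initialization time.
        System.out.println(Coin.values().length);                // 2
        // valueOf returns the same instance, not a copy.
        System.out.println(Coin.HEADS == Coin.valueOf("HEADS")); // true
    }
}
```

Since `Coin.HEADS` can only ever refer to this one instance, writing that instance back into its own static field is a no-op.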
**Expected behavior**
Generated tests should not contain assignments of enum values to corresponding static fields.
**Actual behavior**
Generated tests explicitly assign enum values to corresponding static fields using reflection.
Note: this behavior depends on the enum support that is not in `main` yet (PR #611). When checking on `main`, the behavior may differ.
**Visual proofs (screenshots, logs, images)**
Generated tests:
```java
///region SUCCESSFUL EXECUTIONS for method reverse()
/**
* <pre>
* Test executes conditions:
* {@code (this == HEADS): False }
* returns from: {@code return this == HEADS ? TAILS : HEADS; }
* </pre>
*/
@Test
//@org.junit.jupiter.api.DisplayName("reverse: this == HEADS : False -> return this == HEADS ? TAILS : HEADS")
public void testReverse_NotEqualsHEADS() throws ClassNotFoundException, IllegalAccessException, NoSuchFieldException {
Coin prevHEADS = Coin.HEADS;
try {
Coin heads = Coin.HEADS;
Class coinClazz = Class.forName("enums.Coin");
setStaticField(coinClazz, "HEADS", heads);
Coin coin = Coin.TAILS;
Coin actual = coin.reverse();
assertEquals(heads, actual);
} finally {
setStaticField(Coin.class, "HEADS", prevHEADS);
}
}
/**
* <pre>
* Test executes conditions:
* {@code (this == HEADS): True }
* returns from: {@code return this == HEADS ? TAILS : HEADS; }
* </pre>
*/
@Test
//@org.junit.jupiter.api.DisplayName("reverse: this == HEADS : True -> return this == HEADS ? TAILS : HEADS")
public void testReverse_EqualsHEADS() throws ClassNotFoundException, IllegalAccessException, NoSuchFieldException {
Coin prevHEADS = Coin.HEADS;
Coin prevTAILS = Coin.TAILS;
try {
Coin heads = Coin.HEADS;
Class coinClazz = Class.forName("enums.Coin");
setStaticField(coinClazz, "HEADS", heads);
Coin tails = Coin.TAILS;
setStaticField(coinClazz, "TAILS", tails);
Coin actual = heads.reverse();
assertEquals(tails, actual);
} finally {
setStaticField(Coin.class, "HEADS", prevHEADS);
setStaticField(Coin.class, "TAILS", prevTAILS);
}
}
///endregion
```
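For comparison, once the redundant static-field handling is stripped, the two tests above reduce to a direct call and an assertion per branch. The sketch below uses a plain `main` and a nested copy of `Coin` so that it is self-contained; real generated code would keep the JUnit `@Test` structure:

```java
// Sketch of the two generated tests without the redundant
// setStaticField calls and try/finally restore blocks.
public class CoinReverseSketch {
    enum Coin {
        HEADS,
        TAILS;

        Coin reverse() {
            return this == HEADS ? TAILS : HEADS;
        }
    }

    public static void main(String[] args) {
        // (this == HEADS): False -> reverse() returns HEADS
        System.out.println(Coin.TAILS.reverse()); // HEADS
        // (this == HEADS): True -> reverse() returns TAILS
        System.out.println(Coin.HEADS.reverse()); // TAILS
    }
}
```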
**Environment**
This behavior does not depend on any specific test environment.
**Additional context**
Assignment of specific values to static fields (usually via reflection) and restoring the old values after the test is a common trait of UTBot-generated tests. In general this behavior is necessary, as the codegen must guarantee that the initial state of each object matches the initial state assumed by the symbolic engine. On the other hand, these assignments are often useless. For example, final static fields of JDK classes are correctly initialized by the JDK code, and there is no sensible way to assign anything else to them without risking breakage. Unfortunately, correct yet non-redundant object initialization appears to be hard in the general case.
Maybe it might be done for some special cases like enum value initialization, as we can be sure that each corresponding static field already has the correct value at the start of the test. | 1.0 | [text_combine: verbatim duplicate of the issue title and body above] | non_main | [text: lowercased, punctuation-stripped duplicate of the issue title and body above] | 0 |
326,685 | 28,012,119,040 | IssuesEvent | 2023-03-27 19:26:33 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulatorStressTest.asyncInvocation_and_syncBackups_and_asyncBackups | Team: Core Type: Test-Failure Source: Internal Module: Invocation System | _master_ (commit ee2404a117f71eb319ae58a30f9609ed0e9a4ba7)
Failed on nightly run with Oracle JDK 8: http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-master-OracleJDK8-nightly/846/testReport/com.hazelcast.spi.impl.operationservice.impl/BackpressureRegulatorStressTest/asyncInvocation_and_syncBackups_and_asyncBackups/
Stacktrace:
```
java.lang.AssertionError: No error should have been thrown, but StressThread-5 completed error expected null, but was:<com.hazelcast.core.HazelcastOverloadException: Failed to start invocation due to overload: Invocation{op=com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulatorStressTest$DummyOperation{serviceName='null', identityHash=2023014565, partitionId=0, replicaIndex=0, callId=0, invocationTime=1622007432304 (2021-05-26 05:37:12.304), waitTimeout=-1, callTimeout=60000, tenantControl=com.hazelcast.spi.impl.tenantcontrol.NoopTenantControl@0}, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeoutMillis=60000, firstInvocationTimeMs=1622007432304, firstInvocationTime='2021-05-26 05:37:12.304', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 00:00:00.000', target=[127.0.0.1]:5702, pendingResponse={VOID}, backupsAcksExpected=-1, backupsAcksReceived=0, connection=null}>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotNull(Assert.java:756)
at org.junit.Assert.assertNull(Assert.java:738)
at com.hazelcast.test.TestThread.assertSucceedsEventually(TestThread.java:78)
at com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulatorStressTest.test(BackpressureRegulatorStressTest.java:177)
at com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulatorStressTest.asyncInvocation_and_syncBackups_and_asyncBackups(BackpressureRegulatorStressTest.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:115)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:107)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
```
Standard output:
```
05:37:12,079 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MetricsConfigHelper] main - [LOCAL] [dev] [5.0-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:37:12,085 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [system] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT]
o o O o---o o--o o o-o O o-o o-O-o o--o o O o-O-o o--o o-o o--o o o
| | / \ / | | / / \ | | | | | / \ | | o o | | |\ /|
O--O o---o -O- O-o | O o---o o-o | O--o | o---o | O-o | | O-Oo | O |
| | | | / | | \ | | | | | | | | | | o o | \ | |
o o o o o---o o--o O---o o-o o o o--o o o O---o o o o o o-o o o o o
05:37:12,085 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [system] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Copyright (c) 2008-2021, Hazelcast, Inc. All Rights Reserved.
05:37:12,085 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [system] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Hazelcast Platform 5.0-SNAPSHOT (20210526 - ee2404a) starting at [127.0.0.1]:5701
05:37:12,085 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [system] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Cluster name: dev
05:37:12,092 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MetricsConfigHelper] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:37:12,093 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [BackpressureRegulator] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Backpressure is enabled, maxConcurrentInvocations:22, syncWindow: 10
05:37:12,097 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [CPSubsystem] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:37:12,100 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [JetService] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Setting number of cooperative threads and default parallelism to 8
05:37:12,101 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [Diagnostics] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:37:12,102 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [LifecycleService] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5701 is STARTING
05:37:12,102 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [JetExtension] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Jet extension is enabled after the cluster version upgrade.
05:37:12,102 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [ClusterService] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT]
Members {size:1, ver:1} [
Member [127.0.0.1]:5701 - 444fbe20-06b5-4bb7-8ce6-e365781a735f this
]
05:37:12,102 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
05:37:12,102 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [JetExtension] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Jet extension is enabled
05:37:12,102 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
05:37:12,102 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [LifecycleService] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5701 is STARTED
05:37:12,102 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MetricsConfigHelper] main - [LOCAL] [dev] [5.0-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:37:12,103 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [system] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT]
o o O o---o o--o o o-o O o-o o-O-o o--o o O o-O-o o--o o-o o--o o o
| | / \ / | | / / \ | | | | | / \ | | o o | | |\ /|
O--O o---o -O- O-o | O o---o o-o | O--o | o---o | O-o | | O-Oo | O |
| | | | / | | \ | | | | | | | | | | o o | \ | |
o o o o o---o o--o O---o o-o o o o--o o o O---o o o o o o-o o o o o
05:37:12,103 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [system] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Copyright (c) 2008-2021, Hazelcast, Inc. All Rights Reserved.
05:37:12,103 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [system] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Hazelcast Platform 5.0-SNAPSHOT (20210526 - ee2404a) starting at [127.0.0.1]:5702
05:37:12,103 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [system] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Cluster name: dev
05:37:12,105 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MetricsConfigHelper] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:37:12,105 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [BackpressureRegulator] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Backpressure is enabled, maxConcurrentInvocations:22, syncWindow: 10
05:37:12,108 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [CPSubsystem] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:37:12,110 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [JetService] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Setting number of cooperative threads and default parallelism to 8
05:37:12,112 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [Diagnostics] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:37:12,112 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [LifecycleService] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5702 is STARTING
05:37:12,112 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MockServer] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5701, alive=true}
05:37:12,112 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MockServer] hz.sad_engelbart.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5702, alive=true}
05:37:12,113 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [ClusterService] hz.sad_engelbart.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT]
Members {size:2, ver:2} [
Member [127.0.0.1]:5701 - 444fbe20-06b5-4bb7-8ce6-e365781a735f this
Member [127.0.0.1]:5702 - 1877f0a8-25ad-40ec-8849-972bd1e4f5d0
]
05:37:12,202 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
05:37:12,202 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
05:37:12,213 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [JetExtension] hz.crazy_engelbart.generic-operation.thread-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Jet extension is enabled after the cluster version upgrade.
05:37:12,214 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [ClusterService] hz.crazy_engelbart.generic-operation.thread-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT]
Members {size:2, ver:2} [
Member [127.0.0.1]:5701 - 444fbe20-06b5-4bb7-8ce6-e365781a735f
Member [127.0.0.1]:5702 - 1877f0a8-25ad-40ec-8849-972bd1e4f5d0 this
]
05:37:12,214 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [JetExtension] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Jet extension is enabled
05:37:12,214 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [LifecycleService] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5702 is STARTED
05:37:12,215 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [PartitionStateManager] hz.sad_engelbart.generic-operation.thread-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Initializing cluster partition table arrangement...
05:37:12,225 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [BackpressureRegulatorStressTest$StressThread] StressThread-5 - StressThread-5 Starting
05:37:22,302 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:22,302 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:23,103 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [SlowOperationDetector] hz.sad_engelbart.SlowOperationDetectorThread - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Slow operation detected: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask
Hint: You can enable the logging of stack traces with the following system property: -Dhazelcast.slow.operation.detector.stacktrace.logging.enabled
05:37:23,103 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [SlowOperationDetector] hz.sad_engelbart.SlowOperationDetectorThread - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Slow operation detected: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask (2 invocations)
05:37:23,104 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [SlowOperationDetector] hz.sad_engelbart.SlowOperationDetectorThread - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Slow operation detected: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask (3 invocations)
05:37:23,104 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [SlowOperationDetector] hz.sad_engelbart.SlowOperationDetectorThread - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Slow operation detected: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask (4 invocations)
05:37:23,105 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [SlowOperationDetector] hz.sad_engelbart.SlowOperationDetectorThread - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Slow operation detected: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask (5 invocations)
05:37:32,113 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-11 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:32,402 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:32,402 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:42,503 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:42,503 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:44,113 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:52,603 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:52,603 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-16 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:56,114 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-2 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:38:02,703 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:38:02,703 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-16 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:38:08,114 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-13 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:38:12,324 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [BackpressureRegulatorStressTest$StressThread] StressThread-5 - StressThread-5 Completed with failure
com.hazelcast.core.HazelcastOverloadException: Failed to start invocation due to overload: Invocation{op=com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulatorStressTest$DummyOperation{serviceName='null', identityHash=2023014565, partitionId=0, replicaIndex=0, callId=0, invocationTime=1622007432304 (2021-05-26 05:37:12.304), waitTimeout=-1, callTimeout=60000, tenantControl=com.hazelcast.spi.impl.tenantcontrol.NoopTenantControl@0}, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeoutMillis=60000, firstInvocationTimeMs=1622007432304, firstInvocationTime='2021-05-26 05:37:12.304', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 00:00:00.000', target=[127.0.0.1]:5702, pendingResponse={VOID}, backupsAcksExpected=-1, backupsAcksReceived=0, connection=null}
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:129) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:569) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.OperationServiceImpl.invokeOnPartition(OperationServiceImpl.java:330) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulatorStressTest$StressThread.asyncInvoke(BackpressureRegulatorStressTest.java:253) ~[test-classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulatorStressTest$StressThread.doRun(BackpressureRegulatorStressTest.java:243) ~[test-classes/:?]
at com.hazelcast.test.TestThread.run(TestThread.java:43) [test-classes/:?]
Caused by: com.hazelcast.core.HazelcastOverloadException: Timed out trying to acquire another call ID. maxConcurrentInvocations = 22, backoffTimeout = 60000 msecs, elapsed:60019 msecs
at com.hazelcast.spi.impl.sequence.CallIdSequenceWithBackpressure.handleNoSpaceLeft(CallIdSequenceWithBackpressure.java:65) ~[classes/:?]
at com.hazelcast.spi.impl.sequence.AbstractCallIdSequence.next(AbstractCallIdSequence.java:62) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:127) ~[classes/:?]
... 7 more
05:38:12,324 ERROR |asyncInvocation_and_syncBackups_and_asyncBackups| - [OperationExecutorImpl] hz.sad_engelbart.partition-operation.thread-5 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Failed to process: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask@1a43b8f5 on: hz.sad_engelbart.partition-operation.thread-5
com.hazelcast.core.HazelcastOverloadException: Failed to start invocation due to overload: Invocation{op=com.hazelcast.internal.partition.operation.PartitionBackupReplicaAntiEntropyOperation{serviceName='hz:core:partitionService', identityHash=1661789955, partitionId=5, replicaIndex=1, callId=0, invocationTime=1622007432302 (2021-05-26 05:37:12.302), waitTimeout=-1, callTimeout=60000, tenantControl=com.hazelcast.spi.impl.tenantcontrol.NoopTenantControl@0, versions={}}, tryCount=10, tryPauseMillis=250, invokeCount=1, callTimeoutMillis=60000, firstInvocationTimeMs=1622007432302, firstInvocationTime='2021-05-26 05:37:12.302', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 00:00:00.000', target=[127.0.0.1]:5702, pendingResponse={VOID}, backupsAcksExpected=-1, backupsAcksReceived=0, connection=null}
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:129) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:569) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) ~[classes/:?]
at com.hazelcast.internal.partition.impl.AbstractPartitionPrimaryReplicaAntiEntropyTask.invokePartitionBackupReplicaAntiEntropyOp(AbstractPartitionPrimaryReplicaAntiEntropyTask.java:122) ~[classes/:?]
at com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask.run(CheckPartitionReplicaVersionTask.java:63) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:192) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:207) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:141) [classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
Caused by: com.hazelcast.core.HazelcastOverloadException: Timed out trying to acquire another call ID. maxConcurrentInvocations = 22, backoffTimeout = 60000 msecs, elapsed:60019 msecs
at com.hazelcast.spi.impl.sequence.CallIdSequenceWithBackpressure.handleNoSpaceLeft(CallIdSequenceWithBackpressure.java:65) ~[classes/:?]
at com.hazelcast.spi.impl.sequence.AbstractCallIdSequence.next(AbstractCallIdSequence.java:62) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:127) ~[classes/:?]
... 11 more
05:38:12,324 ERROR |asyncInvocation_and_syncBackups_and_asyncBackups| - [OperationExecutorImpl] hz.sad_engelbart.partition-operation.thread-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Failed to process: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask@1e7c3fe2 on: hz.sad_engelbart.partition-operation.thread-3
com.hazelcast.core.HazelcastOverloadException: Failed to start invocation due to overload: Invocation{op=com.hazelcast.internal.partition.operation.PartitionBackupReplicaAntiEntropyOperation{serviceName='hz:core:partitionService', identityHash=1803361505, partitionId=3, replicaIndex=1, callId=0, invocationTime=1622007432302 (2021-05-26 05:37:12.302), waitTimeout=-1, callTimeout=60000, tenantControl=com.hazelcast.spi.impl.tenantcontrol.NoopTenantControl@0, versions={}}, tryCount=10, tryPauseMillis=250, invokeCount=1, callTimeoutMillis=60000, firstInvocationTimeMs=1622007432302, firstInvocationTime='2021-05-26 05:37:12.302', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 00:00:00.000', target=[127.0.0.1]:5702, pendingResponse={VOID}, backupsAcksExpected=-1, backupsAcksReceived=0, connection=null}
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:129) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:569) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) ~[classes/:?]
at com.hazelcast.internal.partition.impl.AbstractPartitionPrimaryReplicaAntiEntropyTask.invokePartitionBackupReplicaAntiEntropyOp(AbstractPartitionPrimaryReplicaAntiEntropyTask.java:122) ~[classes/:?]
at com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask.run(CheckPartitionReplicaVersionTask.java:63) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:192) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:207) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:141) [classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
Caused by: com.hazelcast.core.HazelcastOverloadException: Timed out trying to acquire another call ID. maxConcurrentInvocations = 22, backoffTimeout = 60000 msecs, elapsed:60019 msecs
at com.hazelcast.spi.impl.sequence.CallIdSequenceWithBackpressure.handleNoSpaceLeft(CallIdSequenceWithBackpressure.java:65) ~[classes/:?]
at com.hazelcast.spi.impl.sequence.AbstractCallIdSequence.next(AbstractCallIdSequence.java:62) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:127) ~[classes/:?]
... 11 more
05:38:12,324 ERROR |asyncInvocation_and_syncBackups_and_asyncBackups| - [OperationExecutorImpl] hz.sad_engelbart.partition-operation.thread-2 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Failed to process: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask@10bbd385 on: hz.sad_engelbart.partition-operation.thread-2
com.hazelcast.core.HazelcastOverloadException: Failed to start invocation due to overload: Invocation{op=com.hazelcast.internal.partition.operation.PartitionBackupReplicaAntiEntropyOperation{serviceName='hz:core:partitionService', identityHash=2110114862, partitionId=2, replicaIndex=1, callId=0, invocationTime=1622007432303 (2021-05-26 05:37:12.303), waitTimeout=-1, callTimeout=60000, tenantControl=com.hazelcast.spi.impl.tenantcontrol.NoopTenantControl@0, versions={}}, tryCount=10, tryPauseMillis=250, invokeCount=1, callTimeoutMillis=60000, firstInvocationTimeMs=1622007432303, firstInvocationTime='2021-05-26 05:37:12.303', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 00:00:00.000', target=[127.0.0.1]:5702, pendingResponse={VOID}, backupsAcksExpected=-1, backupsAcksReceived=0, connection=null}
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:129) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:569) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) ~[classes/:?]
at com.hazelcast.internal.partition.impl.AbstractPartitionPrimaryReplicaAntiEntropyTask.invokePartitionBackupReplicaAntiEntropyOp(AbstractPartitionPrimaryReplicaAntiEntropyTask.java:122) ~[classes/:?]
at com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask.run(CheckPartitionReplicaVersionTask.java:63) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:192) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:207) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:141) [classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
Caused by: com.hazelcast.core.HazelcastOverloadException: Timed out trying to acquire another call ID. maxConcurrentInvocations = 22, backoffTimeout = 60000 msecs, elapsed:60019 msecs
at com.hazelcast.spi.impl.sequence.CallIdSequenceWithBackpressure.handleNoSpaceLeft(CallIdSequenceWithBackpressure.java:65) ~[classes/:?]
at com.hazelcast.spi.impl.sequence.AbstractCallIdSequence.next(AbstractCallIdSequence.java:62) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:127) ~[classes/:?]
... 11 more
05:38:12,324 ERROR |asyncInvocation_and_syncBackups_and_asyncBackups| - [OperationExecutorImpl] hz.sad_engelbart.partition-operation.thread-6 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Failed to process: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask@5f4135d9 on: hz.sad_engelbart.partition-operation.thread-6
com.hazelcast.core.HazelcastOverloadException: Failed to start invocation due to overload: Invocation{op=com.hazelcast.internal.partition.operation.PartitionBackupReplicaAntiEntropyOperation{serviceName='hz:core:partitionService', identityHash=2088966286, partitionId=6, replicaIndex=1, callId=0, invocationTime=1622007432302 (2021-05-26 05:37:12.302), waitTimeout=-1, callTimeout=60000, tenantControl=com.hazelcast.spi.impl.tenantcontrol.NoopTenantControl@0, versions={}}, tryCount=10, tryPauseMillis=250, invokeCount=1, callTimeoutMillis=60000, firstInvocationTimeMs=1622007432302, firstInvocationTime='2021-05-26 05:37:12.302', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 00:00:00.000', target=[127.0.0.1]:5702, pendingResponse={VOID}, backupsAcksExpected=-1, backupsAcksReceived=0, connection=null}
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:129) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:569) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) ~[classes/:?]
at com.hazelcast.internal.partition.impl.AbstractPartitionPrimaryReplicaAntiEntropyTask.invokePartitionBackupReplicaAntiEntropyOp(AbstractPartitionPrimaryReplicaAntiEntropyTask.java:122) ~[classes/:?]
at com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask.run(CheckPartitionReplicaVersionTask.java:63) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:192) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:207) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:141) [classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
Caused by: com.hazelcast.core.HazelcastOverloadException: Timed out trying to acquire another call ID. maxConcurrentInvocations = 22, backoffTimeout = 60000 msecs, elapsed:60020 msecs
at com.hazelcast.spi.impl.sequence.CallIdSequenceWithBackpressure.handleNoSpaceLeft(CallIdSequenceWithBackpressure.java:65) ~[classes/:?]
at com.hazelcast.spi.impl.sequence.AbstractCallIdSequence.next(AbstractCallIdSequence.java:62) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:127) ~[classes/:?]
... 11 more
05:38:12,324 ERROR |asyncInvocation_and_syncBackups_and_asyncBackups| - [OperationExecutorImpl] hz.sad_engelbart.partition-operation.thread-0 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Failed to process: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask@1128e24f on: hz.sad_engelbart.partition-operation.thread-0
com.hazelcast.core.HazelcastOverloadException: Failed to start invocation due to overload: Invocation{op=com.hazelcast.internal.partition.operation.PartitionBackupReplicaAntiEntropyOperation{serviceName='hz:core:partitionService', identityHash=233902025, partitionId=8, replicaIndex=1, callId=0, invocationTime=1622007432303 (2021-05-26 05:37:12.303), waitTimeout=-1, callTimeout=60000, tenantControl=com.hazelcast.spi.impl.tenantcontrol.NoopTenantControl@0, versions={}}, tryCount=10, tryPauseMillis=250, invokeCount=1, callTimeoutMillis=60000, firstInvocationTimeMs=1622007432303, firstInvocationTime='2021-05-26 05:37:12.303', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 00:00:00.000', target=[127.0.0.1]:5702, pendingResponse={VOID}, backupsAcksExpected=-1, backupsAcksReceived=0, connection=null}
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:129) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:569) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) ~[classes/:?]
at com.hazelcast.internal.partition.impl.AbstractPartitionPrimaryReplicaAntiEntropyTask.invokePartitionBackupReplicaAntiEntropyOp(AbstractPartitionPrimaryReplicaAntiEntropyTask.java:122) ~[classes/:?]
at com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask.run(CheckPartitionReplicaVersionTask.java:63) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:192) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:207) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:141) [classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
Caused by: com.hazelcast.core.HazelcastOverloadException: Timed out trying to acquire another call ID. maxConcurrentInvocations = 22, backoffTimeout = 60000 msecs, elapsed:60020 msecs
at com.hazelcast.spi.impl.sequence.CallIdSequenceWithBackpressure.handleNoSpaceLeft(CallIdSequenceWithBackpressure.java:65) ~[classes/:?]
at com.hazelcast.spi.impl.sequence.AbstractCallIdSequence.next(AbstractCallIdSequence.java:62) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:127) ~[classes/:?]
... 11 more
05:38:12,804 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-16 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:38:12,804 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:38:13,107 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [InvocationMonitor] hz.sad_engelbart.InvocationMonitorThread - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Invocations:22 timeouts:0 backup-timeouts:22
05:38:13,731 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobRepository] hz.sad_engelbart.cached.thread-16 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms
05:38:13,731 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobRepository] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms
05:38:13,741 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [LifecycleService] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5702 is SHUTTING_DOWN
05:38:13,741 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [Node] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Terminating forcefully...
05:38:13,741 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [Node] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Shutting down connection manager...
05:38:13,741 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MockServer] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5702, alive=false}
05:38:13,741 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MockServer] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5701, alive=false}
05:38:13,741 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MembershipManager] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Removing Member [127.0.0.1]:5702 - 1877f0a8-25ad-40ec-8849-972bd1e4f5d0
05:38:13,741 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [ClusterService] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT]
Members {size:1, ver:3} [
Member [127.0.0.1]:5701 - 444fbe20-06b5-4bb7-8ce6-e365781a735f this
]
05:38:13,741 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [TransactionManagerService] hz.sad_engelbart.cached.thread-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Committing/rolling-back live transactions of [127.0.0.1]:5702, UUID: 1877f0a8-25ad-40ec-8849-972bd1e4f5d0
05:38:13,741 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [Node] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Shutting down node engine...
05:38:13,743 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MigrationManager] hz.sad_engelbart.migration - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Partition balance is ok, no need to repartition.
05:38:13,745 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [NodeExtension] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Destroying node NodeExtension.
05:38:13,745 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [Node] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Hazelcast Shutdown is completed in 4 ms.
05:38:13,745 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [LifecycleService] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5702 is SHUTDOWN
05:38:13,745 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [LifecycleService] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5701 is SHUTTING_DOWN
05:38:13,745 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [Node] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Terminating forcefully...
05:38:13,745 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [Node] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Shutting down connection manager...
05:38:13,745 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [Node] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Shutting down node engine...
05:38:13,748 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [NodeExtension] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Destroying node NodeExtension.
05:38:13,748 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [Node] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Hazelcast Shutdown is completed in 3 ms.
05:38:13,748 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [LifecycleService] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5701 is SHUTDOWN
BuildInfo right after asyncInvocation_and_syncBackups_and_asyncBackups(com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulatorStressTest): BuildInfo{version='5.0-SNAPSHOT', build='20210526', buildNumber=20210526, revision=ee2404a, enterprise=false, serializationVersion=1, jet=JetBuildInfo{version='5.0-SNAPSHOT', build='20210526', revision='ee2404a'}}
Hiccups measured while running test 'asyncInvocation_and_syncBackups_and_asyncBackups(com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulatorStressTest):'
05:37:10, accumulated pauses: 43 ms, max pause: 6 ms, pauses over 1000 ms: 0
05:37:15, accumulated pauses: 29 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:37:20, accumulated pauses: 29 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:37:25, accumulated pauses: 28 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:37:30, accumulated pauses: 31 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:37:35, accumulated pauses: 29 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:37:40, accumulated pauses: 28 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:37:45, accumulated pauses: 30 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:37:50, accumulated pauses: 30 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:37:55, accumulated pauses: 31 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:38:00, accumulated pauses: 35 ms, max pause: 1 ms, pauses over 1000 ms: 0
05:38:05, accumulated pauses: 27 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:38:10, accumulated pauses: 519 ms, max pause: 498 ms, pauses over 1000 ms: 0
```
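The `HazelcastOverloadException` in the logs above is the back-pressure mechanism rejecting work: the invocation registry hands out call IDs from a bounded sequence (`maxConcurrentInvocations = 22` here), and when every slot stays occupied for the full `backoffTimeout` (60000 ms in this run) the acquire gives up. A minimal sketch of that saturation behavior — illustrative only, with assumed names and none of Hazelcast's actual `CallIdSequenceWithBackpressure` backoff logic:

```java
// Illustrative sketch (assumption: not Hazelcast's real implementation) of a
// bounded call-id sequence. tryAcquire() returning false models the path that,
// after backoffTimeout expires, surfaces as HazelcastOverloadException.
public class BackpressureSketch {
    private final int maxConcurrentInvocations;
    private int inFlight; // call ids handed out but not yet completed

    public BackpressureSketch(int maxConcurrentInvocations) {
        this.maxConcurrentInvocations = maxConcurrentInvocations;
    }

    /** Try to take a call-id slot; false means the sequence is saturated. */
    public synchronized boolean tryAcquire() {
        if (inFlight >= maxConcurrentInvocations) {
            return false;
        }
        inFlight++;
        return true;
    }

    /** Completing (or deregistering) an invocation frees its slot. */
    public synchronized void release() {
        inFlight--;
    }

    public static void main(String[] args) {
        BackpressureSketch seq = new BackpressureSketch(22); // value from the log above
        for (int i = 0; i < 22; i++) {
            if (!seq.tryAcquire()) throw new AssertionError("slot " + i + " should be free");
        }
        // The 23rd concurrent invocation is rejected; Hazelcast retries with idle
        // backoff and, once backoffTimeout elapses, throws HazelcastOverloadException.
        if (seq.tryAcquire()) throw new AssertionError("should be saturated at 22");
        seq.release();
        if (!seq.tryAcquire()) throw new AssertionError("released slot should be reusable");
        System.out.println("saturated-at-22");
    }
}
```

In the failing run the slots never drain within 60 s because backup-ack responses lag, so both the stress threads and the periodic `CheckPartitionReplicaVersionTask` anti-entropy operations hit the same saturated sequence.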
com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulatorStressTest.asyncInvocation_and_syncBackups_and_asyncBackups - _master_ (commit ee2404a117f71eb319ae58a30f9609ed0e9a4ba7)
Failed on nightly run with Oracle JDK 8: http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-master-OracleJDK8-nightly/846/testReport/com.hazelcast.spi.impl.operationservice.impl/BackpressureRegulatorStressTest/asyncInvocation_and_syncBackups_and_asyncBackups/
Stacktrace:
```
java.lang.AssertionError: No error should have been thrown, but StressThread-5 completed error expected null, but was:<com.hazelcast.core.HazelcastOverloadException: Failed to start invocation due to overload: Invocation{op=com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulatorStressTest$DummyOperation{serviceName='null', identityHash=2023014565, partitionId=0, replicaIndex=0, callId=0, invocationTime=1622007432304 (2021-05-26 05:37:12.304), waitTimeout=-1, callTimeout=60000, tenantControl=com.hazelcast.spi.impl.tenantcontrol.NoopTenantControl@0}, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeoutMillis=60000, firstInvocationTimeMs=1622007432304, firstInvocationTime='2021-05-26 05:37:12.304', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 00:00:00.000', target=[127.0.0.1]:5702, pendingResponse={VOID}, backupsAcksExpected=-1, backupsAcksReceived=0, connection=null}>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotNull(Assert.java:756)
at org.junit.Assert.assertNull(Assert.java:738)
at com.hazelcast.test.TestThread.assertSucceedsEventually(TestThread.java:78)
at com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulatorStressTest.test(BackpressureRegulatorStressTest.java:177)
at com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulatorStressTest.asyncInvocation_and_syncBackups_and_asyncBackups(BackpressureRegulatorStressTest.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:115)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:107)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
```
Standard output:
```
05:37:12,079 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MetricsConfigHelper] main - [LOCAL] [dev] [5.0-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:37:12,085 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [system] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT]
o o O o---o o--o o o-o O o-o o-O-o o--o o O o-O-o o--o o-o o--o o o
| | / \ / | | / / \ | | | | | / \ | | o o | | |\ /|
O--O o---o -O- O-o | O o---o o-o | O--o | o---o | O-o | | O-Oo | O |
| | | | / | | \ | | | | | | | | | | o o | \ | |
o o o o o---o o--o O---o o-o o o o--o o o O---o o o o o o-o o o o o
05:37:12,085 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [system] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Copyright (c) 2008-2021, Hazelcast, Inc. All Rights Reserved.
05:37:12,085 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [system] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Hazelcast Platform 5.0-SNAPSHOT (20210526 - ee2404a) starting at [127.0.0.1]:5701
05:37:12,085 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [system] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Cluster name: dev
05:37:12,092 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MetricsConfigHelper] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:37:12,093 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [BackpressureRegulator] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Backpressure is enabled, maxConcurrentInvocations:22, syncWindow: 10
05:37:12,097 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [CPSubsystem] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:37:12,100 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [JetService] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Setting number of cooperative threads and default parallelism to 8
05:37:12,101 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [Diagnostics] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:37:12,102 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [LifecycleService] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5701 is STARTING
05:37:12,102 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [JetExtension] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Jet extension is enabled after the cluster version upgrade.
05:37:12,102 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [ClusterService] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT]
Members {size:1, ver:1} [
Member [127.0.0.1]:5701 - 444fbe20-06b5-4bb7-8ce6-e365781a735f this
]
05:37:12,102 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
05:37:12,102 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [JetExtension] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Jet extension is enabled
05:37:12,102 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
05:37:12,102 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [LifecycleService] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5701 is STARTED
05:37:12,102 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MetricsConfigHelper] main - [LOCAL] [dev] [5.0-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
05:37:12,103 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [system] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT]
o o O o---o o--o o o-o O o-o o-O-o o--o o O o-O-o o--o o-o o--o o o
| | / \ / | | / / \ | | | | | / \ | | o o | | |\ /|
O--O o---o -O- O-o | O o---o o-o | O--o | o---o | O-o | | O-Oo | O |
| | | | / | | \ | | | | | | | | | | o o | \ | |
o o o o o---o o--o O---o o-o o o o--o o o O---o o o o o o-o o o o o
05:37:12,103 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [system] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Copyright (c) 2008-2021, Hazelcast, Inc. All Rights Reserved.
05:37:12,103 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [system] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Hazelcast Platform 5.0-SNAPSHOT (20210526 - ee2404a) starting at [127.0.0.1]:5702
05:37:12,103 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [system] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Cluster name: dev
05:37:12,105 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MetricsConfigHelper] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
05:37:12,105 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [BackpressureRegulator] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Backpressure is enabled, maxConcurrentInvocations:22, syncWindow: 10
05:37:12,108 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [CPSubsystem] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
05:37:12,110 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [JetService] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Setting number of cooperative threads and default parallelism to 8
05:37:12,112 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [Diagnostics] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
05:37:12,112 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [LifecycleService] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5702 is STARTING
05:37:12,112 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MockServer] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5701, alive=true}
05:37:12,112 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MockServer] hz.sad_engelbart.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Created connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5702, alive=true}
05:37:12,113 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [ClusterService] hz.sad_engelbart.priority-generic-operation.thread-0 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT]
Members {size:2, ver:2} [
Member [127.0.0.1]:5701 - 444fbe20-06b5-4bb7-8ce6-e365781a735f this
Member [127.0.0.1]:5702 - 1877f0a8-25ad-40ec-8849-972bd1e4f5d0
]
05:37:12,202 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
05:37:12,202 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partitions are not yet initialized.
05:37:12,213 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [JetExtension] hz.crazy_engelbart.generic-operation.thread-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Jet extension is enabled after the cluster version upgrade.
05:37:12,214 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [ClusterService] hz.crazy_engelbart.generic-operation.thread-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT]
Members {size:2, ver:2} [
Member [127.0.0.1]:5701 - 444fbe20-06b5-4bb7-8ce6-e365781a735f
Member [127.0.0.1]:5702 - 1877f0a8-25ad-40ec-8849-972bd1e4f5d0 this
]
05:37:12,214 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [JetExtension] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Jet extension is enabled
05:37:12,214 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [LifecycleService] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5702 is STARTED
05:37:12,215 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [PartitionStateManager] hz.sad_engelbart.generic-operation.thread-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Initializing cluster partition table arrangement...
05:37:12,225 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [BackpressureRegulatorStressTest$StressThread] StressThread-5 - StressThread-5 Starting
05:37:22,302 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:22,302 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:23,103 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [SlowOperationDetector] hz.sad_engelbart.SlowOperationDetectorThread - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Slow operation detected: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask
Hint: You can enable the logging of stack traces with the following system property: -Dhazelcast.slow.operation.detector.stacktrace.logging.enabled
05:37:23,103 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [SlowOperationDetector] hz.sad_engelbart.SlowOperationDetectorThread - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Slow operation detected: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask (2 invocations)
05:37:23,104 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [SlowOperationDetector] hz.sad_engelbart.SlowOperationDetectorThread - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Slow operation detected: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask (3 invocations)
05:37:23,104 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [SlowOperationDetector] hz.sad_engelbart.SlowOperationDetectorThread - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Slow operation detected: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask (4 invocations)
05:37:23,105 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [SlowOperationDetector] hz.sad_engelbart.SlowOperationDetectorThread - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Slow operation detected: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask (5 invocations)
05:37:32,113 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-11 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:32,402 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:32,402 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:42,503 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:42,503 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:44,113 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:52,603 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:52,603 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-16 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:37:56,114 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-2 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:38:02,703 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:38:02,703 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-16 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:38:08,114 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-13 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:38:12,324 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [BackpressureRegulatorStressTest$StressThread] StressThread-5 - StressThread-5 Completed with failure
com.hazelcast.core.HazelcastOverloadException: Failed to start invocation due to overload: Invocation{op=com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulatorStressTest$DummyOperation{serviceName='null', identityHash=2023014565, partitionId=0, replicaIndex=0, callId=0, invocationTime=1622007432304 (2021-05-26 05:37:12.304), waitTimeout=-1, callTimeout=60000, tenantControl=com.hazelcast.spi.impl.tenantcontrol.NoopTenantControl@0}, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeoutMillis=60000, firstInvocationTimeMs=1622007432304, firstInvocationTime='2021-05-26 05:37:12.304', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 00:00:00.000', target=[127.0.0.1]:5702, pendingResponse={VOID}, backupsAcksExpected=-1, backupsAcksReceived=0, connection=null}
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:129) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:569) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.OperationServiceImpl.invokeOnPartition(OperationServiceImpl.java:330) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulatorStressTest$StressThread.asyncInvoke(BackpressureRegulatorStressTest.java:253) ~[test-classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulatorStressTest$StressThread.doRun(BackpressureRegulatorStressTest.java:243) ~[test-classes/:?]
at com.hazelcast.test.TestThread.run(TestThread.java:43) [test-classes/:?]
Caused by: com.hazelcast.core.HazelcastOverloadException: Timed out trying to acquire another call ID. maxConcurrentInvocations = 22, backoffTimeout = 60000 msecs, elapsed:60019 msecs
at com.hazelcast.spi.impl.sequence.CallIdSequenceWithBackpressure.handleNoSpaceLeft(CallIdSequenceWithBackpressure.java:65) ~[classes/:?]
at com.hazelcast.spi.impl.sequence.AbstractCallIdSequence.next(AbstractCallIdSequence.java:62) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:127) ~[classes/:?]
... 7 more
05:38:12,324 ERROR |asyncInvocation_and_syncBackups_and_asyncBackups| - [OperationExecutorImpl] hz.sad_engelbart.partition-operation.thread-5 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Failed to process: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask@1a43b8f5 on: hz.sad_engelbart.partition-operation.thread-5
com.hazelcast.core.HazelcastOverloadException: Failed to start invocation due to overload: Invocation{op=com.hazelcast.internal.partition.operation.PartitionBackupReplicaAntiEntropyOperation{serviceName='hz:core:partitionService', identityHash=1661789955, partitionId=5, replicaIndex=1, callId=0, invocationTime=1622007432302 (2021-05-26 05:37:12.302), waitTimeout=-1, callTimeout=60000, tenantControl=com.hazelcast.spi.impl.tenantcontrol.NoopTenantControl@0, versions={}}, tryCount=10, tryPauseMillis=250, invokeCount=1, callTimeoutMillis=60000, firstInvocationTimeMs=1622007432302, firstInvocationTime='2021-05-26 05:37:12.302', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 00:00:00.000', target=[127.0.0.1]:5702, pendingResponse={VOID}, backupsAcksExpected=-1, backupsAcksReceived=0, connection=null}
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:129) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:569) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) ~[classes/:?]
at com.hazelcast.internal.partition.impl.AbstractPartitionPrimaryReplicaAntiEntropyTask.invokePartitionBackupReplicaAntiEntropyOp(AbstractPartitionPrimaryReplicaAntiEntropyTask.java:122) ~[classes/:?]
at com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask.run(CheckPartitionReplicaVersionTask.java:63) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:192) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:207) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:141) [classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
Caused by: com.hazelcast.core.HazelcastOverloadException: Timed out trying to acquire another call ID. maxConcurrentInvocations = 22, backoffTimeout = 60000 msecs, elapsed:60019 msecs
at com.hazelcast.spi.impl.sequence.CallIdSequenceWithBackpressure.handleNoSpaceLeft(CallIdSequenceWithBackpressure.java:65) ~[classes/:?]
at com.hazelcast.spi.impl.sequence.AbstractCallIdSequence.next(AbstractCallIdSequence.java:62) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:127) ~[classes/:?]
... 11 more
05:38:12,324 ERROR |asyncInvocation_and_syncBackups_and_asyncBackups| - [OperationExecutorImpl] hz.sad_engelbart.partition-operation.thread-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Failed to process: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask@1e7c3fe2 on: hz.sad_engelbart.partition-operation.thread-3
com.hazelcast.core.HazelcastOverloadException: Failed to start invocation due to overload: Invocation{op=com.hazelcast.internal.partition.operation.PartitionBackupReplicaAntiEntropyOperation{serviceName='hz:core:partitionService', identityHash=1803361505, partitionId=3, replicaIndex=1, callId=0, invocationTime=1622007432302 (2021-05-26 05:37:12.302), waitTimeout=-1, callTimeout=60000, tenantControl=com.hazelcast.spi.impl.tenantcontrol.NoopTenantControl@0, versions={}}, tryCount=10, tryPauseMillis=250, invokeCount=1, callTimeoutMillis=60000, firstInvocationTimeMs=1622007432302, firstInvocationTime='2021-05-26 05:37:12.302', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 00:00:00.000', target=[127.0.0.1]:5702, pendingResponse={VOID}, backupsAcksExpected=-1, backupsAcksReceived=0, connection=null}
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:129) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:569) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) ~[classes/:?]
at com.hazelcast.internal.partition.impl.AbstractPartitionPrimaryReplicaAntiEntropyTask.invokePartitionBackupReplicaAntiEntropyOp(AbstractPartitionPrimaryReplicaAntiEntropyTask.java:122) ~[classes/:?]
at com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask.run(CheckPartitionReplicaVersionTask.java:63) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:192) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:207) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:141) [classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
Caused by: com.hazelcast.core.HazelcastOverloadException: Timed out trying to acquire another call ID. maxConcurrentInvocations = 22, backoffTimeout = 60000 msecs, elapsed:60019 msecs
at com.hazelcast.spi.impl.sequence.CallIdSequenceWithBackpressure.handleNoSpaceLeft(CallIdSequenceWithBackpressure.java:65) ~[classes/:?]
at com.hazelcast.spi.impl.sequence.AbstractCallIdSequence.next(AbstractCallIdSequence.java:62) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:127) ~[classes/:?]
... 11 more
05:38:12,324 ERROR |asyncInvocation_and_syncBackups_and_asyncBackups| - [OperationExecutorImpl] hz.sad_engelbart.partition-operation.thread-2 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Failed to process: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask@10bbd385 on: hz.sad_engelbart.partition-operation.thread-2
com.hazelcast.core.HazelcastOverloadException: Failed to start invocation due to overload: Invocation{op=com.hazelcast.internal.partition.operation.PartitionBackupReplicaAntiEntropyOperation{serviceName='hz:core:partitionService', identityHash=2110114862, partitionId=2, replicaIndex=1, callId=0, invocationTime=1622007432303 (2021-05-26 05:37:12.303), waitTimeout=-1, callTimeout=60000, tenantControl=com.hazelcast.spi.impl.tenantcontrol.NoopTenantControl@0, versions={}}, tryCount=10, tryPauseMillis=250, invokeCount=1, callTimeoutMillis=60000, firstInvocationTimeMs=1622007432303, firstInvocationTime='2021-05-26 05:37:12.303', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 00:00:00.000', target=[127.0.0.1]:5702, pendingResponse={VOID}, backupsAcksExpected=-1, backupsAcksReceived=0, connection=null}
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:129) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:569) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) ~[classes/:?]
at com.hazelcast.internal.partition.impl.AbstractPartitionPrimaryReplicaAntiEntropyTask.invokePartitionBackupReplicaAntiEntropyOp(AbstractPartitionPrimaryReplicaAntiEntropyTask.java:122) ~[classes/:?]
at com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask.run(CheckPartitionReplicaVersionTask.java:63) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:192) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:207) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:141) [classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
Caused by: com.hazelcast.core.HazelcastOverloadException: Timed out trying to acquire another call ID. maxConcurrentInvocations = 22, backoffTimeout = 60000 msecs, elapsed:60019 msecs
at com.hazelcast.spi.impl.sequence.CallIdSequenceWithBackpressure.handleNoSpaceLeft(CallIdSequenceWithBackpressure.java:65) ~[classes/:?]
at com.hazelcast.spi.impl.sequence.AbstractCallIdSequence.next(AbstractCallIdSequence.java:62) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:127) ~[classes/:?]
... 11 more
05:38:12,324 ERROR |asyncInvocation_and_syncBackups_and_asyncBackups| - [OperationExecutorImpl] hz.sad_engelbart.partition-operation.thread-6 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Failed to process: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask@5f4135d9 on: hz.sad_engelbart.partition-operation.thread-6
com.hazelcast.core.HazelcastOverloadException: Failed to start invocation due to overload: Invocation{op=com.hazelcast.internal.partition.operation.PartitionBackupReplicaAntiEntropyOperation{serviceName='hz:core:partitionService', identityHash=2088966286, partitionId=6, replicaIndex=1, callId=0, invocationTime=1622007432302 (2021-05-26 05:37:12.302), waitTimeout=-1, callTimeout=60000, tenantControl=com.hazelcast.spi.impl.tenantcontrol.NoopTenantControl@0, versions={}}, tryCount=10, tryPauseMillis=250, invokeCount=1, callTimeoutMillis=60000, firstInvocationTimeMs=1622007432302, firstInvocationTime='2021-05-26 05:37:12.302', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 00:00:00.000', target=[127.0.0.1]:5702, pendingResponse={VOID}, backupsAcksExpected=-1, backupsAcksReceived=0, connection=null}
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:129) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:569) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) ~[classes/:?]
at com.hazelcast.internal.partition.impl.AbstractPartitionPrimaryReplicaAntiEntropyTask.invokePartitionBackupReplicaAntiEntropyOp(AbstractPartitionPrimaryReplicaAntiEntropyTask.java:122) ~[classes/:?]
at com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask.run(CheckPartitionReplicaVersionTask.java:63) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:192) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:207) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:141) [classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
Caused by: com.hazelcast.core.HazelcastOverloadException: Timed out trying to acquire another call ID. maxConcurrentInvocations = 22, backoffTimeout = 60000 msecs, elapsed:60020 msecs
at com.hazelcast.spi.impl.sequence.CallIdSequenceWithBackpressure.handleNoSpaceLeft(CallIdSequenceWithBackpressure.java:65) ~[classes/:?]
at com.hazelcast.spi.impl.sequence.AbstractCallIdSequence.next(AbstractCallIdSequence.java:62) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:127) ~[classes/:?]
... 11 more
05:38:12,324 ERROR |asyncInvocation_and_syncBackups_and_asyncBackups| - [OperationExecutorImpl] hz.sad_engelbart.partition-operation.thread-0 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Failed to process: com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask@1128e24f on: hz.sad_engelbart.partition-operation.thread-0
com.hazelcast.core.HazelcastOverloadException: Failed to start invocation due to overload: Invocation{op=com.hazelcast.internal.partition.operation.PartitionBackupReplicaAntiEntropyOperation{serviceName='hz:core:partitionService', identityHash=233902025, partitionId=8, replicaIndex=1, callId=0, invocationTime=1622007432303 (2021-05-26 05:37:12.303), waitTimeout=-1, callTimeout=60000, tenantControl=com.hazelcast.spi.impl.tenantcontrol.NoopTenantControl@0, versions={}}, tryCount=10, tryPauseMillis=250, invokeCount=1, callTimeoutMillis=60000, firstInvocationTimeMs=1622007432303, firstInvocationTime='2021-05-26 05:37:12.303', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 00:00:00.000', target=[127.0.0.1]:5702, pendingResponse={VOID}, backupsAcksExpected=-1, backupsAcksReceived=0, connection=null}
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:129) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:569) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) ~[classes/:?]
at com.hazelcast.internal.partition.impl.AbstractPartitionPrimaryReplicaAntiEntropyTask.invokePartitionBackupReplicaAntiEntropyOp(AbstractPartitionPrimaryReplicaAntiEntropyTask.java:122) ~[classes/:?]
at com.hazelcast.internal.partition.impl.CheckPartitionReplicaVersionTask.run(CheckPartitionReplicaVersionTask.java:63) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:192) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:207) ~[classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:141) [classes/:?]
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123) [classes/:?]
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) [classes/:?]
Caused by: com.hazelcast.core.HazelcastOverloadException: Timed out trying to acquire another call ID. maxConcurrentInvocations = 22, backoffTimeout = 60000 msecs, elapsed:60020 msecs
at com.hazelcast.spi.impl.sequence.CallIdSequenceWithBackpressure.handleNoSpaceLeft(CallIdSequenceWithBackpressure.java:65) ~[classes/:?]
at com.hazelcast.spi.impl.sequence.AbstractCallIdSequence.next(AbstractCallIdSequence.java:62) ~[classes/:?]
at com.hazelcast.spi.impl.operationservice.impl.InvocationRegistry.register(InvocationRegistry.java:127) ~[classes/:?]
... 11 more
05:38:12,804 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-16 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:38:12,804 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobCoordinationService] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not starting jobs because partition replication is not in safe state, but in REPLICA_NOT_SYNC
05:38:13,107 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [InvocationMonitor] hz.sad_engelbart.InvocationMonitorThread - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Invocations:22 timeouts:0 backup-timeouts:22
05:38:13,731 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobRepository] hz.sad_engelbart.cached.thread-16 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms
05:38:13,731 DEBUG |asyncInvocation_and_syncBackups_and_asyncBackups| - [JobRepository] hz.sad_engelbart.cached.thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms
05:38:13,741 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [LifecycleService] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5702 is SHUTTING_DOWN
05:38:13,741 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [Node] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Terminating forcefully...
05:38:13,741 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [Node] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Shutting down connection manager...
05:38:13,741 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MockServer] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5702, alive=false}
05:38:13,741 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MockServer] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5701, alive=false}
05:38:13,741 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MembershipManager] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Removing Member [127.0.0.1]:5702 - 1877f0a8-25ad-40ec-8849-972bd1e4f5d0
05:38:13,741 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [ClusterService] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT]
Members {size:1, ver:3} [
Member [127.0.0.1]:5701 - 444fbe20-06b5-4bb7-8ce6-e365781a735f this
]
05:38:13,741 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [TransactionManagerService] hz.sad_engelbart.cached.thread-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Committing/rolling-back live transactions of [127.0.0.1]:5702, UUID: 1877f0a8-25ad-40ec-8849-972bd1e4f5d0
05:38:13,741 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [Node] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Shutting down node engine...
05:38:13,743 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [MigrationManager] hz.sad_engelbart.migration - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Partition balance is ok, no need to repartition.
05:38:13,745 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [NodeExtension] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Destroying node NodeExtension.
05:38:13,745 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [Node] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Hazelcast Shutdown is completed in 4 ms.
05:38:13,745 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [LifecycleService] main - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5702 is SHUTDOWN
05:38:13,745 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [LifecycleService] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5701 is SHUTTING_DOWN
05:38:13,745 WARN |asyncInvocation_and_syncBackups_and_asyncBackups| - [Node] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Terminating forcefully...
05:38:13,745 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [Node] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Shutting down connection manager...
05:38:13,745 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [Node] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Shutting down node engine...
05:38:13,748 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [NodeExtension] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Destroying node NodeExtension.
05:38:13,748 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [Node] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Hazelcast Shutdown is completed in 3 ms.
05:38:13,748 INFO |asyncInvocation_and_syncBackups_and_asyncBackups| - [LifecycleService] main - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5701 is SHUTDOWN
BuildInfo right after asyncInvocation_and_syncBackups_and_asyncBackups(com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulatorStressTest): BuildInfo{version='5.0-SNAPSHOT', build='20210526', buildNumber=20210526, revision=ee2404a, enterprise=false, serializationVersion=1, jet=JetBuildInfo{version='5.0-SNAPSHOT', build='20210526', revision='ee2404a'}}
Hiccups measured while running test 'asyncInvocation_and_syncBackups_and_asyncBackups(com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulatorStressTest):'
05:37:10, accumulated pauses: 43 ms, max pause: 6 ms, pauses over 1000 ms: 0
05:37:15, accumulated pauses: 29 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:37:20, accumulated pauses: 29 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:37:25, accumulated pauses: 28 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:37:30, accumulated pauses: 31 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:37:35, accumulated pauses: 29 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:37:40, accumulated pauses: 28 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:37:45, accumulated pauses: 30 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:37:50, accumulated pauses: 30 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:37:55, accumulated pauses: 31 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:38:00, accumulated pauses: 35 ms, max pause: 1 ms, pauses over 1000 ms: 0
05:38:05, accumulated pauses: 27 ms, max pause: 0 ms, pauses over 1000 ms: 0
05:38:10, accumulated pauses: 519 ms, max pause: 498 ms, pauses over 1000 ms: 0
```
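For context on the `HazelcastOverloadException` above: the stack traces point at `CallIdSequenceWithBackpressure`, where every invocation must acquire a call ID before it can start, and the sequence refuses new IDs while `maxConcurrentInvocations` (22 here) are still in flight, backing off until `backoffTimeout` (60000 ms) elapses and then throwing. The following is a simplified, hypothetical model of that mechanism — `BoundedCallIdSequence` is not a real Hazelcast class, just a sketch of the head/tail bookkeeping that produces the "Timed out trying to acquire another call ID" failure seen in this log:

```java
// Hypothetical simplified model of call-ID backpressure: next() hands out
// a call ID only while fewer than maxConcurrentInvocations are in flight;
// otherwise it backs off until backoffTimeoutMs and then throws, which is
// the failure mode reported in the stack traces above.
final class BoundedCallIdSequence {
    private final int maxConcurrentInvocations;
    private final long backoffTimeoutMs;
    private long head; // last call ID handed out
    private long tail; // last call ID completed

    BoundedCallIdSequence(int maxConcurrentInvocations, long backoffTimeoutMs) {
        this.maxConcurrentInvocations = maxConcurrentInvocations;
        this.backoffTimeoutMs = backoffTimeoutMs;
    }

    synchronized long next() {
        long deadline = System.currentTimeMillis() + backoffTimeoutMs;
        while (head - tail >= maxConcurrentInvocations) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                throw new IllegalStateException(
                        "Timed out trying to acquire another call ID."
                                + " maxConcurrentInvocations = " + maxConcurrentInvocations);
            }
            try {
                wait(Math.min(remaining, 10)); // back off until a slot frees up
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return ++head;
    }

    synchronized void complete() {
        tail++;       // an in-flight invocation (or its backup ack) finished
        notifyAll();  // wake threads blocked in next()
    }
}
```

In the test above the stress threads fire async invocations whose backups never acknowledge in time, so `tail` effectively stops advancing, the 22 slots stay occupied, and every later `next()` call — including the internal anti-entropy operations — times out after 60 s with this exception.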
metricsconfig collectionfrequencyseconds info asyncinvocation and syncbackups and asyncbackups main o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o oo o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o info asyncinvocation and syncbackups and asyncbackups main copyright c hazelcast inc all rights reserved info asyncinvocation and syncbackups and asyncbackups main hazelcast platform snapshot starting at info asyncinvocation and syncbackups and asyncbackups main cluster name dev info asyncinvocation and syncbackups and asyncbackups main collecting debug metrics and sending to diagnostics is enabled info asyncinvocation and syncbackups and asyncbackups main backpressure is enabled maxconcurrentinvocations syncwindow warn asyncinvocation and syncbackups and asyncbackups main cp subsystem is not enabled cp data structures will operate in unsafe mode please note that unsafe mode will not provide strong consistency guarantees info asyncinvocation and syncbackups and asyncbackups main setting number of cooperative threads and default parallelism to info asyncinvocation and syncbackups and asyncbackups main diagnostics disabled to enable add dhazelcast diagnostics enabled true to the jvm arguments info asyncinvocation and syncbackups and asyncbackups main is starting info asyncinvocation and syncbackups and asyncbackups main created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info asyncinvocation and syncbackups and asyncbackups hz sad engelbart priority generic operation thread created connection to endpoint connection mockconnection localendpoint remoteendpoint alive true info asyncinvocation and syncbackups and asyncbackups hz sad engelbart priority generic operation thread members size ver member this member debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread not starting jobs because partitions are not yet initialized 
debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread not starting jobs because partitions are not yet initialized info asyncinvocation and syncbackups and asyncbackups hz crazy engelbart generic operation thread jet extension is enabled after the cluster version upgrade info asyncinvocation and syncbackups and asyncbackups hz crazy engelbart generic operation thread members size ver member member this info asyncinvocation and syncbackups and asyncbackups main jet extension is enabled info asyncinvocation and syncbackups and asyncbackups main is started info asyncinvocation and syncbackups and asyncbackups hz sad engelbart generic operation thread initializing cluster partition table arrangement info asyncinvocation and syncbackups and asyncbackups stressthread stressthread starting debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread not starting jobs because partition replication is not in safe state but in replica not sync debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread not starting jobs because partition replication is not in safe state but in replica not sync warn asyncinvocation and syncbackups and asyncbackups hz sad engelbart slowoperationdetectorthread slow operation detected com hazelcast internal partition impl checkpartitionreplicaversiontask hint you can enable the logging of stack traces with the following system property dhazelcast slow operation detector stacktrace logging enabled warn asyncinvocation and syncbackups and asyncbackups hz sad engelbart slowoperationdetectorthread slow operation detected com hazelcast internal partition impl checkpartitionreplicaversiontask invocations warn asyncinvocation and syncbackups and asyncbackups hz sad engelbart slowoperationdetectorthread slow operation detected com hazelcast internal partition impl checkpartitionreplicaversiontask invocations warn asyncinvocation and syncbackups and asyncbackups hz sad 
engelbart slowoperationdetectorthread slow operation detected com hazelcast internal partition impl checkpartitionreplicaversiontask invocations warn asyncinvocation and syncbackups and asyncbackups hz sad engelbart slowoperationdetectorthread slow operation detected com hazelcast internal partition impl checkpartitionreplicaversiontask invocations debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread not starting jobs because partition replication is not in safe state but in replica not sync debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread not starting jobs because partition replication is not in safe state but in replica not sync debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread not starting jobs because partition replication is not in safe state but in replica not sync debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread not starting jobs because partition replication is not in safe state but in replica not sync debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread not starting jobs because partition replication is not in safe state but in replica not sync debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread not starting jobs because partition replication is not in safe state but in replica not sync debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread not starting jobs because partition replication is not in safe state but in replica not sync debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread not starting jobs because partition replication is not in safe state but in replica not sync debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread not starting jobs because partition replication is not in safe state but in replica not sync debug asyncinvocation and syncbackups 
and asyncbackups hz sad engelbart cached thread not starting jobs because partition replication is not in safe state but in replica not sync debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread not starting jobs because partition replication is not in safe state but in replica not sync debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread not starting jobs because partition replication is not in safe state but in replica not sync warn asyncinvocation and syncbackups and asyncbackups stressthread stressthread completed with failure com hazelcast core hazelcastoverloadexception failed to start invocation due to overload invocation op com hazelcast spi impl operationservice impl backpressureregulatorstresstest dummyoperation servicename null identityhash partitionid replicaindex callid invocationtime waittimeout calltimeout tenantcontrol com hazelcast spi impl tenantcontrol nooptenantcontrol trycount trypausemillis invokecount calltimeoutmillis firstinvocationtimems firstinvocationtime lastheartbeatmillis lastheartbeattime target pendingresponse void backupsacksexpected backupsacksreceived connection null at com hazelcast spi impl operationservice impl invocationregistry register invocationregistry java at com hazelcast spi impl operationservice impl invocation doinvoke invocation java at com hazelcast spi impl operationservice impl invocation invocation java at com hazelcast spi impl operationservice impl invocation invoke invocation java at com hazelcast spi impl operationservice impl operationserviceimpl invokeonpartition operationserviceimpl java at com hazelcast spi impl operationservice impl backpressureregulatorstresstest stressthread asyncinvoke backpressureregulatorstresstest java at com hazelcast spi impl operationservice impl backpressureregulatorstresstest stressthread dorun backpressureregulatorstresstest java at com hazelcast test testthread run testthread java caused by com hazelcast core 
hazelcastoverloadexception timed out trying to acquire another call id maxconcurrentinvocations backofftimeout msecs elapsed msecs at com hazelcast spi impl sequence callidsequencewithbackpressure handlenospaceleft callidsequencewithbackpressure java at com hazelcast spi impl sequence abstractcallidsequence next abstractcallidsequence java at com hazelcast spi impl operationservice impl invocationregistry register invocationregistry java more error asyncinvocation and syncbackups and asyncbackups hz sad engelbart partition operation thread failed to process com hazelcast internal partition impl checkpartitionreplicaversiontask on hz sad engelbart partition operation thread com hazelcast core hazelcastoverloadexception failed to start invocation due to overload invocation op com hazelcast internal partition operation partitionbackupreplicaantientropyoperation servicename hz core partitionservice identityhash partitionid replicaindex callid invocationtime waittimeout calltimeout tenantcontrol com hazelcast spi impl tenantcontrol nooptenantcontrol versions trycount trypausemillis invokecount calltimeoutmillis firstinvocationtimems firstinvocationtime lastheartbeatmillis lastheartbeattime target pendingresponse void backupsacksexpected backupsacksreceived connection null at com hazelcast spi impl operationservice impl invocationregistry register invocationregistry java at com hazelcast spi impl operationservice impl invocation doinvoke invocation java at com hazelcast spi impl operationservice impl invocation invocation java at com hazelcast spi impl operationservice impl invocation invoke invocation java at com hazelcast spi impl operationservice impl invocationbuilderimpl invoke invocationbuilderimpl java at com hazelcast internal partition impl abstractpartitionprimaryreplicaantientropytask invokepartitionbackupreplicaantientropyop abstractpartitionprimaryreplicaantientropytask java at com hazelcast internal partition impl checkpartitionreplicaversiontask run 
checkpartitionreplicaversiontask java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread executerun operationthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java caused by com hazelcast core hazelcastoverloadexception timed out trying to acquire another call id maxconcurrentinvocations backofftimeout msecs elapsed msecs at com hazelcast spi impl sequence callidsequencewithbackpressure handlenospaceleft callidsequencewithbackpressure java at com hazelcast spi impl sequence abstractcallidsequence next abstractcallidsequence java at com hazelcast spi impl operationservice impl invocationregistry register invocationregistry java more error asyncinvocation and syncbackups and asyncbackups hz sad engelbart partition operation thread failed to process com hazelcast internal partition impl checkpartitionreplicaversiontask on hz sad engelbart partition operation thread com hazelcast core hazelcastoverloadexception failed to start invocation due to overload invocation op com hazelcast internal partition operation partitionbackupreplicaantientropyoperation servicename hz core partitionservice identityhash partitionid replicaindex callid invocationtime waittimeout calltimeout tenantcontrol com hazelcast spi impl tenantcontrol nooptenantcontrol versions trycount trypausemillis invokecount calltimeoutmillis firstinvocationtimems firstinvocationtime lastheartbeatmillis lastheartbeattime target pendingresponse void backupsacksexpected backupsacksreceived connection null at com hazelcast spi impl operationservice impl invocationregistry register invocationregistry java at com hazelcast spi impl operationservice impl invocation doinvoke 
invocation java at com hazelcast spi impl operationservice impl invocation invocation java at com hazelcast spi impl operationservice impl invocation invoke invocation java at com hazelcast spi impl operationservice impl invocationbuilderimpl invoke invocationbuilderimpl java at com hazelcast internal partition impl abstractpartitionprimaryreplicaantientropytask invokepartitionbackupreplicaantientropyop abstractpartitionprimaryreplicaantientropytask java at com hazelcast internal partition impl checkpartitionreplicaversiontask run checkpartitionreplicaversiontask java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread executerun operationthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java caused by com hazelcast core hazelcastoverloadexception timed out trying to acquire another call id maxconcurrentinvocations backofftimeout msecs elapsed msecs at com hazelcast spi impl sequence callidsequencewithbackpressure handlenospaceleft callidsequencewithbackpressure java at com hazelcast spi impl sequence abstractcallidsequence next abstractcallidsequence java at com hazelcast spi impl operationservice impl invocationregistry register invocationregistry java more error asyncinvocation and syncbackups and asyncbackups hz sad engelbart partition operation thread failed to process com hazelcast internal partition impl checkpartitionreplicaversiontask on hz sad engelbart partition operation thread com hazelcast core hazelcastoverloadexception failed to start invocation due to overload invocation op com hazelcast internal partition operation partitionbackupreplicaantientropyoperation servicename hz core partitionservice 
identityhash partitionid replicaindex callid invocationtime waittimeout calltimeout tenantcontrol com hazelcast spi impl tenantcontrol nooptenantcontrol versions trycount trypausemillis invokecount calltimeoutmillis firstinvocationtimems firstinvocationtime lastheartbeatmillis lastheartbeattime target pendingresponse void backupsacksexpected backupsacksreceived connection null at com hazelcast spi impl operationservice impl invocationregistry register invocationregistry java at com hazelcast spi impl operationservice impl invocation doinvoke invocation java at com hazelcast spi impl operationservice impl invocation invocation java at com hazelcast spi impl operationservice impl invocation invoke invocation java at com hazelcast spi impl operationservice impl invocationbuilderimpl invoke invocationbuilderimpl java at com hazelcast internal partition impl abstractpartitionprimaryreplicaantientropytask invokepartitionbackupreplicaantientropyop abstractpartitionprimaryreplicaantientropytask java at com hazelcast internal partition impl checkpartitionreplicaversiontask run checkpartitionreplicaversiontask java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread executerun operationthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java caused by com hazelcast core hazelcastoverloadexception timed out trying to acquire another call id maxconcurrentinvocations backofftimeout msecs elapsed msecs at com hazelcast spi impl sequence callidsequencewithbackpressure handlenospaceleft callidsequencewithbackpressure java at com hazelcast spi impl sequence abstractcallidsequence next abstractcallidsequence java at com hazelcast spi impl 
operationservice impl invocationregistry register invocationregistry java more error asyncinvocation and syncbackups and asyncbackups hz sad engelbart partition operation thread failed to process com hazelcast internal partition impl checkpartitionreplicaversiontask on hz sad engelbart partition operation thread com hazelcast core hazelcastoverloadexception failed to start invocation due to overload invocation op com hazelcast internal partition operation partitionbackupreplicaantientropyoperation servicename hz core partitionservice identityhash partitionid replicaindex callid invocationtime waittimeout calltimeout tenantcontrol com hazelcast spi impl tenantcontrol nooptenantcontrol versions trycount trypausemillis invokecount calltimeoutmillis firstinvocationtimems firstinvocationtime lastheartbeatmillis lastheartbeattime target pendingresponse void backupsacksexpected backupsacksreceived connection null at com hazelcast spi impl operationservice impl invocationregistry register invocationregistry java at com hazelcast spi impl operationservice impl invocation doinvoke invocation java at com hazelcast spi impl operationservice impl invocation invocation java at com hazelcast spi impl operationservice impl invocation invoke invocation java at com hazelcast spi impl operationservice impl invocationbuilderimpl invoke invocationbuilderimpl java at com hazelcast internal partition impl abstractpartitionprimaryreplicaantientropytask invokepartitionbackupreplicaantientropyop abstractpartitionprimaryreplicaantientropytask java at com hazelcast internal partition impl checkpartitionreplicaversiontask run checkpartitionreplicaversiontask java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl 
operationthread executerun operationthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java caused by com hazelcast core hazelcastoverloadexception timed out trying to acquire another call id maxconcurrentinvocations backofftimeout msecs elapsed msecs at com hazelcast spi impl sequence callidsequencewithbackpressure handlenospaceleft callidsequencewithbackpressure java at com hazelcast spi impl sequence abstractcallidsequence next abstractcallidsequence java at com hazelcast spi impl operationservice impl invocationregistry register invocationregistry java more error asyncinvocation and syncbackups and asyncbackups hz sad engelbart partition operation thread failed to process com hazelcast internal partition impl checkpartitionreplicaversiontask on hz sad engelbart partition operation thread com hazelcast core hazelcastoverloadexception failed to start invocation due to overload invocation op com hazelcast internal partition operation partitionbackupreplicaantientropyoperation servicename hz core partitionservice identityhash partitionid replicaindex callid invocationtime waittimeout calltimeout tenantcontrol com hazelcast spi impl tenantcontrol nooptenantcontrol versions trycount trypausemillis invokecount calltimeoutmillis firstinvocationtimems firstinvocationtime lastheartbeatmillis lastheartbeattime target pendingresponse void backupsacksexpected backupsacksreceived connection null at com hazelcast spi impl operationservice impl invocationregistry register invocationregistry java at com hazelcast spi impl operationservice impl invocation doinvoke invocation java at com hazelcast spi impl operationservice impl invocation invocation java at com hazelcast spi impl operationservice impl invocation invoke invocation java at com hazelcast spi impl operationservice impl invocationbuilderimpl invoke invocationbuilderimpl java at com hazelcast internal partition impl abstractpartitionprimaryreplicaantientropytask 
invokepartitionbackupreplicaantientropyop abstractpartitionprimaryreplicaantientropytask java at com hazelcast internal partition impl checkpartitionreplicaversiontask run checkpartitionreplicaversiontask java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread executerun operationthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java caused by com hazelcast core hazelcastoverloadexception timed out trying to acquire another call id maxconcurrentinvocations backofftimeout msecs elapsed msecs at com hazelcast spi impl sequence callidsequencewithbackpressure handlenospaceleft callidsequencewithbackpressure java at com hazelcast spi impl sequence abstractcallidsequence next abstractcallidsequence java at com hazelcast spi impl operationservice impl invocationregistry register invocationregistry java more debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread not starting jobs because partition replication is not in safe state but in replica not sync debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread not starting jobs because partition replication is not in safe state but in replica not sync info asyncinvocation and syncbackups and asyncbackups hz sad engelbart invocationmonitorthread invocations timeouts backup timeouts debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread job cleanup took debug asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread job cleanup took info asyncinvocation and syncbackups and asyncbackups main is shutting down warn asyncinvocation and syncbackups and asyncbackups main 
terminating forcefully info asyncinvocation and syncbackups and asyncbackups main shutting down connection manager info asyncinvocation and syncbackups and asyncbackups main removed connection to endpoint connection mockconnection localendpoint remoteendpoint alive false info asyncinvocation and syncbackups and asyncbackups main removed connection to endpoint connection mockconnection localendpoint remoteendpoint alive false info asyncinvocation and syncbackups and asyncbackups main removing member info asyncinvocation and syncbackups and asyncbackups main members size ver member this info asyncinvocation and syncbackups and asyncbackups hz sad engelbart cached thread committing rolling back live transactions of uuid info asyncinvocation and syncbackups and asyncbackups main shutting down node engine info asyncinvocation and syncbackups and asyncbackups hz sad engelbart migration partition balance is ok no need to repartition info asyncinvocation and syncbackups and asyncbackups main destroying node nodeextension info asyncinvocation and syncbackups and asyncbackups main hazelcast shutdown is completed in ms info asyncinvocation and syncbackups and asyncbackups main is shutdown info asyncinvocation and syncbackups and asyncbackups main is shutting down warn asyncinvocation and syncbackups and asyncbackups main terminating forcefully info asyncinvocation and syncbackups and asyncbackups main shutting down connection manager info asyncinvocation and syncbackups and asyncbackups main shutting down node engine info asyncinvocation and syncbackups and asyncbackups main destroying node nodeextension info asyncinvocation and syncbackups and asyncbackups main hazelcast shutdown is completed in ms info asyncinvocation and syncbackups and asyncbackups main is shutdown buildinfo right after asyncinvocation and syncbackups and asyncbackups com hazelcast spi impl operationservice impl backpressureregulatorstresstest buildinfo version snapshot build buildnumber revision 
enterprise false serializationversion jet jetbuildinfo version snapshot build revision hiccups measured while running test asyncinvocation and syncbackups and asyncbackups com hazelcast spi impl operationservice impl backpressureregulatorstresstest accumulated pauses ms max pause ms pauses over ms accumulated pauses ms max pause ms pauses over ms accumulated pauses ms max pause ms pauses over ms accumulated pauses ms max pause ms pauses over ms accumulated pauses ms max pause ms pauses over ms accumulated pauses ms max pause ms pauses over ms accumulated pauses ms max pause ms pauses over ms accumulated pauses ms max pause ms pauses over ms accumulated pauses ms max pause ms pauses over ms accumulated pauses ms max pause ms pauses over ms accumulated pauses ms max pause ms pauses over ms accumulated pauses ms max pause ms pauses over ms accumulated pauses ms max pause ms pauses over ms | 0 |
118,368 | 15,284,050,526 | IssuesEvent | 2021-02-23 11:41:39 | mysociety/alaveteli | https://api.github.com/repos/mysociety/alaveteli | closed | Figure out how to link from wizard to next page when users land on /help/unhappy directly. | t:design | > Thinking about how to link the wizard to the new followup page. If the user lands on the wizard directly then where is the "Help me send a reply" button going to link - we won't know which request is to follow up on?
>
> – https://github.com/mysociety/alaveteli/issues/5911#issuecomment-759410583 | 1.0 | Figure out how to link from wizard to next page when users land on /help/unhappy directly. - > Thinking about how to link the wizard to the new followup page. If the user lands on the wizard directly then where is the "Help me send a reply" button going to link - we won't know which request is to follow up on?
>
> – https://github.com/mysociety/alaveteli/issues/5911#issuecomment-759410583 | non_main | figure out how to link from wizard to next page when users land on help unhappy directly thinking about how to link the wizard to the new followup page if the user lands on the wizard directly then where is the help me send a reply button going to link we won t know which request is to follow up on – | 0 |
2,882 | 10,319,570,103 | IssuesEvent | 2019-08-30 17:54:05 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Backdrop Contributed Project Group Application | Maintainer application | I would like to port a few of my modules to backdrop.
Thanks
| True | Backdrop Contributed Project Group Application - I would like to port a few of my modules to backdrop.
Thanks
| main | backdrop contributed project group application i would like to port a few of my modules to backdrop thanks | 1 |
2,015 | 6,743,780,703 | IssuesEvent | 2017-10-20 13:28:38 | opencaching/opencaching-pl | https://api.github.com/repos/opencaching/opencaching-pl | opened | Horizontal scaling | General_Discussion Maintainability Server_Administration Type_Enhancement x_Help wanted | Can a website running OCPL code be scaled horizontally?
- If YES, what aspects must be taken into consideration and what specific configurations should be set?
- If NO, what are the limiting factors and what can be done to overcome them? | True | Horizontal scaling - Can a website running OCPL code be scaled horizontally?
- If YES, what aspects must be taken into consideration and what specific configurations should be set?
- If NO, what are the limiting factors and what can be done to overcome them? | main | horizontal scaling can a website running ocpl code be scaled horizontally if yes what aspects must be taken into consideration and what specific configurations should be set if no what are the limiting factors and what can be done to overcome them | 1 |
413 | 3,479,973,185 | IssuesEvent | 2015-12-29 01:17:15 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | Enable Travis caching | awaiting maintainer feedback travis | As seen on https://docs.travis-ci.com/user/caching#Caching-directories-(Bundler%2C-dependencies)
Looks like the new container-based infrastructure has to be used, which means no sudo.
Not sure if it's feasible, but if so, would speed up Travis checks considerably. | True | Enable Travis caching - As seen on https://docs.travis-ci.com/user/caching#Caching-directories-(Bundler%2C-dependencies)
Looks like the new container-based infrastructure has to be used, which means no sudo.
Not sure if it's feasible, but if so, would speed up Travis checks considerably. | main | enable travis caching as seen on looks like the new container based infrastructure has to be used which means no sudo not sure if it s feasible but if so would speed up travis checks considerably | 1 |
419,547 | 12,224,638,121 | IssuesEvent | 2020-05-02 23:52:08 | TheTofuShop/Menu | https://api.github.com/repos/TheTofuShop/Menu | closed | Allow time scale to be changed | feature-request priority-low | ⭐ **What feature do you want to be added?**
Setting for staff members to change the time scale.
🔎 **Why do you want this feature to be added?**
There are some players that like to do runs during the night, so I feel that they would like to extend the duration of the night in-game.
❓ **Is there something else that we need to know?**
No.
| 1.0 | Allow time scale to be changed - ⭐ **What feature do you want to be added?**
Setting for staff members to change the time scale.
🔎 **Why do you want this feature to be added?**
There are some players that like to do runs during the night, so I feel that they would like to extend the duration of the night in-game.
❓ **Is there something else that we need to know?**
No.
| non_main | allow time scale to be changed ⭐ what feature do you want to be added setting for staff members to change the time scale 🔎 why do you want this feature to be added there are some players that like to do runs during the night so i feel that they would like to extend the duration of the night in game ❓ is there something else that we need to know no | 0 |
863 | 4,533,209,123 | IssuesEvent | 2016-09-08 10:42:41 | KazDragon/terminalpp | https://api.github.com/repos/KazDragon/terminalpp | closed | Add sanitizers to CI | Continuous Integration in progress Maintainability | Three sanitizers are available from Clang:
* address
* undefined behaviour
* thread | True | Add sanitizers to CI - Three sanitizers are available from Clang:
* address
* undefined behaviour
* thread | main | add sanitizers to ci three sanitizers are available from clang address undefined behaviour thread | 1 |
22,912 | 15,646,541,985 | IssuesEvent | 2021-03-23 01:10:03 | opencog/link-grammar | https://api.github.com/repos/opencog/link-grammar | closed | Dialect support! | enhancement infrastructure | Below is a sketch of how to add dialect support, and why its a good idea.
Currently, {} is used to indicate optional connectors: for example: A+ & {B- & C+} indicates that (B- & C+) is optional.
Lets give options names! These names will be names of dialects! So, for example: A+ & {B- & C+}{irish} means that (B- & C+) is optional, but only if the "irish" dialect is enabled; otherwise, it is never allowed.
In my imagination, this solve zillions of problems. These include:
A) the bad-spelling problem: create a kant-spel dialect, that merges together the disjuncts for they're there and their (and throws in thier, for good measure)
B) enhanced support for ... irish-english, black-american-english, australian-english, hillbilly-basilect, archaic 19th-century English, twitterese, newspaper-headlines
C) Automatic detection of dialects! So, for example, if a sentence does not parse normally, but does parse after enabling some dialect, we can guess that it must be that dialect.
D) post-parse parse-ranking. That is, parse a sentence with all dialect enabled, but then fiddle with the costs associated with each particular dialect. Thus, to turn off the kant-spel dialect, one simply gives those connectors a very high cost, and they would be raked last.
| 1.0 | Dialect support! - Below is a sketch of how to add dialect support, and why its a good idea.
Currently, {} is used to indicate optional connectors: for example: A+ & {B- & C+} indicates that (B- & C+) is optional.
Lets give options names! These names will be names of dialects! So, for example: A+ & {B- & C+}{irish} means that (B- & C+) is optional, but only if the "irish" dialect is enabled; otherwise, it is never allowed.
In my imagination, this solve zillions of problems. These include:
A) the bad-spelling problem: create a kant-spel dialect, that merges together the disjuncts for they're there and their (and throws in thier, for good measure)
B) enhanced support for ... irish-english, black-american-english, australian-english, hillbilly-basilect, archaic 19th-century English, twitterese, newspaper-headlines
C) Automatic detection of dialects! So, for example, if a sentence does not parse normally, but does parse after enabling some dialect, we can guess that it must be that dialect.
D) post-parse parse-ranking. That is, parse a sentence with all dialect enabled, but then fiddle with the costs associated with each particular dialect. Thus, to turn off the kant-spel dialect, one simply gives those connectors a very high cost, and they would be raked last.
| non_main | dialect support below is a sketch of how to add dialect support and why its a good idea currently is used to indicate optional connectors for example a b c indicates that b c is optional lets give options names these names will be names of dialects so for example a b c irish means that b c is optional but only if the irish dialect is enabled otherwise it is never allowed in my imagination this solve zillions of problems these include a the bad spelling problem create a kant spel dialect that merges together the disjuncts for they re there and their and throws in thier for good measure b enhanced support for irish english black american english australian english hillbilly basilect archaic century english twitterese newspaper headlines c automatic detection of dialects so for example if a sentence does not parse normally but does parse after enabling some dialect we can guess that it must be that dialect d post parse parse ranking that is parse a sentence with all dialect enabled but then fiddle with the costs associated with each particular dialect thus to turn off the kant spel dialect one simply gives those connectors a very high cost and they would be raked last | 0 |
254,140 | 21,730,939,856 | IssuesEvent | 2022-05-11 11:57:39 | wazuh/wazuh-qa | https://api.github.com/repos/wazuh/wazuh-qa | closed | E2E test: Auditing commands run by user | team/qa test/system subteam/qa-hurricane type/testing-development | # Context
[This](https://github.com/wazuh/wazuh/wiki/Proof-of-concept-guide#auditing-commands-run-by-user) use case needs to be tested automatically to speed up the testing process.
# Tasks
Following the [guide](https://github.com/wazuh/wazuh/wiki/Proof-of-concept-guide#auditing-commands-run-by-user), is necessary to achieve these tasks to successfully test the scenario:
## In the agent:
- [x] Check that `auditd` daemon is running
- [x] Check the `auditd` configuration
- [x] Configure `audit` to audit commands run by the user
- [x] Generate an alert as the previously configured user
### Test:
- [x] Test that the alert is generated as is expected
## DoD
- [x] Python codebase satisfies PEP-8 style style guide. `pycodestyle --max-line-length=120 --show-source --show-pep8 file.py`.
- [x] QA-Docs executed from branch `1864-qa-docs-fixes`
- [x] Prove that the tests fail when they have to
- [x] Prove that the tests pass when they have to
- [x] 3 local executions (Generate the report)
- [ ] 3 Jenkins executions (Link the job in Jenkins) | 2.0 | E2E test: Auditing commands run by user - # Context
[This](https://github.com/wazuh/wazuh/wiki/Proof-of-concept-guide#auditing-commands-run-by-user) use case needs to be tested automatically to speed up the testing process.
# Tasks
Following the [guide](https://github.com/wazuh/wazuh/wiki/Proof-of-concept-guide#auditing-commands-run-by-user), is necessary to achieve these tasks to successfully test the scenario:
## In the agent:
- [x] Check that `auditd` daemon is running
- [x] Check the `auditd` configuration
- [x] Configure `audit` to audit commands run by the user
- [x] Generate an alert as the previously configured user
### Test:
- [x] Test that the alert is generated as is expected
## DoD
- [x] Python codebase satisfies PEP-8 style style guide. `pycodestyle --max-line-length=120 --show-source --show-pep8 file.py`.
- [x] QA-Docs executed from branch `1864-qa-docs-fixes`
- [x] Prove that the tests fail when they have to
- [x] Prove that the tests pass when they have to
- [x] 3 local executions (Generate the report)
- [ ] 3 Jenkins executions (Link the job in Jenkins) | non_main | test auditing commands run by user context use case needs to be tested automatically to speed up the testing process tasks following the is necessary to achieve these tasks to successfully test the scenario in the agent check that auditd daemon is running check the auditd configuration configure audit to audit commands run by the user generate an alert as the previously configured user test test that the alert is generated as is expected dod python codebase satisfies pep style style guide pycodestyle max line length show source show file py qa docs executed from branch qa docs fixes prove that the tests fail when they have to prove that the tests pass when they have to local executions generate the report jenkins executions link the job in jenkins | 0 |
181,714 | 30,728,358,172 | IssuesEvent | 2023-07-27 21:52:07 | 18F/TLC-crew | https://api.github.com/repos/18F/TLC-crew | closed | Update 18F Federalist Page | design content engineering | ### A description of the work
We have this case study about Federalist on the 18F website. But, there are few fixes to improve the accuracy and experience of this page:
Example of alignment issue:
<img width="1160" alt="Screen Shot 2023-04-17 at 5 05 20 PM" src="https://user-images.githubusercontent.com/2374206/232610937-5be3981f-931b-42e3-8f81-37a6fe650c4b.png">
### Point of contact on this issue
Who can we follow-up with if we have questions?
You can contact me, @cmajel. I can help connect with website contacts as needed.
### Reproduction steps (if necessary)
Be as specific as possible
**Billable?**
- [ ] Yes
- [x] No
If yes, tock code:
**Skills needed**
A designer with front-end experience could contribute here too.
- [ ] Any human
- [x] Design
- [x] Content
- [x] Engineering
- [ ] Acquisition
- [ ] Product
- [ ] Other
**Timeline**
Does this need to happen in the next two weeks?
- [ ] Yes
- [x] No
How much time do you anticipate this work taking?
### Acceptance Criteria
- [x] Visitors to the 18F Federalist page know that 18F Federalist is now cloud.gov pages and is not maintained by 18F.
### Tasks
- [x] Update content to note that Federalist is now [Cloud.gov](http://cloud.gov/) pages, and not maintained by 18F
- [x] Update sidebar labels appropriately.
_- Fix section and aside alignment to match other page elements (this is an issue across [case study content](https://18f.gsa.gov/what-we-deliver/forest-service/)). This appears to affect mostly screens wider than 1030px. --> Note, this has been moved to a [separate issue](https://github.com/18F/TLC-crew/issues/183)._ | 1.0 | Update 18F Federalist Page - ### A description of the work
We have this case study about Federalist on the 18F website. But, there are few fixes to improve the accuracy and experience of this page:
Example of alignment issue:
<img width="1160" alt="Screen Shot 2023-04-17 at 5 05 20 PM" src="https://user-images.githubusercontent.com/2374206/232610937-5be3981f-931b-42e3-8f81-37a6fe650c4b.png">
### Point of contact on this issue
Who can we follow-up with if we have questions?
You can contact me, @cmajel. I can help connect with website contacts as needed.
### Reproduction steps (if necessary)
Be as specific as possible
**Billable?**
- [ ] Yes
- [x] No
If yes, tock code:
**Skills needed**
A designer with front-end experience could contribute here too.
- [ ] Any human
- [x] Design
- [x] Content
- [x] Engineering
- [ ] Acquisition
- [ ] Product
- [ ] Other
**Timeline**
Does this need to happen in the next two weeks?
- [ ] Yes
- [x] No
How much time do you anticipate this work taking?
### Acceptance Criteria
- [x] Visitors to the 18F Federalist page know that 18F Federalist is now cloud.gov pages and is not maintained by 18F.
### Tasks
- [x] Update content to note that Federalist is now [Cloud.gov](http://cloud.gov/) pages, and not maintained by 18F
- [x] Update sidebar labels appropriately.
_- Fix section and aside alignment to match other page elements (this is an issue across [case study content](https://18f.gsa.gov/what-we-deliver/forest-service/)). This appears to affect mostly screens wider than 1030px. --> Note, this has been moved to a [separate issue](https://github.com/18F/TLC-crew/issues/183)._ | non_main | update federalist page a description of the work we have this case study about federalist on the website but there are few fixes to improve the accuracy and experience of this page example of alignment issue img width alt screen shot at pm src point of contact on this issue who can we follow up with if we have questions you can contact me cmajel i can help connect with website contacts as needed reproduction steps if necessary be as specific as possible billable yes no if yes tock code skills needed a designer with front end experience could contribute here too any human design content engineering acquisition product other timeline does this need to happen in the next two weeks yes no how much time do you anticipate this work taking acceptance criteria visitors to the federalist page know that federalist is now cloud gov pages and is not maintained by tasks update content to note that federalist is now pages and not maintained by update sidebar labels appropriately fix section and aside alignment to match other page elements this is an issue across this appears to affect mostly screens wider than note this has been moved to a | 0 |
5,261 | 26,618,987,963 | IssuesEvent | 2023-01-24 09:46:22 | mlocati/docker-php-extension-installer | https://api.github.com/repos/mlocati/docker-php-extension-installer | closed | Support solr with PHP 8.1 | waiting for external maintainer | Their master branch should be ok: we just need that they publish a new version (keep an eye on https://pecl.php.net/package-changelog.php?package=solr for a version greater than 2.5.1) | True | Support solr with PHP 8.1 - Their master branch should be ok: we just need that they publish a new version (keep an eye on https://pecl.php.net/package-changelog.php?package=solr for a version greater than 2.5.1) | main | support solr with php their master branch should be ok we just need that they publish a new version keep an eye on for a version greater than | 1 |
639,349 | 20,751,678,440 | IssuesEvent | 2022-03-15 08:17:28 | returntocorp/semgrep | https://api.github.com/repos/returntocorp/semgrep | opened | Upgrade Dockerfile grammar | priority:low user:r2c lang:dockerfile | This is a follow-up to https://github.com/returntocorp/semgrep/pull/4813
* the dockerfile grammar has already been extended (https://github.com/returntocorp/ocaml-tree-sitter-semgrep/pull/293) with an ellipsis after CMD/RUN/etc. We should then get rid of the hack that parses `...` using PCRE.
* there are some changes (not sure how many) which require changes here and there in the ocaml boilerplate.
| 1.0 | Upgrade Dockerfile grammar - This is a follow-up to https://github.com/returntocorp/semgrep/pull/4813
* the dockerfile grammar has already been extended (https://github.com/returntocorp/ocaml-tree-sitter-semgrep/pull/293) with an ellipsis after CMD/RUN/etc. We should then get rid of the hack that parses `...` using PCRE.
* there are some changes (not sure how many) which require changes here and there in the ocaml boilerplate.
| non_main | upgrade dockerfile grammar this is a follow up to the dockerfile grammar has already been extended with an ellipsis after cmd run etc we should then get rid of the hack that parses using pcre there are some changes not sure how many which require changes here and there in the ocaml boilerplate | 0 |
2,349 | 8,394,227,590 | IssuesEvent | 2018-10-09 23:31:21 | Homebrew/homebrew-cask | https://api.github.com/repos/Homebrew/homebrew-cask | closed | Make a homebrew/cask-java tap? | awaiting maintainer feedback discussion | Current casks:
- `java`: OpenJDK 11
- `adoptopenjdk` JDK 11
- `oracle-jdk`: OracleJDK 11
- `zulu`: ZuluJDK 10
- `java8`: OracleJDK 8 (versions)
- `java10`: OracleJDK 10 (versions)
- `zulu7`: ZuluJDK 7 (versions)
- `zulu8`: ZuluJDK 8 (versions)
- `zulu9`: ZuluJDK 9 (versions)
Related cask: `java-jdk-javadoc`
Open PRs:
- `sapmachine-jdk` JDK 11
- `adoptopenjdk9 ` JDK 9 (versions)
Forthcoming PRs:
- `adoptopenjdk` 8/10 (versions)
Dumping these all in one tap would make them easier to manage (naming/avoiding duplicates, postflight consistency/avoiding install conflicts, etc.) I'm expecting more variants now that building JDKs is a thing. | True | Make a homebrew/cask-java tap? - Current casks:
- `java`: OpenJDK 11
- `adoptopenjdk` JDK 11
- `oracle-jdk`: OracleJDK 11
- `zulu`: ZuluJDK 10
- `java8`: OracleJDK 8 (versions)
- `java10`: OracleJDK 10 (versions)
- `zulu7`: ZuluJDK 7 (versions)
- `zulu8`: ZuluJDK 8 (versions)
- `zulu9`: ZuluJDK 9 (versions)
Related cask: `java-jdk-javadoc`
Open PRs:
- `sapmachine-jdk` JDK 11
- `adoptopenjdk9 ` JDK 9 (versions)
Forthcoming PRs:
- `adoptopenjdk` 8/10 (versions)
Dumping these all in one tap would make them easier to manage (naming/avoiding duplicates, postflight consistency/avoiding install conflicts, etc.) I'm expecting more variants now that building JDKs is a thing. | main | make a homebrew cask java tap current casks java openjdk adoptopenjdk jdk oracle jdk oraclejdk zulu zulujdk oraclejdk versions oraclejdk versions zulujdk versions zulujdk versions zulujdk versions related cask java jdk javadoc open prs sapmachine jdk jdk jdk versions forthcoming prs adoptopenjdk versions dumping these all in one tap would make them easier to manage naming avoiding duplicates postflight consistency avoiding install conflicts etc i m expecting more variants now that building jdks is a thing | 1 |
134,919 | 30,212,806,770 | IssuesEvent | 2023-07-05 13:49:21 | KDWSS/Java-Demo-2 | https://api.github.com/repos/KDWSS/Java-Demo-2 | opened | Code Security Report: 17 high severity findings, 58 total findings | Mend: code security findings | # Code Security Report
### Scan Metadata
**Latest Scan:** 2023-07-05 01:48pm
**Total Findings:** 58 | **New Findings:** 0 | **Resolved Findings:** 0
**Tested Project Files:** 102
**Detected Programming Languages:** 1 (Java)
<!-- SAST-MANUAL-SCAN-START -->
- [ ] Check this box to manually trigger a scan
<!-- SAST-MANUAL-SCAN-END -->
### Most Relevant Findings
> The below list presents the 10 most relevant findings that need your attention. To view information on the remaining findings, navigate to the [Mend SAST Application](https://saas.mend.io/sast/#/scans/f3cabc86-1bf1-48ec-8350-fe38e8c84911/details).
<table role='table'><thead><tr><th>Severity</th><th>Vulnerability Type</th><th>CWE</th><th>File</th><th>Data Flows</th><th>Date</th></tr></thead><tbody><tr><td><a href='#'><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20></a> High</td><td>SQL Injection</td><td>
[CWE-89](https://cwe.mitre.org/data/definitions/89.html)
</td><td>
[SQLInjectionServlet.java:69](https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L69)
</td><td>3</td><td>2023-07-05 01:49pm</td></tr><tr><td colspan='6'><details><summary>More info</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L64-L69
<details>
<summary>3 Data Flow/s detected</summary></br>
<details>
<summary>View Data Flow 1</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L28
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L28
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L45
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L60
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L69
</details>
<details>
<summary>View Data Flow 2</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L28
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L28
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L45
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L60
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L69
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L69
</details>
<details>
<summary>View Data Flow 3</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L28
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L28
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L39
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L45
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L60
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L69
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/SQLInjectionServlet.java#L69
</details>
</details>
</td></tr></details></td></tr><tr><td><a href='#'><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20></a> High</td><td>Path/Directory Traversal</td><td>
[CWE-22](https://cwe.mitre.org/data/definitions/22.html)
</td><td>
[NullByteInjectionServlet.java:46](https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L46)
</td><td>1</td><td>2023-07-05 01:49pm</td></tr><tr><td colspan='6'><details><summary>More info</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L41-L46
<details>
<summary>1 Data Flow/s detected</summary></br>
<details>
<summary>View Data Flow 1</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L35
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L35
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L40
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L46
</details>
</details>
</td></tr></details></td></tr><tr><td><a href='#'><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20></a> High</td><td>Path/Directory Traversal</td><td>
[CWE-22](https://cwe.mitre.org/data/definitions/22.html)
</td><td>
[UnrestrictedExtensionUploadServlet.java:84](https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L84)
</td><td>1</td><td>2023-07-05 01:49pm</td></tr><tr><td colspan='6'><details><summary>More info</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L79-L84
<details>
<summary>1 Data Flow/s detected</summary></br>
<details>
<summary>View Data Flow 1</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L76
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L84
</details>
</details>
</td></tr></details></td></tr><tr><td><a href='#'><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20></a> High</td><td>Path/Directory Traversal</td><td>
[CWE-22](https://cwe.mitre.org/data/definitions/22.html)
</td><td>
[MailHeaderInjectionServlet.java:133](https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L133)
</td><td>1</td><td>2023-07-05 01:49pm</td></tr><tr><td colspan='6'><details><summary>More info</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L128-L133
<details>
<summary>1 Data Flow/s detected</summary></br>
<details>
<summary>View Data Flow 1</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L125
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L125
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L127
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L133
</details>
</details>
</td></tr></details></td></tr><tr><td><a href='#'><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20></a> High</td><td>Path/Directory Traversal</td><td>
[CWE-22](https://cwe.mitre.org/data/definitions/22.html)
</td><td>
[UnrestrictedSizeUploadServlet.java:84](https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L84)
</td><td>1</td><td>2023-07-05 01:49pm</td></tr><tr><td colspan='6'><details><summary>More info</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L79-L84
<details>
<summary>1 Data Flow/s detected</summary></br>
<details>
<summary>View Data Flow 1</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L70
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L70
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L71
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L84
</details>
</details>
</td></tr></details></td></tr><tr><td><a href='#'><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20></a> High</td><td>File Manipulation</td><td>
[CWE-73](https://cwe.mitre.org/data/definitions/73.html)
</td><td>
[MailHeaderInjectionServlet.java:142](https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L142)
</td><td>1</td><td>2023-07-05 01:49pm</td></tr><tr><td colspan='6'><details><summary>More info</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L137-L142
<details>
<summary>1 Data Flow/s detected</summary></br>
<details>
<summary>View Data Flow 1</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L141
</details>
</details>
</td></tr></details></td></tr><tr><td><a href='#'><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20></a> High</td><td>File Manipulation</td><td>
[CWE-73](https://cwe.mitre.org/data/definitions/73.html)
</td><td>
[MultiPartFileUtils.java:38](https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L38)
</td><td>4</td><td>2023-07-05 01:49pm</td></tr><tr><td colspan='6'><details><summary>More info</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L33-L38
<details>
<summary>4 Data Flow/s detected</summary></br>
<details>
<summary>View Data Flow 1</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L37
</details>
<details>
<summary>View Data Flow 2</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L37
</details>
<details>
<summary>View Data Flow 3</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L37
</details>
[View more Data Flows](https://saas.mend.io/sast/#/scans/f3cabc86-1bf1-48ec-8350-fe38e8c84911/details?vulnId=4f62b182-e897-4384-b29a-afddd3f631e3&filtered=yes)
</details>
</td></tr></details></td></tr><tr><td><a href='#'><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20></a> High</td><td>Code Injection</td><td>
[CWE-94](https://cwe.mitre.org/data/definitions/94.html)
</td><td>
[CodeInjectionServlet.java:65](https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L65)
</td><td>1</td><td>2023-07-05 01:49pm</td></tr><tr><td colspan='6'><details><summary>More info</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L60-L65
<details>
<summary>1 Data Flow/s detected</summary></br>
<details>
<summary>View Data Flow 1</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L25
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L25
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L44
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L45
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L46
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L47
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L61
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L65
</details>
</details>
</td></tr></details></td></tr><tr><td><a href='#'><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20></a> High</td><td>Path/Directory Traversal</td><td>
[CWE-22](https://cwe.mitre.org/data/definitions/22.html)
</td><td>
[XEEandXXEServlet.java:196](https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L196)
</td><td>1</td><td>2023-07-05 01:49pm</td></tr><tr><td colspan='6'><details><summary>More info</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L191-L196
<details>
<summary>1 Data Flow detected</summary><br/>
<details>
<summary>View Data Flow 1</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L141
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L141
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L148
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L161
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L192
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L196
</details>
</details>
</td></tr></details></td></tr><tr><td><a href='#'><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20></a> High</td><td>Path/Directory Traversal</td><td>
[CWE-22](https://cwe.mitre.org/data/definitions/22.html)
</td><td>
[UnrestrictedExtensionUploadServlet.java:135](https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L135)
</td><td>1</td><td>2023-07-05 01:49pm</td></tr><tr><td colspan='6'><details><summary>More info</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L130-L135
<details>
<summary>1 Data Flow detected</summary><br/>
<details>
<summary>View Data Flow 1</summary>
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L76
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L84
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L106
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L135
</details>
</details>
</td></tr></details></td></tr></tbody></table>
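Path/directory traversal (CWE-22) accounts for most of the high-severity rows above, all reached through the upload path returned by `MultiPartFileUtils`. As a hedged illustration — the class name, method name, and directories below are hypothetical and not taken from the scanned servlets — the standard remediation is to canonicalize the resolved path and reject anything that escapes the intended base directory:

```java
import java.io.File;
import java.io.IOException;

// Hypothetical helper sketching the usual CWE-22 mitigation:
// canonicalize the candidate path and verify it stays inside
// the intended base directory before any file I/O happens.
public class SafePathResolver {

    // Returns the file for userSuppliedName if it resolves inside
    // baseDir; throws IOException on a traversal attempt.
    public static File resolveInside(File baseDir, String userSuppliedName)
            throws IOException {
        String basePath = baseDir.getCanonicalPath();
        File candidate = new File(baseDir, userSuppliedName);
        String candidatePath = candidate.getCanonicalPath();
        // The separator-suffixed prefix check prevents a sibling
        // directory like "/uploads-evil" from matching "/uploads".
        if (!candidatePath.equals(basePath)
                && !candidatePath.startsWith(basePath + File.separator)) {
            throw new IOException("Path traversal attempt: " + userSuppliedName);
        }
        return candidate;
    }
}
```

`getCanonicalPath()` resolves both `..` segments and symlinks, which is why the comparison is done on canonical forms rather than on the raw user-supplied string.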
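The scan also reports one SQL Injection finding (CWE-89). Without reproducing the flagged servlet's code, a minimal sketch of the vulnerable concatenation pattern next to the parameterized fix — the table and column names here are invented for illustration only:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Illustrative CWE-89 contrast: string concatenation (what the
// scanner flags) versus a bound PreparedStatement parameter.
// "users"/"secret"/"name" are hypothetical identifiers.
public class UserLookup {

    // Vulnerable pattern: user input is spliced into the SQL text,
    // so a crafted name can rewrite the query.
    public static String buildUnsafeQuery(String name) {
        return "SELECT secret FROM users WHERE name = '" + name + "'";
    }

    // Safe pattern: the driver transmits the value separately from
    // the statement, so quotes in the input cannot alter the query.
    public static ResultSet findUser(Connection conn, String name)
            throws SQLException {
        PreparedStatement ps =
                conn.prepareStatement("SELECT secret FROM users WHERE name = ?");
        ps.setString(1, name);
        return ps.executeQuery();
    }
}
```

The parameterized form also lets the database cache the statement plan, so the fix typically costs nothing at runtime.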
### Findings Overview
| Severity | Vulnerability Type | CWE | Language | Count |
|-|-|-|-|-|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High|Code Injection|[CWE-94](https://cwe.mitre.org/data/definitions/94.html)|Java|1|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High|File Manipulation|[CWE-73](https://cwe.mitre.org/data/definitions/73.html)|Java|3|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High|Cross-Site Scripting|[CWE-79](https://cwe.mitre.org/data/definitions/79.html)|Java|2|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High|Path/Directory Traversal|[CWE-22](https://cwe.mitre.org/data/definitions/22.html)|Java|9|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High|Server Side Request Forgery|[CWE-918](https://cwe.mitre.org/data/definitions/918.html)|Java|1|
|<img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High|SQL Injection|[CWE-89](https://cwe.mitre.org/data/definitions/89.html)|Java|1|
|<img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium|Error Messages Information Exposure|[CWE-209](https://cwe.mitre.org/data/definitions/209.html)|Java|15|
|<img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium|Trust Boundary Violation|[CWE-501](https://cwe.mitre.org/data/definitions/501.html)|Java|5|
|<img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium|Weak Pseudo-Random|[CWE-338](https://cwe.mitre.org/data/definitions/338.html)|Java|2|
|<img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium|Heap Inspection|[CWE-244](https://cwe.mitre.org/data/definitions/244.html)|Java|5|
|<img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> Low|HTTP Header Injection|[CWE-113](https://cwe.mitre.org/data/definitions/113.html)|Java|1|
|<img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> Low|Session Poisoning|[CWE-20](https://cwe.mitre.org/data/definitions/20.html)|Java|5|
|<img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> Low|Unvalidated/Open Redirect|[CWE-601](https://cwe.mitre.org/data/definitions/601.html)|Java|5|
|<img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> Low|Log Forging|[CWE-117](https://cwe.mitre.org/data/definitions/117.html)|Java|3|
1,152 | 5,026,312,029 | IssuesEvent | 2016-12-15 12:07:40 | aroberge/reeborg | https://api.github.com/repos/aroberge/reeborg | closed | Add extra canvases for animation | easier to maintain enhancement | Add extra canvases for animations, which will make updating the display more robust when using animated objects/tiles/etc.
Use this opportunity to regroup all z-index declarations in the CSS file into one block for ease of maintenance.
| True | Add extra canvases for animation - Add extra canvases for animations, which will make updating the display more robust when using animated objects/tiles/etc.
Use this opportunity to regroup all z-index in the css file in one "block" for ease of maintenance.
| main | add extra canvases for animation add extra canvases for animations which will make updating the display more robust when using animated objects tiles etc use this opportunity to regroup all z index in the css file in one block for ease of maintenance | 1 |
2,389 | 8,490,695,669 | IssuesEvent | 2018-10-27 04:03:38 | invertase/react-native-firebase | https://api.github.com/repos/invertase/react-native-firebase | closed | Firebase.database().ref().on() fires only once with Live Reload | RN-0.57.x await-maintainer-feedback database react-native rn-reload 🐞 bug 👁investigate 🤖 android | ```console
> react-native info
OS: Windows 10
Node: 8.9.4
npm: 6.1.0
Packages: (wanted => installed)
react: ^16.3.2 => 16.4.1
react-native: ^0.55.3 => 0.55.3
```
```json
// package.json
{
"dependencies": {
"react": "^16.3.2",
"react-native": "^0.55.3",
"react-native-firebase": "^4.3.6",
"react-native-google-signin": "^1.0.0-rc5",
"react-navigation": "^2.14.2",
"react-redux": "^5.0.7",
"redux": "^4.0.0",
"redux-thunk": "^2.3.0"
}
}
```
```
// /android/app/build.gradle
dependencies {
implementation(project(':react-native-firebase')) {
transitive = false
}
implementation('com.crashlytics.sdk.android:crashlytics:2.9.3@aar') {
transitive = true
}
// RNFirebase required dependencies
implementation "com.google.firebase:firebase-core:16.0.3"
implementation "com.google.android.gms:play-services-base:15.0.1"
// RNFirebase optional dependencies
implementation "com.google.firebase:firebase-ads:15.0.1"
implementation "com.google.firebase:firebase-auth:16.0.3"
implementation "com.google.firebase:firebase-config:16.0.0"
implementation "com.google.firebase:firebase-crash:16.2.0"
implementation "com.google.firebase:firebase-database:16.0.2"
implementation "com.google.firebase:firebase-firestore:17.1.0"
implementation "com.google.firebase:firebase-functions:16.1.0"
implementation "com.google.firebase:firebase-invites:16.0.3"
implementation "com.google.firebase:firebase-storage:16.0.2"
implementation "com.google.firebase:firebase-messaging:17.3.1"
implementation "com.google.firebase:firebase-perf:16.1.0"
implementation "com.facebook.react:react-native:+"
implementation "com.android.support:appcompat-v7:27.1.1"
implementation 'com.android.support:support-annotations:27.1.1'
implementation fileTree(dir: "libs", include: ["*.jar"])
implementation "com.facebook.react:react-native:+" // From node_modules
implementation(project(":react-native-google-signin")){
exclude group: "com.google.android.gms" // very important
}
implementation 'com.google.android.gms:play-services-auth:16.0.0'
}
// Run this once to be able to run the application with BUCK
// puts all compile dependencies into folder libs for BUCK to use
task copyDownloadableDepsToLibs(type: Copy) {
from configurations.compile
into 'libs'
}
apply plugin: 'com.google.gms.google-services'
com.google.gms.googleservices.GoogleServicesPlugin.config.disableVersionCheck = true
```
I'm on an **Android**, **Connected Device**, with **Live Reload enabled**
I use Firebase for **Auth** and **Realtime Database**
**After Live Reload fires** (each time I save a file, basically), `firebase.database().ref().on()` **stops working.**
Here is my code:
I have a Redux action to retrieve data for the logged-in user
```javascript
// AppRedux.js
const reducer = (state = initialState, action) => {
switch (action.type) {
case 'setCurrentUser':
      return { ...state, currentUser: action.value }
// { ... }
}
}
const store = createStore(reducer, applyMiddleware(thunkMiddleware))
export { store }
const setCurrentUser = (user) => {
return {
type: "setCurrentUser",
value: user
}
}
const watchUserData = () => {
return function(dispatch) {
const { currentUser } = firebase.auth()
if (currentUser) {
console.log(currentUser.uid) // I always have the right uid, so firebase.auth() always works
const ref = firebase.database().ref('users/' + currentUser.uid)
ref.on('value', (snapshot) => {
const user = snapshot.val()
console.log(user) // I get there only once !
dispatch(setCurrentUser(user))
}, (error) => {
console.log(error)
})
}
}
}
export { setCurrentUser, watchUserData }
```
And I call this action in a pretty simple Component (screen)
```javascript
// Main.js
// { ... other imports}
import { watchUserData } from '../redux/AppRedux'
const mapStateToProps = (state) => {
return {
currentUser: state.currentUser
}
}
const mapDispatchToProps = (dispatch) => {
return {
watchUserData: () => dispatch(watchUserData())
}
}
class Main extends React.Component {
constructor(props) {
super(props)
}
componentWillMount() {
this.props.watchUserData()
}
render() {
const { currentUser } = this.props
return (
<View>
<Text>Hello { currentUser && currentUser.firstname }!</Text>
</View>
)
}
}
export default connect(mapStateToProps, mapDispatchToProps)(Main)
```
This piece of code works perfectly.. **only once**. When Live Reload fires, it won't work anymore.
Note that `firebase.auth()` still works and gives me the right `uid`, but `firebase.database().ref('users/' + currentUser.uid).on('value')` won't fire anymore, _even if I sign out then sign in again_.
Also note that **it works perfectly with *Hot Reloading* whereas it doesn't with *Live Reloading***.
I have to `npm run android` (which is equal to `react-native run-android`) for it to fire again.. until the next Live Reload. | True | Firebase.database().ref().on() fires only once with Live Reload - ```console
> react-native info
OS: Windows 10
Node: 8.9.4
npm: 6.1.0
Packages: (wanted => installed)
react: ^16.3.2 => 16.4.1
react-native: ^0.55.3 => 0.55.3
```
```json
// package.json
{
"dependencies": {
"react": "^16.3.2",
"react-native": "^0.55.3",
"react-native-firebase": "^4.3.6",
"react-native-google-signin": "^1.0.0-rc5",
"react-navigation": "^2.14.2",
"react-redux": "^5.0.7",
"redux": "^4.0.0",
"redux-thunk": "^2.3.0"
}
}
```
```
// /android/app/build.gradle
dependencies {
implementation(project(':react-native-firebase')) {
transitive = false
}
implementation('com.crashlytics.sdk.android:crashlytics:2.9.3@aar') {
transitive = true
}
// RNFirebase required dependencies
implementation "com.google.firebase:firebase-core:16.0.3"
implementation "com.google.android.gms:play-services-base:15.0.1"
// RNFirebase optional dependencies
implementation "com.google.firebase:firebase-ads:15.0.1"
implementation "com.google.firebase:firebase-auth:16.0.3"
implementation "com.google.firebase:firebase-config:16.0.0"
implementation "com.google.firebase:firebase-crash:16.2.0"
implementation "com.google.firebase:firebase-database:16.0.2"
implementation "com.google.firebase:firebase-firestore:17.1.0"
implementation "com.google.firebase:firebase-functions:16.1.0"
implementation "com.google.firebase:firebase-invites:16.0.3"
implementation "com.google.firebase:firebase-storage:16.0.2"
implementation "com.google.firebase:firebase-messaging:17.3.1"
implementation "com.google.firebase:firebase-perf:16.1.0"
implementation "com.facebook.react:react-native:+"
implementation "com.android.support:appcompat-v7:27.1.1"
implementation 'com.android.support:support-annotations:27.1.1'
implementation fileTree(dir: "libs", include: ["*.jar"])
implementation "com.facebook.react:react-native:+" // From node_modules
implementation(project(":react-native-google-signin")){
exclude group: "com.google.android.gms" // very important
}
implementation 'com.google.android.gms:play-services-auth:16.0.0'
}
// Run this once to be able to run the application with BUCK
// puts all compile dependencies into folder libs for BUCK to use
task copyDownloadableDepsToLibs(type: Copy) {
from configurations.compile
into 'libs'
}
apply plugin: 'com.google.gms.google-services'
com.google.gms.googleservices.GoogleServicesPlugin.config.disableVersionCheck = true
```
I'm on an **Android**, **Connected Device**, with **Live Reload enabled**
I use Firebase for **Auth** and **Realtime Database**
**After Live Reload fires** (each time I save a file, basically), `firebase.database().ref().on()` **stops working.**
Here is my code:
I have a Redux action to retrieve data for the logged-in user
```javascript
// AppRedux.js
const reducer = (state = initialState, action) => {
switch (action.type) {
case 'setCurrentUser':
      return { ...state, currentUser: action.value }
// { ... }
}
}
const store = createStore(reducer, applyMiddleware(thunkMiddleware))
export { store }
const setCurrentUser = (user) => {
return {
type: "setCurrentUser",
value: user
}
}
const watchUserData = () => {
return function(dispatch) {
const { currentUser } = firebase.auth()
if (currentUser) {
console.log(currentUser.uid) // I always have the right uid, so firebase.auth() always works
const ref = firebase.database().ref('users/' + currentUser.uid)
ref.on('value', (snapshot) => {
const user = snapshot.val()
console.log(user) // I get there only once !
dispatch(setCurrentUser(user))
}, (error) => {
console.log(error)
})
}
}
}
export { setCurrentUser, watchUserData }
```
And I call this action in a pretty simple Component (screen)
```javascript
// Main.js
// { ... other imports}
import { watchUserData } from '../redux/AppRedux'
const mapStateToProps = (state) => {
return {
currentUser: state.currentUser
}
}
const mapDispatchToProps = (dispatch) => {
return {
watchUserData: () => dispatch(watchUserData())
}
}
class Main extends React.Component {
constructor(props) {
super(props)
}
componentWillMount() {
this.props.watchUserData()
}
render() {
const { currentUser } = this.props
return (
<View>
<Text>Hello { currentUser && currentUser.firstname }!</Text>
</View>
)
}
}
export default connect(mapStateToProps, mapDispatchToProps)(Main)
```
This piece of code works perfectly.. **only once**. When Live Reload fires, it won't work anymore.
Note that `firebase.auth()` still works and gives me the right `uid`, but `firebase.database().ref('users/' + currentUser.uid).on('value')` won't fire anymore, _even if I sign out then sign in again_.
Also note that **it works perfectly with *Hot Reloading* whereas it doesn't with *Live Reloading***.
I have to `npm run android` (which is equal to `react-native run-android`) for it to fire again.. until the next Live Reload. | main | firebase database ref on fires only once with live reload console react native info os windows node npm packages wanted installed react react native json package json dependencies react react native react native firebase react native google signin react navigation react redux redux redux thunk android app build gradle dependencies implementation project react native firebase transitive false implementation com crashlytics sdk android crashlytics aar transitive true rnfirebase required dependencies implementation com google firebase firebase core implementation com google android gms play services base rnfirebase optional dependencies implementation com google firebase firebase ads implementation com google firebase firebase auth implementation com google firebase firebase config implementation com google firebase firebase crash implementation com google firebase firebase database implementation com google firebase firebase firestore implementation com google firebase firebase functions implementation com google firebase firebase invites implementation com google firebase firebase storage implementation com google firebase firebase messaging implementation com google firebase firebase perf implementation com facebook react react native implementation com android support appcompat implementation com android support support annotations implementation filetree dir libs include implementation com facebook react react native from node modules implementation project react native google signin exclude group com google android gms very important implementation com google android gms play services auth run this once to be able to run the application with buck puts all compile dependencies into folder libs for buck to use task copydownloadabledepstolibs type copy from configurations compile into libs apply plugin com google gms google services 
com google gms googleservices googleservicesplugin config disableversioncheck true i m on an android connected device with live reload enabled i use firebase for auth and realtime database after live reload fires each time i save a file basically firebase database ref on stops working here is my code i have a redux action to retrieve datas from the logged user javascript appredux js const reducer state initialstate action switch action type case setcurrentuser return state currentuser action value break const store createstore reducer applymiddleware thunkmiddleware export store const setcurrentuser user return type setcurrentuser value user const watchuserdata return function dispatch const currentuser firebase auth if currentuser console log currentuser uid i always have the right uid so firebase auth always works const ref firebase database ref users currentuser uid ref on value snapshot const user snapshot val console log user i get there only once dispatch setcurrentuser user error console log error export setcurrentuser watchuserdata and i call this action in a pretty simple component screen javascript main js other imports import watchuserdata from redux appredux const mapstatetoprops state return currentuser state currentuser const mapdispatchtoprops dispatch return watchuserdata dispatch watchuserdata class main extends react component constructor props super props componentwillmount this props watchuserdata render const currentuser this props return hello currentuser currentuser firstname export default connect mapstatetoprops mapdispatchtoprops main this piece of code works perfectly only once when live reload fires it won t work anymore note that firebase auth still works and gives me the right uid but firebase database ref users currentuser uid on value won t fire anymore even if i sign out then sign in again also note that it works perfectly with hot reloading whereas it doesn t with live reloading i have to npm run android which is equal to react 
native run android for it to fire again until the next live reload | 1 |
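One plausible cause — an assumption on the reporter's setup, not confirmed anywhere in this thread — is that a live reload recreates the JS context without the previous `value` listener ever being detached, leaving the native listener bookkeeping out of sync. A defensive pattern is to keep the handler that `ref.on()` returns and call `ref.off()` when the component unmounts. The sketch below is self-contained: `makeFakeRef` is a stand-in invented for illustration, mimicking only the `on`/`off`/callback behaviour of a real `firebase.database().ref()`.

```javascript
// Stand-in for a Firebase database ref, illustrating the
// subscribe/unsubscribe lifecycle only. In the Firebase API,
// ref.on(event, cb) returns the callback so it can later be
// passed to ref.off(event, cb).
function makeFakeRef() {
  const handlers = [];
  return {
    on(event, cb) {
      handlers.push(cb);
      return cb; // mirror Firebase: on() returns the callback
    },
    off(event, cb) {
      const i = handlers.indexOf(cb);
      if (i !== -1) handlers.splice(i, 1);
    },
    emit(snapshot) {
      // Simulates a 'value' event from the database.
      handlers.forEach((cb) => cb(snapshot));
    },
  };
}

// Component-style usage: subscribe once, detach on teardown.
const ref = makeFakeRef();
let calls = 0;
const handler = ref.on('value', () => { calls += 1; });

ref.emit({});              // listener fires once
ref.off('value', handler); // componentWillUnmount equivalent
ref.emit({});              // detached: no further calls
```

In the real component this corresponds to storing `this.handler = ref.on('value', cb)` in `componentWillMount` and calling `ref.off('value', this.handler)` in `componentWillUnmount`, so a reload never leaves a stale listener behind.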
104,346 | 11,404,433,478 | IssuesEvent | 2020-01-31 09:47:55 | asciidoctor/asciidoctor-pdf | https://api.github.com/repos/asciidoctor/asciidoctor-pdf | closed | Table column autowidth issue | documentation | When generating the following tables, the longest entries in the first two columns wrap the last character. In the first table, `Label1` is always wrapped and `100` is wrapped; in the second table, `Label1` isn't wrapped (due to `Labels1` being longer) and `100` isn't wrapped due to `1000` being longer.
```
[%autowidth]
|===
| I | Label | Text
| 1 | Label1 | Lorem ipsum dolor sit amet, elit fusce duis, voluptatem ut,
mauris tempor orci odio sapien viverra ut, deserunt luctus. Tellus ut aliquet
commodo malesuada rutrum, laoreet suspendisse et amet sit turpis elit.
Pellentesque nunc nibh ipsum, eu dolor, sapien ipsum sit justo massa eros. Eu
lacus integer, ante nullam curabitur hendrerit urna scelerisque in. Vel mus
vestibulum tincidunt, mauris nec rhoncus, feugiat laborum auctor praesent amet,
accumsan ac, bibendum metus metus vestibulum. Massa orci enim elit egestas,
lectus wisi vel justo velit sed nunc. In posuere sed, risus mollis leo
ullamcorper justo, metus donec aliquam occaecati, lectus vel aliquam elit ipsum,
diam ante dictum eros.
| 10 | Label1 | Lorem ipsum dolor sit amet, elit fusce duis, voluptatem ut,
mauris tempor orci odio sapien viverra ut, deserunt luctus. Tellus ut aliquet
commodo malesuada rutrum, laoreet suspendisse et amet sit turpis elit.
Pellentesque nunc nibh ipsum, eu dolor, sapien ipsum sit justo massa eros. Eu
lacus integer, ante nullam curabitur hendrerit urna scelerisque in. Vel mus
vestibulum tincidunt, mauris nec rhoncus, feugiat laborum auctor praesent amet,
accumsan ac, bibendum metus metus vestibulum. Massa orci enim elit egestas,
lectus wisi vel justo velit sed nunc. In posuere sed, risus mollis leo
ullamcorper justo, metus donec aliquam occaecati, lectus vel aliquam elit ipsum,
diam ante dictum eros.
| 100 | Label1 | Lorem ipsum dolor sit amet, elit fusce duis, voluptatem ut,
mauris tempor orci odio sapien viverra ut, deserunt luctus. Tellus ut aliquet
commodo malesuada rutrum, laoreet suspendisse et amet sit turpis elit.
Pellentesque nunc nibh ipsum, eu dolor, sapien ipsum sit justo massa eros. Eu
lacus integer, ante nullam curabitur hendrerit urna scelerisque in. Vel mus
vestibulum tincidunt, mauris nec rhoncus, feugiat laborum auctor praesent amet,
accumsan ac, bibendum metus metus vestibulum. Massa orci enim elit egestas,
lectus wisi vel justo velit sed nunc. In posuere sed, risus mollis leo
ullamcorper justo, metus donec aliquam occaecati, lectus vel aliquam elit ipsum,
diam ante dictum eros.
|===
[%autowidth]
|===
| I | Label | Text
| 1 | Labels1 | Lorem ipsum dolor sit amet, elit fusce duis, voluptatem ut,
mauris tempor orci odio sapien viverra ut, deserunt luctus. Tellus ut aliquet
commodo malesuada rutrum, laoreet suspendisse et amet sit turpis elit.
Pellentesque nunc nibh ipsum, eu dolor, sapien ipsum sit justo massa eros. Eu
lacus integer, ante nullam curabitur hendrerit urna scelerisque in. Vel mus
vestibulum tincidunt, mauris nec rhoncus, feugiat laborum auctor praesent amet,
accumsan ac, bibendum metus metus vestibulum. Massa orci enim elit egestas,
lectus wisi vel justo velit sed nunc. In posuere sed, risus mollis leo
ullamcorper justo, metus donec aliquam occaecati, lectus vel aliquam elit ipsum,
diam ante dictum eros.
| 10 | Label1 | Lorem ipsum dolor sit amet, elit fusce duis, voluptatem ut,
mauris tempor orci odio sapien viverra ut, deserunt luctus. Tellus ut aliquet
commodo malesuada rutrum, laoreet suspendisse et amet sit turpis elit.
Pellentesque nunc nibh ipsum, eu dolor, sapien ipsum sit justo massa eros. Eu
lacus integer, ante nullam curabitur hendrerit urna scelerisque in. Vel mus
vestibulum tincidunt, mauris nec rhoncus, feugiat laborum auctor praesent amet,
accumsan ac, bibendum metus metus vestibulum. Massa orci enim elit egestas,
lectus wisi vel justo velit sed nunc. In posuere sed, risus mollis leo
ullamcorper justo, metus donec aliquam occaecati, lectus vel aliquam elit ipsum,
diam ante dictum eros.
| 100 | Label2 | Lorem ipsum dolor sit amet, elit fusce duis, voluptatem ut,
mauris tempor orci odio sapien viverra ut, deserunt luctus. Tellus ut aliquet
commodo malesuada rutrum, laoreet suspendisse et amet sit turpis elit.
Pellentesque nunc nibh ipsum, eu dolor, sapien ipsum sit justo massa eros. Eu
lacus integer, ante nullam curabitur hendrerit urna scelerisque in. Vel mus
vestibulum tincidunt, mauris nec rhoncus, feugiat laborum auctor praesent amet,
accumsan ac, bibendum metus metus vestibulum. Massa orci enim elit egestas,
lectus wisi vel justo velit sed nunc. In posuere sed, risus mollis leo
ullamcorper justo, metus donec aliquam occaecati, lectus vel aliquam elit ipsum,
diam ante dictum eros.
| 1000 | Label2 | Lorem ipsum dolor sit amet, elit fusce duis, voluptatem ut,
mauris tempor orci odio sapien viverra ut, deserunt luctus. Tellus ut aliquet
commodo malesuada rutrum, laoreet suspendisse et amet sit turpis elit.
Pellentesque nunc nibh ipsum, eu dolor, sapien ipsum sit justo massa eros. Eu
lacus integer, ante nullam curabitur hendrerit urna scelerisque in. Vel mus
vestibulum tincidunt, mauris nec rhoncus, feugiat laborum auctor praesent amet,
accumsan ac, bibendum metus metus vestibulum. Massa orci enim elit egestas,
lectus wisi vel justo velit sed nunc. In posuere sed, risus mollis leo
ullamcorper justo, metus donec aliquam occaecati, lectus vel aliquam elit ipsum,
diam ante dictum eros.
|===
```
Using the following versions:
* asciidoctor-2.0.10
* asciidoctor-pdf-1.5.0.rc.2
* prawn-d980247be8a00e7c59cd4e5785e3aa98f9856db1 (head as of 22/01/2020)
* prawn-table-515f2db294866a343b05d15f94e5fb417a32f6ff (head as of 22/01/2020)
| 1.0 | Table column autowidth issue - When generating the following tables, the longest entries in the first two columns wrap the last character. In the first table, `Label1` is always wrapped and `100` is wrapped; in the second table, `Label1` isn't wrapped (due to `Labels1` being longer) and `100` isn't wrapped due to `1000` being longer.
```
[%autowidth]
|===
| I | Label | Text
| 1 | Label1 | Lorem ipsum dolor sit amet, elit fusce duis, voluptatem ut,
mauris tempor orci odio sapien viverra ut, deserunt luctus. Tellus ut aliquet
commodo malesuada rutrum, laoreet suspendisse et amet sit turpis elit.
Pellentesque nunc nibh ipsum, eu dolor, sapien ipsum sit justo massa eros. Eu
lacus integer, ante nullam curabitur hendrerit urna scelerisque in. Vel mus
vestibulum tincidunt, mauris nec rhoncus, feugiat laborum auctor praesent amet,
accumsan ac, bibendum metus metus vestibulum. Massa orci enim elit egestas,
lectus wisi vel justo velit sed nunc. In posuere sed, risus mollis leo
ullamcorper justo, metus donec aliquam occaecati, lectus vel aliquam elit ipsum,
diam ante dictum eros.
| 10 | Label1 | Lorem ipsum dolor sit amet, elit fusce duis, voluptatem ut,
mauris tempor orci odio sapien viverra ut, deserunt luctus. Tellus ut aliquet
commodo malesuada rutrum, laoreet suspendisse et amet sit turpis elit.
Pellentesque nunc nibh ipsum, eu dolor, sapien ipsum sit justo massa eros. Eu
lacus integer, ante nullam curabitur hendrerit urna scelerisque in. Vel mus
vestibulum tincidunt, mauris nec rhoncus, feugiat laborum auctor praesent amet,
accumsan ac, bibendum metus metus vestibulum. Massa orci enim elit egestas,
lectus wisi vel justo velit sed nunc. In posuere sed, risus mollis leo
ullamcorper justo, metus donec aliquam occaecati, lectus vel aliquam elit ipsum,
diam ante dictum eros.
| 100 | Label1 | Lorem ipsum dolor sit amet, elit fusce duis, voluptatem ut,
mauris tempor orci odio sapien viverra ut, deserunt luctus. Tellus ut aliquet
commodo malesuada rutrum, laoreet suspendisse et amet sit turpis elit.
Pellentesque nunc nibh ipsum, eu dolor, sapien ipsum sit justo massa eros. Eu
lacus integer, ante nullam curabitur hendrerit urna scelerisque in. Vel mus
vestibulum tincidunt, mauris nec rhoncus, feugiat laborum auctor praesent amet,
accumsan ac, bibendum metus metus vestibulum. Massa orci enim elit egestas,
lectus wisi vel justo velit sed nunc. In posuere sed, risus mollis leo
ullamcorper justo, metus donec aliquam occaecati, lectus vel aliquam elit ipsum,
diam ante dictum eros.
|===
[%autowidth]
|===
| I | Label | Text
| 1 | Labels1 | Lorem ipsum dolor sit amet, elit fusce duis, voluptatem ut,
mauris tempor orci odio sapien viverra ut, deserunt luctus. Tellus ut aliquet
commodo malesuada rutrum, laoreet suspendisse et amet sit turpis elit.
Pellentesque nunc nibh ipsum, eu dolor, sapien ipsum sit justo massa eros. Eu
lacus integer, ante nullam curabitur hendrerit urna scelerisque in. Vel mus
vestibulum tincidunt, mauris nec rhoncus, feugiat laborum auctor praesent amet,
accumsan ac, bibendum metus metus vestibulum. Massa orci enim elit egestas,
lectus wisi vel justo velit sed nunc. In posuere sed, risus mollis leo
ullamcorper justo, metus donec aliquam occaecati, lectus vel aliquam elit ipsum,
diam ante dictum eros.
| 10 | Label1 | Lorem ipsum dolor sit amet, elit fusce duis, voluptatem ut,
mauris tempor orci odio sapien viverra ut, deserunt luctus. Tellus ut aliquet
commodo malesuada rutrum, laoreet suspendisse et amet sit turpis elit.
Pellentesque nunc nibh ipsum, eu dolor, sapien ipsum sit justo massa eros. Eu
lacus integer, ante nullam curabitur hendrerit urna scelerisque in. Vel mus
vestibulum tincidunt, mauris nec rhoncus, feugiat laborum auctor praesent amet,
accumsan ac, bibendum metus metus vestibulum. Massa orci enim elit egestas,
lectus wisi vel justo velit sed nunc. In posuere sed, risus mollis leo
ullamcorper justo, metus donec aliquam occaecati, lectus vel aliquam elit ipsum,
diam ante dictum eros.
| 100 | Label2 | Lorem ipsum dolor sit amet, elit fusce duis, voluptatem ut,
mauris tempor orci odio sapien viverra ut, deserunt luctus. Tellus ut aliquet
commodo malesuada rutrum, laoreet suspendisse et amet sit turpis elit.
Pellentesque nunc nibh ipsum, eu dolor, sapien ipsum sit justo massa eros. Eu
lacus integer, ante nullam curabitur hendrerit urna scelerisque in. Vel mus
vestibulum tincidunt, mauris nec rhoncus, feugiat laborum auctor praesent amet,
accumsan ac, bibendum metus metus vestibulum. Massa orci enim elit egestas,
lectus wisi vel justo velit sed nunc. In posuere sed, risus mollis leo
ullamcorper justo, metus donec aliquam occaecati, lectus vel aliquam elit ipsum,
diam ante dictum eros.
| 1000 | Label2 | Lorem ipsum dolor sit amet, elit fusce duis, voluptatem ut,
mauris tempor orci odio sapien viverra ut, deserunt luctus. Tellus ut aliquet
commodo malesuada rutrum, laoreet suspendisse et amet sit turpis elit.
Pellentesque nunc nibh ipsum, eu dolor, sapien ipsum sit justo massa eros. Eu
lacus integer, ante nullam curabitur hendrerit urna scelerisque in. Vel mus
vestibulum tincidunt, mauris nec rhoncus, feugiat laborum auctor praesent amet,
accumsan ac, bibendum metus metus vestibulum. Massa orci enim elit egestas,
lectus wisi vel justo velit sed nunc. In posuere sed, risus mollis leo
ullamcorper justo, metus donec aliquam occaecati, lectus vel aliquam elit ipsum,
diam ante dictum eros.
|===
```
Using the following versions:
* asciidoctor-2.0.10
* asciidoctor-pdf-1.5.0.rc.2
* prawn-d980247be8a00e7c59cd4e5785e3aa98f9856db1 (head as of 22/01/2020)
* prawn-table-515f2db294866a343b05d15f94e5fb417a32f6ff (head as of 22/01/2020)
| non_main | table column autowidth issue when generating the following tables the longest entries in the first two columns wraps the last character in the first table is always wrapped and is wrapped however in the second table isn t wrapped due to being longer and isn t wrapped due to being longer i label text lorem ipsum dolor sit amet elit fusce duis voluptatem ut mauris tempor orci odio sapien viverra ut deserunt luctus tellus ut aliquet commodo malesuada rutrum laoreet suspendisse et amet sit turpis elit pellentesque nunc nibh ipsum eu dolor sapien ipsum sit justo massa eros eu lacus integer ante nullam curabitur hendrerit urna scelerisque in vel mus vestibulum tincidunt mauris nec rhoncus feugiat laborum auctor praesent amet accumsan ac bibendum metus metus vestibulum massa orci enim elit egestas lectus wisi vel justo velit sed nunc in posuere sed risus mollis leo ullamcorper justo metus donec aliquam occaecati lectus vel aliquam elit ipsum diam ante dictum eros lorem ipsum dolor sit amet elit fusce duis voluptatem ut mauris tempor orci odio sapien viverra ut deserunt luctus tellus ut aliquet commodo malesuada rutrum laoreet suspendisse et amet sit turpis elit pellentesque nunc nibh ipsum eu dolor sapien ipsum sit justo massa eros eu lacus integer ante nullam curabitur hendrerit urna scelerisque in vel mus vestibulum tincidunt mauris nec rhoncus feugiat laborum auctor praesent amet accumsan ac bibendum metus metus vestibulum massa orci enim elit egestas lectus wisi vel justo velit sed nunc in posuere sed risus mollis leo ullamcorper justo metus donec aliquam occaecati lectus vel aliquam elit ipsum diam ante dictum eros lorem ipsum dolor sit amet elit fusce duis voluptatem ut mauris tempor orci odio sapien viverra ut deserunt luctus tellus ut aliquet commodo malesuada rutrum laoreet suspendisse et amet sit turpis elit pellentesque nunc nibh ipsum eu dolor sapien ipsum sit justo massa eros eu lacus integer ante nullam curabitur hendrerit urna scelerisque in 
vel mus vestibulum tincidunt mauris nec rhoncus feugiat laborum auctor praesent amet accumsan ac bibendum metus metus vestibulum massa orci enim elit egestas lectus wisi vel justo velit sed nunc in posuere sed risus mollis leo ullamcorper justo metus donec aliquam occaecati lectus vel aliquam elit ipsum diam ante dictum eros i label text lorem ipsum dolor sit amet elit fusce duis voluptatem ut mauris tempor orci odio sapien viverra ut deserunt luctus tellus ut aliquet commodo malesuada rutrum laoreet suspendisse et amet sit turpis elit pellentesque nunc nibh ipsum eu dolor sapien ipsum sit justo massa eros eu lacus integer ante nullam curabitur hendrerit urna scelerisque in vel mus vestibulum tincidunt mauris nec rhoncus feugiat laborum auctor praesent amet accumsan ac bibendum metus metus vestibulum massa orci enim elit egestas lectus wisi vel justo velit sed nunc in posuere sed risus mollis leo ullamcorper justo metus donec aliquam occaecati lectus vel aliquam elit ipsum diam ante dictum eros lorem ipsum dolor sit amet elit fusce duis voluptatem ut mauris tempor orci odio sapien viverra ut deserunt luctus tellus ut aliquet commodo malesuada rutrum laoreet suspendisse et amet sit turpis elit pellentesque nunc nibh ipsum eu dolor sapien ipsum sit justo massa eros eu lacus integer ante nullam curabitur hendrerit urna scelerisque in vel mus vestibulum tincidunt mauris nec rhoncus feugiat laborum auctor praesent amet accumsan ac bibendum metus metus vestibulum massa orci enim elit egestas lectus wisi vel justo velit sed nunc in posuere sed risus mollis leo ullamcorper justo metus donec aliquam occaecati lectus vel aliquam elit ipsum diam ante dictum eros lorem ipsum dolor sit amet elit fusce duis voluptatem ut mauris tempor orci odio sapien viverra ut deserunt luctus tellus ut aliquet commodo malesuada rutrum laoreet suspendisse et amet sit turpis elit pellentesque nunc nibh ipsum eu dolor sapien ipsum sit justo massa eros eu lacus integer ante nullam curabitur 
hendrerit urna scelerisque in vel mus vestibulum tincidunt mauris nec rhoncus feugiat laborum auctor praesent amet accumsan ac bibendum metus metus vestibulum massa orci enim elit egestas lectus wisi vel justo velit sed nunc in posuere sed risus mollis leo ullamcorper justo metus donec aliquam occaecati lectus vel aliquam elit ipsum diam ante dictum eros lorem ipsum dolor sit amet elit fusce duis voluptatem ut mauris tempor orci odio sapien viverra ut deserunt luctus tellus ut aliquet commodo malesuada rutrum laoreet suspendisse et amet sit turpis elit pellentesque nunc nibh ipsum eu dolor sapien ipsum sit justo massa eros eu lacus integer ante nullam curabitur hendrerit urna scelerisque in vel mus vestibulum tincidunt mauris nec rhoncus feugiat laborum auctor praesent amet accumsan ac bibendum metus metus vestibulum massa orci enim elit egestas lectus wisi vel justo velit sed nunc in posuere sed risus mollis leo ullamcorper justo metus donec aliquam occaecati lectus vel aliquam elit ipsum diam ante dictum eros using the following versions asciidoctor asciidoctor pdf rc prawn head as of prawn table head as of | 0 |
55,652 | 11,456,404,772 | IssuesEvent | 2020-02-06 21:09:22 | MoonchildProductions/UXP | https://api.github.com/repos/MoonchildProductions/UXP | closed | Get rid of the Presentation Web API | App: Basilisk App: Toolkit C: DOM Code Cleanup Fixed | This has been a WIP in Gecko since 42. It's never been enabled on release versions, and really is more gadgeteering to pollute the code base with unnecessary proprietary messaging systems.
Do we care about sending pages to Chromecast, Roku or DIAL-enabled devices? Nope.
Do we need to be able to talk to "Smart TVs"? Nope.
The 2-UA setup is dead anyway since we have no counterpart "apps" that could run on smart displays.
| 1.0 | Get rid of the Presentation Web API - This has been a WIP in Gecko since 42. It's never been enabled on release versions, and really is more gadgeteering to pollute the code base with unnecessary proprietary messaging systems.
Do we care about sending pages to Chromecast, Roku or DIAL-enabled devices? Nope.
Do we need to be able to talk to "Smart TVs"? Nope.
The 2-UA setup is dead anyway since we have no counterpart "apps" that could run on smart displays.
| non_main | get rid of the presentation web api this has been a wip in gecko since it s never been enabled on release versions and really is more gadgeteering to pollute the code base with unnecessary proprietary messaging systems do we care about sending pages to chromecast roku or dial enabled devices nope do we need to be able to talk to smart tvs nope the ua setup is dead anyway since we have no counterpart apps that could run on smart displays | 0 |
28,742 | 4,434,307,905 | IssuesEvent | 2016-08-18 01:46:42 | codeforamerica/typeseam | https://api.github.com/repos/codeforamerica/typeseam | closed | Improve form fields | back-end bugs/tests front-end ready | We currently don't ask:
- phone type (cell, home, work, other)
- can we send voicemail
- can we send paper mail
Rewrite the form to capture these questions. | 1.0 | Improve form fields - We currently don't ask:
- phone type (cell, home, work, other)
- can we send voicemail
- can we send paper mail
Rewrite the form to capture these questions. | non_main | improve form fields we currently don t ask phone type cell home work other can we send voicemail can we send paper mail rewrite the form to capture these questions | 0 |
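The three missing questions above amount to a small data model for the form. As a sketch (Python rather than the project's own stack, and every field name here is invented for illustration):

```python
from dataclasses import dataclass

# Phone types listed in the issue: cell, home, work, other.
PHONE_TYPES = ("cell", "home", "work", "other")

@dataclass
class ContactPreferences:
    """Captures the three questions the issue asks the form to add.
    Field names are illustrative, not from the project."""
    phone_type: str = "other"
    allows_voicemail: bool = False
    allows_paper_mail: bool = False

    def __post_init__(self):
        # Reject phone types outside the fixed set.
        if self.phone_type not in PHONE_TYPES:
            raise ValueError(f"unknown phone type: {self.phone_type!r}")

prefs = ContactPreferences(phone_type="cell", allows_voicemail=True)
print(prefs.allows_paper_mail)  # False
```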
324,799 | 24,017,260,021 | IssuesEvent | 2022-09-15 02:49:29 | timescale/docs | https://api.github.com/repos/timescale/docs | closed | [Feedback] Page: /install/latest/self-hosted/installation-macos/ | documentation community feedback | ### Is it easy to find the information you need?
No
### Are the instructions clear?
No
### How could we improve the Timescale documentation site?
I upgraded my Timescale extension using `brew upgrade timescaledb` (macOS) and it is no longer functional.
`pg_ctl` utility reports the following at server startup:
```
2022-09-14 16:52:14.822 PDT [68955] LOG: TimescaleDB background worker launcher connected to shared catalogs
2022-09-14 16:52:14.829 PDT [68957] ERROR: could not access file "$libdir/timescaledb-2.7.2": No such file or directory
2022-09-14 16:52:14.829 PDT [68948] LOG: background worker "TimescaleDB Background Worker Scheduler" (PID 68957) exited with exit code 1
```
It looks like something is hard-coded to reference version 2.7.2, but the new version is 2.8.0.
I followed steps 4-6 of the Homebrew installation section in the docs, but this has not resolved the problem. Now my local database is inaccessible to me.
If there are special considerations for upgrading TimescaleDB via Homebrew, or if `brew upgrade` should be avoided altogether for TimescaleDB, can you please add these to the documentation? | 1.0 | [Feedback] Page: /install/latest/self-hosted/installation-macos/ - ### Is it easy to find the information you need?
No
### Are the instructions clear?
No
### How could we improve the Timescale documentation site?
I upgraded my Timescale extension using `brew upgrade timescaledb` (macOS) and it is no longer functional.
`pg_ctl` utility reports the following at server startup:
```
2022-09-14 16:52:14.822 PDT [68955] LOG: TimescaleDB background worker launcher connected to shared catalogs
2022-09-14 16:52:14.829 PDT [68957] ERROR: could not access file "$libdir/timescaledb-2.7.2": No such file or directory
2022-09-14 16:52:14.829 PDT [68948] LOG: background worker "TimescaleDB Background Worker Scheduler" (PID 68957) exited with exit code 1
```
It looks like something is hard-coded to reference version 2.7.2, but the new version is 2.8.0.
I followed steps 4-6 of the Homebrew installation section in the docs, but this has not resolved the problem. Now my local database is inaccessible to me.
If there are special considerations for upgrading TimescaleDB via Homebrew, or if `brew upgrade` should be avoided altogether for TimescaleDB, can you please add these to the documentation? | non_main | page install latest self hosted installation macos is it easy to find the information you need no are the instructions clear no how could we improve the timescale documentation site i upgraded my timescale extension using brew upgrade timescaledb macos and it is no longer functional pg ctl utility reports the following at server startup pdt log timescaledb background worker launcher connected to shared catalogs pdt error could not access file libdir timescaledb no such file or directory pdt log background worker timescaledb background worker scheduler pid exited with exit code it looks like something is hard coded to reference version but the new version is i followed steps of the homebrew installation section in the docs but this has not resolved the problem now my local database is inaccessible to me if there are special considerations for upgrading timescaledb via homebrew or if brew upgrade should be avoided altogether for timescaledb can you please add these to the documentation | 0 |
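For reference, the version mismatch in that log can be detected mechanically, since Postgres names the missing library in the error. A minimal Python sketch — `parse_missing_library` and its regular expression are illustrative helpers, not part of Postgres or TimescaleDB:

```python
import re

# Matches the Postgres error emitted when an extension's shared
# library is missing from $libdir, e.g.:
#   ERROR: could not access file "$libdir/timescaledb-2.7.2"
_MISSING_LIB = re.compile(r'could not access file "\$libdir/([\w-]+?)-(\d+(?:\.\d+)*)"')

def parse_missing_library(log_line):
    """Return (extension, version) if the line reports a missing
    $libdir library, else None."""
    m = _MISSING_LIB.search(log_line)
    if m is None:
        return None
    return m.group(1), m.group(2)

line = ('2022-09-14 16:52:14.829 PDT [68957] ERROR: could not access '
        'file "$libdir/timescaledb-2.7.2": No such file or directory')
print(parse_missing_library(line))  # ('timescaledb', '2.7.2')
```

The extracted version ("2.7.2") is the one still referenced somewhere after the upgrade to 2.8.0, which is the mismatch the report describes.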
3,667 | 14,978,788,918 | IssuesEvent | 2021-01-28 11:18:58 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | opened | ObjectUnderTest should be preceded and followed by a blank line | Area: analyzer Area: maintainability feature | A call on `ObjectUnderTest` (SUT) should be preceded by a blank line if the preceding line contains a call to something that is no call on `ObjectUnderTest`. In addition, a call to `ObjectUnderTest` should be followed by a blank line.
The reason is ease of reading (spotting test subject calls with ease).
Following should report a violation:
```c#
var x = 42;
var y = "something";
ObjectUnderTest.DoSomething(x, y);
```
While following should **NOT** report a violation:
```c#
var x = 42;
var y = "something";
ObjectUnderTest.DoSomething(x, y);
```
Following should report a violation:
```c#
ObjectUnderTest.DoSomething(x, y);
var x = 42;
var y = "something";
```
While following should **NOT** report a violation:
```c#
ObjectUnderTest.DoSomething(x, y);
var x = 42;
var y = "something";
```
| True | ObjectUnderTest should be preceded and followed by a blank line - A call on `ObjectUnderTest` (SUT) should be preceded by a blank line if the preceding line contains a call to something that is no call on `ObjectUnderTest`. In addition, a call to `ObjectUnderTest` should be followed by a blank line.
The reason is ease of reading (spotting test subject calls with ease).
Following should report a violation:
```c#
var x = 42;
var y = "something";
ObjectUnderTest.DoSomething(x, y);
```
While following should **NOT** report a violation:
```c#
var x = 42;
var y = "something";
ObjectUnderTest.DoSomething(x, y);
```
Following should report a violation:
```c#
ObjectUnderTest.DoSomething(x, y);
var x = 42;
var y = "something";
```
While following should **NOT** report a violation:
```c#
ObjectUnderTest.DoSomething(x, y);
var x = 42;
var y = "something";
```
| main | objectundertest should be preceded and followed by a blank line a call on objectundertest sut should be preceded by a blank line if the preceding line contains a call to something that is no call on objectundertest in addition a call to objectundertest should be followed by a blank line the reason is ease of reading spotting test subject calls with ease following should report a violation c var x var y something objectundertest dosomething x y while following should not report a violation c var x var y something objectundertest dosomething x y following should report a violation c objectundertest dosomething x y var x var y something while following should not report a violation c objectundertest dosomething x y var x var y something | 1 |
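The rule described above can be prototyped as a plain line scan before committing to a full analyzer. A rough textual model in Python — a real analyzer would inspect syntax nodes, and `find_violations` is an invented name:

```python
def find_violations(source):
    """Yield 1-based line numbers where a statement touching
    ObjectUnderTest is not separated by a blank line from an
    adjacent statement that does not touch ObjectUnderTest
    (a rough textual model of the rule)."""
    lines = source.splitlines()
    for i, line in enumerate(lines):
        if "ObjectUnderTest" not in line:
            continue
        prev = lines[i - 1] if i > 0 else ""
        nxt = lines[i + 1] if i + 1 < len(lines) else ""
        if prev.strip() and "ObjectUnderTest" not in prev:
            yield i + 1  # needs a preceding blank line
        elif nxt.strip() and "ObjectUnderTest" not in nxt:
            yield i + 1  # needs a following blank line

bad = ('var x = 42;\n'
      'var y = "something";\n'
      'ObjectUnderTest.DoSomething(x, y);')
print(list(find_violations(bad)))  # [3]
```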
276,711 | 30,522,988,994 | IssuesEvent | 2023-07-19 09:19:28 | NickSham-Sandbox/VulnerableNode | https://api.github.com/repos/NickSham-Sandbox/VulnerableNode | opened | swig-1.4.2.tgz: 1 vulnerabilities (highest severity is: 7.5) | Mend: dependency security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>swig-1.4.2.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/uglify-js/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/NickSham-Sandbox/VulnerableNode/commit/06f47b84b81980ff616cfd3010ec7417a9bb80dc">06f47b84b81980ff616cfd3010ec7417a9bb80dc</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (swig version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2015-8858](https://www.mend.io/vulnerability-database/CVE-2015-8858) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | uglify-js-2.4.24.tgz | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2015-8858</summary>
### Vulnerable Library - <b>uglify-js-2.4.24.tgz</b></p>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-2.4.24.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-2.4.24.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- swig-1.4.2.tgz (Root Library)
- :x: **uglify-js-2.4.24.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/NickSham-Sandbox/VulnerableNode/commit/06f47b84b81980ff616cfd3010ec7417a9bb80dc">06f47b84b81980ff616cfd3010ec7417a9bb80dc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The uglify-js package before 2.6.0 for Node.js allows attackers to cause a denial of service (CPU consumption) via crafted input in a parse call, aka a "regular expression denial of service (ReDoS)."
<p>Publish Date: 2017-01-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-8858>CVE-2015-8858</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858</a></p>
<p>Release Date: 2017-01-23</p>
<p>Fix Resolution: v2.6.0</p>
</p>
<p></p>
</details> | True | swig-1.4.2.tgz: 1 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>swig-1.4.2.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/uglify-js/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/NickSham-Sandbox/VulnerableNode/commit/06f47b84b81980ff616cfd3010ec7417a9bb80dc">06f47b84b81980ff616cfd3010ec7417a9bb80dc</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (swig version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2015-8858](https://www.mend.io/vulnerability-database/CVE-2015-8858) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | uglify-js-2.4.24.tgz | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2015-8858</summary>
### Vulnerable Library - <b>uglify-js-2.4.24.tgz</b></p>
<p>JavaScript parser, mangler/compressor and beautifier toolkit</p>
<p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-2.4.24.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-2.4.24.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/uglify-js/package.json</p>
<p>
Dependency Hierarchy:
- swig-1.4.2.tgz (Root Library)
- :x: **uglify-js-2.4.24.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/NickSham-Sandbox/VulnerableNode/commit/06f47b84b81980ff616cfd3010ec7417a9bb80dc">06f47b84b81980ff616cfd3010ec7417a9bb80dc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The uglify-js package before 2.6.0 for Node.js allows attackers to cause a denial of service (CPU consumption) via crafted input in a parse call, aka a "regular expression denial of service (ReDoS)."
<p>Publish Date: 2017-01-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-8858>CVE-2015-8858</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858</a></p>
<p>Release Date: 2017-01-23</p>
<p>Fix Resolution: v2.6.0</p>
</p>
<p></p>
</details> | non_main | swig tgz vulnerabilities highest severity is vulnerable library swig tgz path to dependency file package json path to vulnerable library node modules uglify js package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in swig version remediation available high uglify js tgz transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the details section below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library uglify js tgz javascript parser mangler compressor and beautifier toolkit library home page a href path to dependency file package json path to vulnerable library node modules uglify js package json dependency hierarchy swig tgz root library x uglify js tgz vulnerable library found in head commit a href found in base branch master vulnerability details the uglify js package before for node js allows attackers to cause a denial of service cpu consumption via crafted input in a parse call aka a regular expression denial of service redos publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution | 0 |
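The remediation data above ("Fix Resolution: v2.6.0") boils down to a version comparison against the installed 2.4.24. A minimal sketch — `is_vulnerable` and the dotted-numeric parsing are assumptions for illustration, and they ignore pre-release tags that full semver handles:

```python
def parse_version(v):
    """'2.4.24' -> (2, 4, 24); a leading 'v' is tolerated."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def is_vulnerable(installed, fixed_in):
    """True if the installed version predates the first fixed version."""
    return parse_version(installed) < parse_version(fixed_in)

print(is_vulnerable("2.4.24", "v2.6.0"))  # True: 2.4.24 predates the ReDoS fix
print(is_vulnerable("2.6.0", "v2.6.0"))   # False
```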
386,018 | 11,430,197,311 | IssuesEvent | 2020-02-04 09:38:10 | level73/membernet | https://api.github.com/repos/level73/membernet | closed | Log-in error | Platform: Membernet Priority: High Type: Bug | 
I just set up an account for an ILC staff member (g.baldinelli) and when testing her account by logging in, I get the bug pictured above. | 1.0 | Log-in error - 
I just set up an account for an ILC staff member (g.baldinelli) and when testing her account by logging in, I get the bug pictured above. | non_main | log in error i just set up an account for an ilc staff member g baldinelli and when testing her account by logging in i get the bug pictured above | 0 |
5,619 | 28,114,232,198 | IssuesEvent | 2023-03-31 09:31:01 | microsoft/mu_oem_sample | https://api.github.com/repos/microsoft/mu_oem_sample | closed | [Bug]: DFCI_CertChainingTest report fail when manufacturing mode | type:bug state:needs-maintainer-feedback state:needs-triage state:needs-owner urgency:medium | ### Is there an existing issue for this?
- [x] I have searched existing issues
### Current Behavior
Following the current implementation of the DfciUiDisplayAuthDialog function, [DFCI_CertChainingTest](https://microsoft.github.io/mu/dyn/mu_feature_dfci/DfciPkg/UnitTests/DfciTests/readme/#extended-testing) reports 5 Fails when in manufacturing mode.
The DfciUiDisplayAuthDialog function will accept the enrollment without the proper key when in manufacturing mode.
### Expected Behavior
mu_oem_sample should provide the sample code which can pass [all DFCI test cases](https://microsoft.github.io/mu/dyn/mu_feature_dfci/DfciPkg/UnitTests/DfciTests/readme/#test-cases-collections).
Should the DfciUiDisplayAuthDialog function be changed as below:
```
if (DfciUiIsManufacturingMode ()) {
*Result = DFCI_MB_IDCANCEL;
return EFI_SUCCESS;
}
```
### Steps To Reproduce
1. Stub `DfciUiIsManufacturingMode` so that it always reports manufacturing mode:
```
BOOLEAN
EFIAPI
DfciUiIsManufacturingMode (
VOID
) {
return TRUE;
}
```
2. Run the [DFCI_CertChainingTest](https://microsoft.github.io/mu/dyn/mu_feature_dfci/DfciPkg/UnitTests/DfciTests/readme/#extended-testing) test case and it will report 5 Fails.
### Build Environment
```markdown
- OS(s): Windows 10
- Tool Chain(s): VS2019
- Targets Impacted: RELEASE
```
### Version Information
```text
Commit: 1709eca4a7e17372b9d0c801cbd1fa8e7cbf7b83
```
### Urgency
Medium
### Are you going to fix this?
Someone else needs to fix it
### Do you need maintainer feedback?
Maintainer feedback requested
### Anything else?
_No response_ | True | [Bug]: DFCI_CertChainingTest report fail when manufacturing mode - ### Is there an existing issue for this?
- [x] I have searched existing issues
### Current Behavior
Following the current implementation of the DfciUiDisplayAuthDialog function, [DFCI_CertChainingTest](https://microsoft.github.io/mu/dyn/mu_feature_dfci/DfciPkg/UnitTests/DfciTests/readme/#extended-testing) reports 5 Fails when in manufacturing mode.
The DfciUiDisplayAuthDialog function will accept the enrollment without the proper key when in manufacturing mode.
### Expected Behavior
mu_oem_sample should provide the sample code which can pass [all DFCI test cases](https://microsoft.github.io/mu/dyn/mu_feature_dfci/DfciPkg/UnitTests/DfciTests/readme/#test-cases-collections).
Should the DfciUiDisplayAuthDialog function be changed as below:
```
if (DfciUiIsManufacturingMode ()) {
*Result = DFCI_MB_IDCANCEL;
return EFI_SUCCESS;
}
```
### Steps To Reproduce
1. Stub `DfciUiIsManufacturingMode` so that it always reports manufacturing mode:
```
BOOLEAN
EFIAPI
DfciUiIsManufacturingMode (
VOID
) {
return TRUE;
}
```
2. Run the [DFCI_CertChainingTest](https://microsoft.github.io/mu/dyn/mu_feature_dfci/DfciPkg/UnitTests/DfciTests/readme/#extended-testing) test case and it will report 5 Fails.
### Build Environment
```markdown
- OS(s): Windows 10
- Tool Chain(s): VS2019
- Targets Impacted: RELEASE
```
### Version Information
```text
Commit: 1709eca4a7e17372b9d0c801cbd1fa8e7cbf7b83
```
### Urgency
Medium
### Are you going to fix this?
Someone else needs to fix it
### Do you need maintainer feedback?
Maintainer feedback requested
### Anything else?
_No response_ | main | dfci certchainingtest report fail when manufacturing mode is there an existing issue for this i have searched existing issues current behavior follow the implementation of dfciuidisplayauthdialog function fails when manufacturing mode dfciuidisplayauthdialog function will accept the enrollment without the proper key when manufacturing mode expected behavior mu oem sample should provide the sample code which can pass should the dfciuidisplayauthdialog function be changed as below if dfciuiismanufacturingmode result dfci mb idcancel return efi success steps to reproduce boolean efiapi dfciuiismanufacturingmode void return true run case and it will report fails build environment markdown os s windows tool chain s targets impacted release version information text commit urgency medium are you going to fix this someone else needs to fix it do you need maintainer feedback maintainer feedback requested anything else no response | 1 |
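The proposed change reduces to: in manufacturing mode, skip the dialog and cancel the enrollment so the cert-chaining tests see a rejection. That decision logic can be modeled in a few lines — the Python names and constant values below are illustrative stand-ins for the C constants, not the real firmware API:

```python
# Illustrative stand-ins for the DFCI message-box result codes.
DFCI_MB_IDOK = 1
DFCI_MB_IDCANCEL = 2

def auth_dialog_result(manufacturing_mode, user_choice):
    """Model of the proposed DfciUiDisplayAuthDialog behavior:
    in manufacturing mode the dialog is not shown and the
    enrollment is cancelled; otherwise the user's choice stands."""
    if manufacturing_mode:
        return DFCI_MB_IDCANCEL
    return user_choice

print(auth_dialog_result(True, DFCI_MB_IDOK))   # 2 -> cancelled
print(auth_dialog_result(False, DFCI_MB_IDOK))  # 1 -> user decision honored
```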
1,800 | 6,575,922,908 | IssuesEvent | 2017-09-11 17:50:57 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Enhancement request: add ability to modify instance user-data after initial launch. | affects_2.2 aws cloud feature_idea waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ec2.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0 (devel fa5f8a7543) last updated 2016/09/18 02:10:05 (GMT -700)
lib/ansible/modules/core: (detached HEAD 488f082761) last updated 2016/09/18 02:10:48 (GMT -700)
lib/ansible/modules/extras: (detached HEAD 24da3602c6) last updated 2016/09/18 02:10:57 (GMT -700)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible']
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
Default
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
Instance user-data cannot be set or updated except on the first launch of an instance.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
This playbook has no effect on user-data whatsoever:
```
- name: Stop nodes to set user-data
ec2:
region: "{{ aws_region }}"
state: "stopped"
wait: yes
instance_ids: "i-12345"
- name: Start nodes after setting user data
ec2:
region: "{{ aws_region }}"
state: "running"
user_data: "{{ user_data }}"
wait: no
instance_ids: "i-12345"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
I expect the user-data to get updated in EC2 and the instance to start up.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
The instance starts up but user-data remains unaffected.
| True | Enhancement request: add ability to modify instance user-data after initial launch. - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ec2.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0 (devel fa5f8a7543) last updated 2016/09/18 02:10:05 (GMT -700)
lib/ansible/modules/core: (detached HEAD 488f082761) last updated 2016/09/18 02:10:48 (GMT -700)
lib/ansible/modules/extras: (detached HEAD 24da3602c6) last updated 2016/09/18 02:10:57 (GMT -700)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible']
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
Default
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
Instance user-data cannot be set or updated except on the first launch of an instance.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
This playbook has no effect on user-data whatsoever:
```
- name: Stop nodes to set user-data
ec2:
region: "{{ aws_region }}"
state: "stopped"
wait: yes
instance_ids: "i-12345"
- name: Start nodes after setting user data
ec2:
region: "{{ aws_region }}"
state: "running"
user_data: "{{ user_data }}"
wait: no
instance_ids: "i-12345"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
I expect the user-data to get updated in EC2 and the instance to start up.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
The instance starts up but user-data remains unaffected.
| main | enhancement request add ability to modify instance user data after initial launch issue type feature idea component name py ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables default os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary instance user data cannot be set or updated except on the first launch of an instance steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used this playbook has no effect on user data whatsoever name stop nodes to set user data region aws region state stopped wait yes instance ids i name start nodes after setting user data region aws region state running user data user data wait no instance ids i expected results i expect the user data to get updated in and the instance to start up actual results the instance starts up but user data remains unaffected | 1 |
1,717 | 6,574,472,923 | IssuesEvent | 2017-09-11 13:01:19 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Copy Module Fails with relatively large files | affects_2.3 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Copy module
##### ANSIBLE VERSION
```
ansible 2.3.0 (devel 32a7b4ce71) last updated 2016/11/03 11:04:12 (GMT -500)
lib/ansible/modules/core: (detached HEAD 7cc4d3fe04) last updated 2016/11/03 11:04:33 (GMT -500)
lib/ansible/modules/extras: (detached HEAD e4bc618956) last updated 2016/11/03 11:04:54 (GMT -500)
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
$ cat /etc/ansible/hosts
local ansible_host=127.0.0.1
test_host ansible_host=192.168.56.50
$ egrep -v "^#|^$" /etc/ansible/ansible.cfg
[defaults]
log_path = /var/log/ansible.log
[privilege_escalation]
[paramiko_connection]
[ssh_connection]
[accelerate]
[selinux]
[colors]
```
##### OS / ENVIRONMENT
```
$ hostnamectl
Static hostname: ansible
Icon name: computer-vm
Chassis: vm
Boot ID: 44bef02e34ee4cad9ddf55df52cb03c5
Operating System: Ubuntu 14.04.5 LTS
Kernel: Linux 3.13.0-100-generic
Architecture: x86_64
```
##### SUMMARY
When trying to copy a large file (the JDK installer, 195 MB), the copy module fails, but it works perfectly with small files (conf files, for example)
##### STEPS TO REPRODUCE
Run an ad-hoc command with the large file
```
ansible -vvv test_host -s -m copy -a 'src=/vagrant/jdk-8u111-windows-x64.exe dest=/var/www/html/ owner=www-data group=www-data mode=0644'
```
##### EXPECTED RESULTS
Expected a successful file copy
##### ACTUAL RESULTS
It fails with MemoryError
```
Using /etc/ansible/ansible.cfg as config file
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/home/vagrant/ansible/lib/ansible/executor/task_executor.py", line 119, in run
res = self._execute()
File "/home/vagrant/ansible/lib/ansible/executor/task_executor.py", line 490, in _execute
result = self._handler.run(task_vars=variables)
File "/home/vagrant/ansible/lib/ansible/plugins/action/copy.py", line 157, in run
source_full = self._loader.get_real_file(source_full)
File "/home/vagrant/ansible/lib/ansible/parsing/dataloader.py", line 402, in get_real_file
if is_encrypted_file(f):
File "/home/vagrant/ansible/lib/ansible/parsing/vault/__init__.py", line 152, in is_encrypted_file
b_vaulttext = to_bytes(to_text(vaulttext, encoding='ascii', errors='strict'), encoding='ascii', errors='strict')
File "/home/vagrant/ansible/lib/ansible/module_utils/_text.py", line 177, in to_text
return obj.decode(encoding, errors)
MemoryError
test_host | FAILED! => {
"failed": true,
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
``` | True | Copy Module Fails with relatively large files - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Copy module
##### ANSIBLE VERSION
```
ansible 2.3.0 (devel 32a7b4ce71) last updated 2016/11/03 11:04:12 (GMT -500)
lib/ansible/modules/core: (detached HEAD 7cc4d3fe04) last updated 2016/11/03 11:04:33 (GMT -500)
lib/ansible/modules/extras: (detached HEAD e4bc618956) last updated 2016/11/03 11:04:54 (GMT -500)
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
$ cat /etc/ansible/hosts
local ansible_host=127.0.0.1
test_host ansible_host=192.168.56.50
$ egrep -v "^#|^$" /etc/ansible/ansible.cfg
[defaults]
log_path = /var/log/ansible.log
[privilege_escalation]
[paramiko_connection]
[ssh_connection]
[accelerate]
[selinux]
[colors]
```
##### OS / ENVIRONMENT
```
$ hostnamectl
Static hostname: ansible
Icon name: computer-vm
Chassis: vm
Boot ID: 44bef02e34ee4cad9ddf55df52cb03c5
Operating System: Ubuntu 14.04.5 LTS
Kernel: Linux 3.13.0-100-generic
Architecture: x86_64
```
##### SUMMARY
When trying to copy a large file (jdk installer 195Mb) the copy module fails, but works perfectly with small files (conf files for example)
##### STEPS TO REPRODUCE
Run an ad-hoc command with the large file
```
ansible -vvv test_host -s -m copy -a 'src=/vagrant/jdk-8u111-windows-x64.exe dest=/var/www/html/ owner=www-data group=www-data mode=0644'
```
##### EXPECTED RESULTS
Expected a successful file copy
##### ACTUAL RESULTS
It fails with MemoryError
```
Using /etc/ansible/ansible.cfg as config file
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/home/vagrant/ansible/lib/ansible/executor/task_executor.py", line 119, in run
res = self._execute()
File "/home/vagrant/ansible/lib/ansible/executor/task_executor.py", line 490, in _execute
result = self._handler.run(task_vars=variables)
File "/home/vagrant/ansible/lib/ansible/plugins/action/copy.py", line 157, in run
source_full = self._loader.get_real_file(source_full)
File "/home/vagrant/ansible/lib/ansible/parsing/dataloader.py", line 402, in get_real_file
if is_encrypted_file(f):
File "/home/vagrant/ansible/lib/ansible/parsing/vault/__init__.py", line 152, in is_encrypted_file
b_vaulttext = to_bytes(to_text(vaulttext, encoding='ascii', errors='strict'), encoding='ascii', errors='strict')
File "/home/vagrant/ansible/lib/ansible/module_utils/_text.py", line 177, in to_text
return obj.decode(encoding, errors)
MemoryError
test_host | FAILED! => {
"failed": true,
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
``` | main | copy module fails with relatively large files issue type bug report component name copy module ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path default w o overrides configuration cat etc ansible hosts local ansible host test host ansible host egrep v etc ansible ansible cfg log path var log ansible log os environment hostnamectl static hostname ansible icon name computer vm chassis vm boot id operating system ubuntu lts kernel linux generic architecture summary when trying to copy a large file jdk installer the copy module fails but works perfectly with small files conf files for example steps to reproduce run an ad hoc command with the large file ansible vvv test host s m copy a src vagrant jdk windows exe dest var www html owner www data group www data mode expected results expected a successfully file copy actual results it fails with memoryerror using etc ansible ansible cfg as config file an exception occurred during task execution the full traceback is traceback most recent call last file home vagrant ansible lib ansible executor task executor py line in run res self execute file home vagrant ansible lib ansible executor task executor py line in execute result self handler run task vars variables file home vagrant ansible lib ansible plugins action copy py line in run source full self loader get real file source full file home vagrant ansible lib ansible parsing dataloader py line in get real file if is encrypted file f file home vagrant ansible lib ansible parsing vault init py line in is encrypted file b vaulttext to bytes to text vaulttext encoding ascii errors strict encoding ascii errors strict file home vagrant ansible lib ansible module utils text py line in to text return obj decode encoding errors memoryerror test host failed failed true msg unexpected failure 
during module execution stdout | 1 |
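The traceback shows `is_encrypted_file()` decoding the entire 195 MB source file just to look for a vault header, which is what exhausts memory. A minimal sketch of the general fix idea — inspect only a bounded prefix of the file before deciding — assuming the `$ANSIBLE_VAULT` marker at the start of the file is sufficient for the check (the helper name below is illustrative, not Ansible's actual API):

```python
import io

VAULT_HEADER = b"$ANSIBLE_VAULT"  # real marker at the start of vault files

def is_probably_encrypted(fileobj):
    """Check only a small prefix instead of decoding the whole file."""
    pos = fileobj.tell()
    prefix = fileobj.read(len(VAULT_HEADER))
    fileobj.seek(pos)  # leave the stream where we found it
    return prefix == VAULT_HEADER

# A ~200 MB binary never needs to be loaded into memory for this check:
fake_exe = io.BytesIO(b"MZ" + b"\x00" * 1024)            # PE files start with 'MZ'
vault_file = io.BytesIO(b"$ANSIBLE_VAULT;1.1;AES256\n")

print(is_probably_encrypted(fake_exe))    # False
print(is_probably_encrypted(vault_file))  # True
```

With a bounded read like this, the memory cost of the check is constant regardless of file size.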
87,020 | 25,009,970,648 | IssuesEvent | 2022-11-03 14:36:58 | Crocoblock/suggestions | https://api.github.com/repos/Crocoblock/suggestions | closed | Global settings for jet woobuilder | JetWooBuilder | hello crocoblock team
I use 12 product carousels (product grid) on my website. Every time I decide to edit the style in them, I have to do it for each one. Even for some I forget to do. So this is a big challenge for me and many users.
So it's great if you add a new feature, that we style the product grid or list widget once, and then connect it to all the product grid widgets on the website. If we make a change in the style of one of the widgets, the style of all product grid widgets will change accordingly, and this will greatly increase our work speed. | 1.0 | Global settings for jet woobuilder - hello crocoblock team
I use 12 product carousels (product grid) on my website. Every time I decide to edit the style in them, I have to do it for each one. Even for some I forget to do. So this is a big challenge for me and many users.
So it's great if you add a new feature, that we style the product grid or list widget once, and then connect it to all the product grid widgets on the website. If we make a change in the style of one of the widgets, the style of all product grid widgets will change accordingly, and this will greatly increase our work speed. | non_main | global settings for jet woobuilder hello crocoblock team i use product carousels product grid on my website every time i decide to edit the style in them i have to do it for each one even for some i forget to do so this is a big challenge for me and many users so it s great if you add a new feature that we style the product grid or list widget once and then connect it to all the product grid widgets on the website if we make a change in the style of one of the widgets the style of all product grid widgets will change accordingly and this will greatly increase our work speed | 0 |
3,856 | 17,019,950,251 | IssuesEvent | 2021-07-02 17:15:57 | obs-websocket-community-projects/obs-websocket-java | https://api.github.com/repos/obs-websocket-community-projects/obs-websocket-java | closed | Usage Examples | 5.X.X Support maintainability | Rather than providing examples in the README, we could provide a small examples module/directory showing how to use the client for some common use cases:
- [x] Connect
- [x] Send a Request through convenience method
- [x] Send a Request through *Request class
- [x] Disconnect
- [x] Reconnect
- etc as requested (adding new examples could be tied directly to discussions or issues) | True | Usage Examples - Rather than providing examples in the README, we could provide a small examples module/directory showing how to use the client for some common use cases:
- [x] Connect
- [x] Send a Request through convenience method
- [x] Send a Request through *Request class
- [x] Disconnect
- [x] Reconnect
- etc as requested (adding new examples could be tied directly to discussions or issues) | main | usage examples rather than providing examples in the readme we could provide a small examples module directory showing how to use the client for some common use cases connect send a request through convinience method send a request through request class disconnect reconnect etc as requested adding new examples could be tied directly to discussions or issues | 1 |
479 | 3,755,249,570 | IssuesEvent | 2016-03-12 14:37:55 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | People in Space IA | Maintainer Input Requested Suggestion | I think we can add designation such as `Commander` as well to IA. This is readily available at data source http://www.howmanypeopleareinspacerightnow.com/
------
IA Page: http://duck.co/ia/view/people_in_space
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @elebow | True | People in Space IA - I think we can add designation such as `Commander` as well to IA. This is readily available at data source http://www.howmanypeopleareinspacerightnow.com/
------
IA Page: http://duck.co/ia/view/people_in_space
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @elebow | main | people in space ia i think we can add designation such as commander as well to ia this is readily available at data source ia page elebow | 1 |
1,603 | 6,572,392,193 | IssuesEvent | 2017-09-11 01:58:05 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | route53_facts doesn't let me list zone's records | affects_2.1 aws bug_report cloud waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
route53_facts
##### ANSIBLE VERSION
```
ansible --version
ansible 2.1.1.0
config file = /Users/wkoszek/r/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
# Ansible configuration file
[defaults]
timeout = 120
remote_user = ec2-user
forks = 20
host_key_checking = False
display_skipped_hosts = False
force_color = 1
# Path information
log_path = ./logs/ansible.log
retry_files_save_path = ./retries/
roles_path = ./roles/
# Vault settings
vault_password_file = ./vault.creds
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ControlPath=~/.ssh/%r@%h:%p
pipelining = True
```
##### OS / ENVIRONMENT
```
Darwin wkoszek_mba 15.6.0 Darwin Kernel Version 15.6.0: Mon Aug 29 20:21:34 PDT 2016; root:xnu-3248.60.11~1/RELEASE_X86_64 x86_64
```
##### SUMMARY
I have a zone ID. Let's call it MYZONEID. I take it from the right column of the zone listing in Route 53. It's an existing zone with lots of records. I'd like to get all of them into an Ansible variable so that I can iterate over it later.
##### STEPS TO REPRODUCE
I run it like this:
```
ansible-playbook -vvvv -i inventory playbooks/dns_route53.yml
```
**dns_route53.yml**
```
---
- hosts: localhost
connection: local
gather_facts: no
tasks:
- name: List the first 20 resource record sets in a given hosted zone
route53_facts:
query: record_sets
hosted_zone_id: 'MYZONEID'
max_items: 20
register: recset
```
##### EXPECTED RESULTS
I'd expect this to just work and recset to have something.
##### ACTUAL RESULTS
```
ansible-playbook -vvvv -i inventory playbooks/dns_route53.yml
Using /Users/wkoszek/r/ansible/ansible.cfg as config file
Loaded callback default of type stdout, v2.0
PLAYBOOK: dns_route53.yml ******************************************************
1 plays in playbooks/dns_route53.yml
PLAY [localhost] ***************************************************************
TASK [List the first 20 resource record sets in a given hosted zone] ***********
task path: /Users/wkoszek/r/ansible/playbooks/dns_route53.yml:7
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: wkoszek
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_GB.UTF-8 LC_ALL=en_GB.UTF-8 LC_MESSAGES=en_GB.UTF-8 /usr/local/opt/python/bin/python2.7 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/var/folders/l9/94xvjj6n3jgcnbdf57q_drv00000gn/T/ansible_VLttIK/ansible_module_route53_facts.py", line 436, in <module>
main()
File "/var/folders/l9/94xvjj6n3jgcnbdf57q_drv00000gn/T/ansible_VLttIK/ansible_module_route53_facts.py", line 414, in main
region, ec2_url, aws_connect_kwargs = get_aws_connection_info(module, boto3=True)
File "/var/folders/l9/94xvjj6n3jgcnbdf57q_drv00000gn/T/ansible_VLttIK/ansible_modlib.zip/ansible/module_utils/ec2.py", line 154, in get_aws_connection_info
TypeError: fail_json() takes exactly 1 argument (2 given)
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "route53_facts"}, "module_stderr": "Traceback (most recent call last):\n File \"/var/folders/l9/94xvjj6n3jgcnbdf57q_drv00000gn/T/ansible_VLttIK/ansible_module_route53_facts.py\", line 436, in <module>\n main()\n File \"/var/folders/l9/94xvjj6n3jgcnbdf57q_drv00000gn/T/ansible_VLttIK/ansible_module_route53_facts.py\", line 414, in main\n region, ec2_url, aws_connect_kwargs = get_aws_connection_info(module, boto3=True)\n File \"/var/folders/l9/94xvjj6n3jgcnbdf57q_drv00000gn/T/ansible_VLttIK/ansible_modlib.zip/ansible/module_utils/ec2.py\", line 154, in get_aws_connection_info\nTypeError: fail_json() takes exactly 1 argument (2 given)\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @./retries/dns_route53.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
```
This works OK:
```
aws route53 list-resource-record-sets --hosted-zone-id MYZONEID
```
and I can use Ansible for all other stuff (creating/deleting zones, creating records) just fine.
| True | route53_facts doesn't let me list zone's records - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
route53_facts
##### ANSIBLE VERSION
```
ansible --version
ansible 2.1.1.0
config file = /Users/wkoszek/r/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
# Ansible configuration file
[defaults]
timeout = 120
remote_user = ec2-user
forks = 20
host_key_checking = False
display_skipped_hosts = False
force_color = 1
# Path information
log_path = ./logs/ansible.log
retry_files_save_path = ./retries/
roles_path = ./roles/
# Vault settings
vault_password_file = ./vault.creds
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ControlPath=~/.ssh/%r@%h:%p
pipelining = True
```
##### OS / ENVIRONMENT
```
Darwin wkoszek_mba 15.6.0 Darwin Kernel Version 15.6.0: Mon Aug 29 20:21:34 PDT 2016; root:xnu-3248.60.11~1/RELEASE_X86_64 x86_64
```
##### SUMMARY
I have a zone ID. Let's call it MYZONEID. I take it from the right column of the zone listing in Route 53. It's an existing zone with lots of records. I'd like to get all of them into an Ansible variable so that I can iterate over it later.
##### STEPS TO REPRODUCE
I run it like this:
```
ansible-playbook -vvvv -i inventory playbooks/dns_route53.yml
```
**dns_route53.yml**
```
---
- hosts: localhost
connection: local
gather_facts: no
tasks:
- name: List the first 20 resource record sets in a given hosted zone
route53_facts:
query: record_sets
hosted_zone_id: 'MYZONEID'
max_items: 20
register: recset
```
##### EXPECTED RESULTS
I'd expect this to just work and recset to have something.
##### ACTUAL RESULTS
```
ansible-playbook -vvvv -i inventory playbooks/dns_route53.yml
Using /Users/wkoszek/r/ansible/ansible.cfg as config file
Loaded callback default of type stdout, v2.0
PLAYBOOK: dns_route53.yml ******************************************************
1 plays in playbooks/dns_route53.yml
PLAY [localhost] ***************************************************************
TASK [List the first 20 resource record sets in a given hosted zone] ***********
task path: /Users/wkoszek/r/ansible/playbooks/dns_route53.yml:7
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: wkoszek
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_GB.UTF-8 LC_ALL=en_GB.UTF-8 LC_MESSAGES=en_GB.UTF-8 /usr/local/opt/python/bin/python2.7 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/var/folders/l9/94xvjj6n3jgcnbdf57q_drv00000gn/T/ansible_VLttIK/ansible_module_route53_facts.py", line 436, in <module>
main()
File "/var/folders/l9/94xvjj6n3jgcnbdf57q_drv00000gn/T/ansible_VLttIK/ansible_module_route53_facts.py", line 414, in main
region, ec2_url, aws_connect_kwargs = get_aws_connection_info(module, boto3=True)
File "/var/folders/l9/94xvjj6n3jgcnbdf57q_drv00000gn/T/ansible_VLttIK/ansible_modlib.zip/ansible/module_utils/ec2.py", line 154, in get_aws_connection_info
TypeError: fail_json() takes exactly 1 argument (2 given)
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "route53_facts"}, "module_stderr": "Traceback (most recent call last):\n File \"/var/folders/l9/94xvjj6n3jgcnbdf57q_drv00000gn/T/ansible_VLttIK/ansible_module_route53_facts.py\", line 436, in <module>\n main()\n File \"/var/folders/l9/94xvjj6n3jgcnbdf57q_drv00000gn/T/ansible_VLttIK/ansible_module_route53_facts.py\", line 414, in main\n region, ec2_url, aws_connect_kwargs = get_aws_connection_info(module, boto3=True)\n File \"/var/folders/l9/94xvjj6n3jgcnbdf57q_drv00000gn/T/ansible_VLttIK/ansible_modlib.zip/ansible/module_utils/ec2.py\", line 154, in get_aws_connection_info\nTypeError: fail_json() takes exactly 1 argument (2 given)\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @./retries/dns_route53.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
```
This works OK:
```
aws route53 list-resource-record-sets --hosted-zone-id MYZONEID
```
and I can use Ansible for all other stuff (creating/deleting zones, creating records) just fine.
| main | facts doesn t let me list zone s records issue type bug report component name facts ansible version ansible version ansible config file users wkoszek r ansible ansible cfg configured module search path default w o overrides configuration ansible configuration file timeout remote user user forks host key checking false display skipped hosts false force color path information log path logs ansible log retry files save path retries roles path roles vault settings vault password file vault creds ssh args o controlmaster auto o controlpersist o controlpath ssh r h p pipelining true os environment darwin wkoszek mba darwin kernel version mon aug pdt root xnu release summary i have a zone id let s call it myzoneid i take it from right column of the zone listing in route it s an existing zone with lots of records i d like to get all of them into the ansible variable so that later i can iterate on it steps to reproduce i run it like this ansible playbook vvvv i inventory playbooks dns yml dns yml hosts localhost connection local gather facts no tasks name list the first resource record sets in a given hosted zone facts query record sets hosted zone id myzoneid max items register recset expected results i d expect this to just work and recset to have something actual results ansible playbook vvvv i inventory playbooks dns yml using users wkoszek r ansible ansible cfg as config file loaded callback default of type stdout playbook dns yml plays in playbooks dns yml play task task path users wkoszek r ansible playbooks dns yml establish local connection for user wkoszek exec bin sh c lang en gb utf lc all en gb utf lc messages en gb utf usr local opt python bin sleep an exception occurred during task execution the full traceback is traceback most recent call last file var folders t ansible vlttik ansible module facts py line in main file var folders t ansible vlttik ansible module facts py line in main region url aws connect kwargs get aws connection info module true 
file var folders t ansible vlttik ansible modlib zip ansible module utils py line in get aws connection info typeerror fail json takes exactly argument given fatal failed changed false failed true invocation module name facts module stderr traceback most recent call last n file var folders t ansible vlttik ansible module facts py line in n main n file var folders t ansible vlttik ansible module facts py line in main n region url aws connect kwargs get aws connection info module true n file var folders t ansible vlttik ansible modlib zip ansible module utils py line in get aws connection info ntypeerror fail json takes exactly argument given n module stdout msg module failure parsed false no more hosts left to retry use limit retries dns retry play recap localhost ok changed unreachable failed this works ok aws list resource record sets hosted zone id myzoneid and i can use ansible for all other stuff creating deleting zones creating records just fine | 1 |
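The `TypeError: fail_json() takes exactly 1 argument (2 given)` in the traceback is itself a secondary bug: `AnsibleModule.fail_json()` accepts only keyword arguments (the "1 argument" the message counts is `self`), so a positional call from `get_aws_connection_info` crashes before the real problem — most likely missing boto3/region configuration — can be reported. A minimal model of the mismatch (the class below is illustrative, not Ansible's implementation):

```python
class FakeModule:
    def fail_json(self, **kwargs):
        # Mirrors the real signature: keyword arguments only.
        raise SystemExit(kwargs.get("msg", "unknown failure"))

m = FakeModule()

try:
    m.fail_json("region must be specified")      # positional call: TypeError,
except TypeError as exc:                         # the real message is lost
    masked = str(exc)

try:
    m.fail_json(msg="region must be specified")  # keyword call: message survives
except SystemExit as exc:
    reported = str(exc)
```

So the `MODULE FAILURE` you see is the error-reporting path itself failing, not the record-set query.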
1,979 | 2,581,180,796 | IssuesEvent | 2015-02-13 22:53:54 | azavea/nyc-trees | https://api.github.com/repos/azavea/nyc-trees | closed | Design not displaying correctly in IE | design user testing | The design isn't displaying correctly in IE11, probably in other versions of IE as well.

| 1.0 | Design not displaying correctly in IE - The design isn't displaying correctly in IE11, probably in other versions of IE as well.

| non_main | design not displaying correctly in ie the design isn t displaying correctly in probably in other versions of ie as well | 0 |
2,778 | 9,962,268,105 | IssuesEvent | 2019-07-07 13:10:34 | Homebrew/homebrew-cask | https://api.github.com/repos/Homebrew/homebrew-cask | closed | Feature request: Skip some casks on `brew cask upgrade` | awaiting maintainer feedback | My routine workflow is to upgrade all casks once a week, or on demand as reported by an [update-notifier](https://github.com/vjt/homebrew-update-notifier). Now say there is one cask where I _don't_ want the new version yet. (This mostly applies to development tools, e.g. `chefdk`, where a new major version might be backwards incompatible with my projects, and I need to migrate them first.)
AFAICT the only way to currently do this is to get the complete list from `brew cask outdated`, and then manually update all casks _except_ the one I would like to skip. Which is a bit cumbersome. Or am I missing something? I mean, sure, shell commands to the rescue, e.g. `brew cask outdated | awk '/chefdk/ { next } { print $1 }' | xargs brew cask upgrade` -- but you have to remember to do this every time, and/or create a place where your "skip" state lives, e.g. by having a `my-brew-cask-upgrade.sh` file which you then use instead of the normal `brew cask upgrade`.
I'm studiously avoiding the word "pin" here, since it seems that means something else to most people here than it means to me. ;) I personally don't care about [being consistent with auto_updates](https://github.com/Homebrew/homebrew-cask/issues/44914#issuecomment-386121355), I just wish I could tell the cask system to "please skip over cask xyz when running `brew cask upgrade`". (And possibly "also don't show xyz in `brew cask outdated` unless `--include-skipped` is given".) | True | Feature request: Skip some casks on `brew cask upgrade` - My routine workflow is to upgrade all casks once a week, or on demand as reported by an [update-notifier](https://github.com/vjt/homebrew-update-notifier). Now say there is one cask where I _don't_ want the new version yet. (This mostly applies to development tools, e.g. `chefdk`, where a new major version might be backwards incompatible with my projects, and I need to migrate them first.)
AFAICT the only way to currently do this is to get the complete list from `brew cask outdated`, and then manually update all casks _except_ the one I would like to skip. Which is a bit cumbersome. Or am I missing something? I mean, sure, shell commands to the rescue, e.g. `brew cask outdated | awk '/chefdk/ { next } { print $1 }' | xargs brew cask upgrade` -- but you have to remember to do this every time, and/or create a place where your "skip" state lives, e.g. by having a `my-brew-cask-upgrade.sh` file which you then use instead of the normal `brew cask upgrade`.
I'm studiously avoiding the word "pin" here, since it seems that means something else to most people here than it means to me. ;) I personally don't care about [being consistent with auto_updates](https://github.com/Homebrew/homebrew-cask/issues/44914#issuecomment-386121355), I just wish I could tell the cask system to "please skip over cask xyz when running `brew cask upgrade`". (And possibly "also don't show xyz in `brew cask outdated` unless `--include-skipped` is given".) | main | feature request skip some casks on brew cask upgrade my routine workflow is to upgrade all casks once a week or on demand as reported by an now say there is one cask where i don t want the new version yet this mostly applies to development tools e g chefdk where a new major version might be backwards incompatible with my projects and i need to migrate them first afaict the only way to currently do this is to get the complete list from brew cask outdated and then manually update all casks except the one i would like to skip which is a bit cumbersome or am i missing something i mean sure shell commands to the rescue e g brew cask outdated awk chefdk next print xargs brew cask upgrade but you have to remember to do this every time and or create a place where your skip state lives e g by having a my brew cask upgrade sh file which you then use instead of the normal brew cask upgrade i m studiously avoiding the word pin here since it seems that means something else to most people here than it means to me i personally don t care about i just wish i could tell the cask system to please skip over cask xyz when running brew cask upgrade and possibly also don t show xyz in brew cask outdated unless include skipped is given | 1 |
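The wrapper idea above can be sketched with the "skip" state pulled out into one place. The outdated listing is simulated here (a real wrapper would capture `brew cask outdated` output, and the cask names/versions are made up):

```python
SKIP = {"chefdk"}  # casks to hold back from upgrades

# Stand-in for the output of `brew cask outdated` (one cask per line):
outdated_output = """\
chefdk 3.0.36
firefox 60.0.1
iterm2 3.1.7
"""

to_upgrade = [
    line.split()[0]
    for line in outdated_output.splitlines()
    if line.split()[0] not in SKIP
]
print(to_upgrade)  # ['firefox', 'iterm2']
# The resulting names would then be handed to `brew cask upgrade`.
```

Keeping `SKIP` in a file (or a small config) would give the persistent "skip" state the paragraph above asks for, until something like `--include-skipped` exists natively.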
344,370 | 24,811,269,102 | IssuesEvent | 2022-10-25 09:36:00 | FuseCP/SolidCP | https://api.github.com/repos/FuseCP/SolidCP | closed | [DS-Heuristics] Problem with deny permissions | documentation | **Describe the bug**
When a deny OU permission is made further up then it will cause the error:
> This access control list is not in canonical form and therefore cannot be modified. at System.Security.AccessControl.CommonAcl.ThrowIfNotCanonical()
**To Reproduce**
Steps to reproduce the behavior:
1. Go to root OU and add a deny permission and apply to all children
2. Create a new Hosted Org in SolidCP
3. See error
**Expected behavior**
Creating the hosted organization should succeed even when a deny permission is inherited from a parent OU.
**Screenshots**

**SolidCP Info**
- SolidCP Version: 1.4.4
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://app.bountysource.com/issues/79895552-ds-heuristics-problem-with-deny-permissions?utm_campaign=plugin&utm_content=tracker%2F150994120&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://app.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F150994120&utm_medium=issues&utm_source=github).
</bountysource-plugin> | 1.0 | [DS-Heuristics] Problem with deny permissions - **Describe the bug**
When a deny permission is set on an OU further up the tree, it causes the error:
> This access control list is not in canonical form and therefore cannot be modified. at System.Security.AccessControl.CommonAcl.ThrowIfNotCanonical()
**To Reproduce**
Steps to reproduce the behavior:
1. Go to root OU and add a deny permission and apply to all children
2. Create a new Hosted Org in SolidCP
3. See error
**Expected behavior**
Creating the hosted organization should succeed even when a deny permission is inherited from a parent OU.
**Screenshots**

**SolidCP Info**
- SolidCP Version: 1.4.4
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://app.bountysource.com/issues/79895552-ds-heuristics-problem-with-deny-permissions?utm_campaign=plugin&utm_content=tracker%2F150994120&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://app.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F150994120&utm_medium=issues&utm_source=github).
</bountysource-plugin> | non_main | problem with deny permissions describe the bug when a deny ou permission is made further up then it will cause the error this access control list is not in canonical form and therefore cannot be modified at system security accesscontrol commonacl throwifnotcanonical to reproduce steps to reproduce the behavior go to root ou and add a deny permission and apply to all children create a new hosted org in solidcp see error expected behavior a clear and concise description of what you expected to happen screenshots solidcp info solidcp version want to back this issue we accept bounties via | 0 |
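The "not in canonical form" message comes from Windows DACL ordering rules: explicit ACEs must precede inherited ones, and deny entries must precede allow entries within a group. A deny ACE applied at the root OU and inherited downward can therefore leave a child object's ACL ordered in a way .NET's `CommonAcl` refuses to modify. A deliberately simplified model of the canonical-order check (an illustration of the rule, not SolidCP's or .NET's actual code):

```python
# Each ACE is (kind, inherited). Simplified canonical ranking:
# explicit deny < explicit allow < inherited deny < inherited allow.
RANK = {
    ("deny", False): 0,
    ("allow", False): 1,
    ("deny", True): 2,
    ("allow", True): 3,
}

def is_canonical(aces):
    ranks = [RANK[ace] for ace in aces]
    return ranks == sorted(ranks)

print(is_canonical([("deny", False), ("allow", False), ("allow", True)]))  # True
print(is_canonical([("allow", True), ("deny", False)]))  # False: explicit deny
#                                                          after an inherited ACE
```

Under this model, any code path that appends an explicit ACE after inherited entries (or lets a tool write them out of order) produces exactly the exception shown in the screenshot.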
34,972 | 7,495,149,048 | IssuesEvent | 2018-04-07 17:41:42 | scipy/scipy | https://api.github.com/repos/scipy/scipy | closed | bug for log1p on csr_matrix | defect scipy.sparse.linalg | I'm trying to use log1p function on csr_matrix. I use two different ways to construct the same csr_matrix but got different results after applying log1p.
### Create the matrix by directly using a python list:
```
>>> mat_1_raw = [[2, 2, 0, 0, 0], [0, 0, 1, 1, 1]]
>>> mat_1 = csr_matrix(mat_1_raw)
>>> mat_1.log1p().toarray()
array([[ 1.09861229, 1.09861229, 0. , 0. , 0. ],
[ 0. , 0. , 0.69314718, 0.69314718, 0.69314718]])
```
### Create the matrix by using indices:
```
>>> indices = [0, 1, 0, 1, 2, 3, 4]
>>> indptr = [0, 4, 7]
>>> data = [1, 1, 1, 1, 1, 1, 1]
>>> mat_2 = csr_matrix((data, indices, indptr), shape=(2, 5))
>>> mat_2.toarray()
array([[ 2., 2., 0., 0., 0.],
[ 0., 0., 1., 1., 1.]])
>>> mat_2.log1p().toarray()
array([[ 1.38629436, 1.38629436, 0. , 0. , 0. ],
[ 0. , 0. , 0.69314718, 0.69314718, 0.69314718]])
```
Apparently in the second case two separate 1s are stored for the same entry in the final csr_matrix (the duplicates are not summed before the operation), so the corresponding entry in the log1p result is log(1+1) * 2 = 1.39, where it should be log(1 + 2*1) = 1.099.
| 1.0 | bug for log1p on csr_matrix - I'm trying to use log1p function on csr_matrix. I use two different ways to construct the same csr_matrix but got different results after applying log1p.
### Create the matrix by directly using a Python list:
```
>>> mat_1_raw = [[2, 2, 0, 0, 0], [0, 0, 1, 1, 1]]
>>> mat_1 = csr_matrix(mat_1_raw)
>>> mat_1.log1p().toarray()
array([[ 1.09861229, 1.09861229, 0. , 0. , 0. ],
[ 0. , 0. , 0.69314718, 0.69314718, 0.69314718]])
```
### Create the matrix by using indices:
```
>>> indices = [0, 1, 0, 1, 2, 3, 4]
>>> indptr = [0, 4, 7]
>>> data = [1, 1, 1, 1, 1, 1, 1]
>>> mat_2 = csr_matrix((data, indices, indptr), shape=(2, 5))
>>> mat_2.toarray()
array([[ 2., 2., 0., 0., 0.],
[ 0., 0., 1., 1., 1.]])
>>> mat_2.log1p().toarray()
array([[ 1.38629436, 1.38629436, 0. , 0. , 0. ],
[ 0. , 0. , 0.69314718, 0.69314718, 0.69314718]])
```
Apparently in the second case two separate 1s are stored for the same entry in the final csr_matrix (the duplicates are not summed before the operation), so the corresponding entry in the log1p result is log(1+1) * 2 = 1.39, where it should be log(1 + 2*1) = 1.099.
| non_main | bug for on csr matrix i m trying to use function on csr matrix i use two different ways to construct the same csr matrix but got different results after applying create the matrix by directly using a python list mat raw mat csr matrix mat raw mat toarray array create the matrix by using indices indices indptr data mat csr matrix data indices indptr shape mat toarray array mat toarray array apparently in the second case there are two separate and two separate added to the same entry in the final csr matrix and the corresponding entry in the result is log where it should be log | 0 |
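What's happening above: unary element-wise operations like `log1p` act on the matrix's stored `data` array, and duplicate entries only get summed when the matrix is materialized, so each stored `1` is transformed separately. Calling `mat_2.sum_duplicates()` (which canonicalizes the matrix in place) before `log1p()` should give the expected result. The arithmetic behind the two code paths can be checked with the standard library alone (a minimal model, no scipy needed):

```python
import math

stored = [1.0, 1.0]  # two stored values at the same (row, col) position

# Path taken with duplicates intact: transform each stored element,
# then the duplicates are summed when the matrix is densified.
per_element = sum(math.log1p(v) for v in stored)   # 2 * log(2) ~= 1.3863

# Path after sum_duplicates(): combine first, then transform once.
canonical = math.log1p(sum(stored))                # log(3)     ~= 1.0986

print(round(per_element, 4), round(canonical, 4))  # 1.3863 1.0986
```

This reproduces exactly the 1.386-vs-1.099 discrepancy reported for entry (0, 0).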
402 | 3,455,953,023 | IssuesEvent | 2015-12-17 22:28:16 | tgstation/-tg-station | https://api.github.com/repos/tgstation/-tg-station | closed | NanoUI JSRender not documented. | Maintainability - Hinders improvements Not a bug | ### Problem
@Faerdan implemented NanoUI and vanished before documenting the template markup.
#### Details
One day I decided I would try to learn how NanoUI works and found out it uses [JSRender](http://www.jsviews.com/#jsrapi) for its markup.
I saw how to basically use it, but I got confused when I saw Faerdan use a caret ("^") between the first and second opening curly braces ("{^{") and didn't know what it meant, since it is not documented in the official JSRender documentation. (In all of their examples, the usage is "{{", not "{^{".)
I have thus deduced that it is custom syntax and I have absolutely no idea what it means, and when I would want to, and not want to use it.
He was going to write a guide to templates in [/nano/templates/TemplatesGuide.txt](https://github.com/tgstation/-tg-station/blob/63ec19e9eda363863aa59db2bd6bc28d006cd0a3/nano/templates/TemplatesGuide.txt), but didn't even start it and simply linked to JSRender's documentation before disappearing, never to be seen again.
### Solution
The simple solution is for someone to find out the custom syntax and document it in [/nano/templates/TemplatesGuide.txt](https://github.com/tgstation/-tg-station/blob/master/nano/templates/TemplatesGuide.txt); then document the rest of the non-custom syntax which can be found in the JSRender [api documentation](http://www.jsviews.com/#jsrapi).
5,530 | 27,641,851,067 | IssuesEvent | 2023-03-10 18:47:19 | deislabs/spiderlightning | https://api.github.com/repos/deislabs/spiderlightning | opened | slight run with 0 capabilities | 🐛 bug 🚧 maintainer issue | **Description of the bug**
When slight is run using a slightfile with 0 capabilities, slight will error out.
**To Reproduce**
Create a slightfile with 0 capabilities and run the app.
11,285 | 13,235,781,480 | IssuesEvent | 2020-08-18 18:38:30 | gudmdharalds-a8c/testing123 | https://api.github.com/repos/gudmdharalds-a8c/testing123 | closed | PHP Upgrade: Compatibility issues found in dir1/dirb/diry | PHP 7.4 Compatibility PHP Compatibility | The following issues were found when scanning branch <code>abranch12</code> for compatibility problems:
* <b>Error in dir1/dirb/diry/y.php</b>: Extension 'mysql_' is deprecated since PHP 5.5 and removed since PHP 7.0; Use mysqli instead https://github.com/gudmdharalds-a8c/testing123/blob/b99a028e21f490f459f7095329fe4933e8643e79/dir1/dirb/diry/y.php#L5
This is an automated report.
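The report above flags removed `mysql_*` calls file-by-file. As a toy illustration of the kind of scan such a bot performs — not the actual PHP compatibility tooling; the `scan_php` helper and the message format are invented here — a few lines of Python suffice:

```python
import re

# Toy pattern: any call through the removed mysql_* extension.
DEPRECATED = re.compile(r"\bmysql_\w+\s*\(")

def scan_php(source, filename="dir1/dirb/diry/y.php"):
    """Report each line that still calls the removed mysql_* API."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if DEPRECATED.search(line):
            findings.append(
                f"Error in {filename}: extension 'mysql_' is removed since "
                f"PHP 7.0 (line {lineno}); use mysqli instead"
            )
    return findings

sample = "<?php\n$db = mysql_connect('localhost');\n"
findings = scan_php(sample)
# -> one finding, pointing at line 2 of the sample
```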
112,701 | 9,597,269,901 | IssuesEvent | 2019-05-09 20:50:20 | saltstack/salt | https://api.github.com/repos/saltstack/salt | closed | integration.modules.test_dockermod.DockerCallTestCase.test_docker_call | 2019.2.2 Test Failure | 2019.2.1 failed [salt-opensuse-15-py3](https://jenkinsci.saltstack.com/job/2019.2.1/job/salt-opensuse-15-py3/23/testReport/junit/integration.modules.test_dockermod/DockerCallTestCase/test_docker_call)
---
```
Traceback (most recent call last):
File "/tmp/kitchen/testing/tests/integration/modules/test_dockermod.py", line 85, in test_docker_call
assert ret is True
AssertionError
```
4,576 | 23,773,392,656 | IssuesEvent | 2022-09-01 18:26:10 | pyOpenSci/software-review | https://api.github.com/repos/pyOpenSci/software-review | closed | humpi: The python code for the Hurricane Maximum Potential Intensity (HuMPI) model | 1/editor-checks ⌛ pending-maintainer-response New Submission! | Submitting Author: Albenis Pérez-Alarcón (@apalarcon)
Package Name: HuMPI: Hurricane Maximum Potential Intensity model
One-Line Description of Package: https://github.com/apalarcon/HuMPI-master
Repository Link: https://github.com/apalarcon/HuMPI-master
Version submitted: V1.0
Editor: TBD
Reviewer 1: TBD
Reviewer 2: TBD
Archive: TBD
Version accepted: TBD
---
## Description
- Include a brief paragraph describing what your package does:
The potential intensity (PI) of tropical cyclones (TCs) comprises the maximum surface wind speed and minimum central pressure limits found by representing the storm as a thermal heat engine. The PI theory proposed by Emanuel (1986) (hereafter E-PI) has been widely accepted as the upper bound for TC intensity. In the E-PI theory, the dynamic and thermodynamic processes of the TC are described as an energy cycle like a Carnot engine, absorbing heat from the ocean and giving it up at the tropopause. Nevertheless, the "superintensity" phenomenon, which occurs when the observed or modelled TC intensity is higher than the E-PI prediction, is a research challenge nowadays. In a recent attempt to avoid the "superintensity" phenomenon, a new hurricane maximum potential intensity (HuMPI) model based on the E-PI theory was proposed. HuMPI describes the TC thermo-energetic cycle as a generalized Carnot cycle and includes a TC model for the atmospheric boundary layer. This work aims to implement the HuMPI model formulation in Python.
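The heat-engine view above can be made concrete with the classic E-PI-style estimate of the maximum wind speed, v_max² = (Ck/Cd) · (Ts − To)/To · Δk, combining the surface exchange-coefficient ratio, a Carnot-like thermodynamic efficiency, and the air–sea enthalpy disequilibrium. The sketch below is for illustration only, with invented sample values — it is not the HuMPI code itself, which modifies this formulation:

```python
import math

def epi_max_wind(sst_k, outflow_k, ck_over_cd, delta_k):
    """Classic Emanuel-style potential-intensity estimate (m/s).

    sst_k      -- sea surface temperature [K]
    outflow_k  -- outflow (tropopause) temperature [K]
    ck_over_cd -- ratio of enthalpy to drag exchange coefficients
    delta_k    -- air-sea enthalpy disequilibrium [J/kg]
    """
    efficiency = (sst_k - outflow_k) / outflow_k  # Carnot-like efficiency
    return math.sqrt(ck_over_cd * efficiency * delta_k)

# Invented but plausible tropical values: ~67 m/s, a major hurricane.
v_max = epi_max_wind(sst_k=300.0, outflow_k=200.0, ck_over_cd=0.9, delta_k=1.0e4)
```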
## Scope
- Please indicate which [category or categories][PackageCategories] this package falls under:
- [ ] Data retrieval
- [ ] Data extraction
- [ ] Data munging
- [ ] Data deposition
- [x] Reproducibility
- [ ] Geospatial
- [x] Education
- [ ] Data visualization*
\* Please fill out a pre-submission inquiry before submitting a data visualization package. For more info, see [notes on categories][NotesOnCategories] of our guidebook.
- Explain how and why the package falls under these categories (briefly, 1-2 sentences):
humpi is the Python implementation of the HuMPI model, making it easy to calculate the potential intensity of tropical cyclones, both to check and support the results of already published research and for academic purposes
- Who is the target audience and what are scientific applications of this package?
humpi is aimed primarily at researchers in atmospheric sciences, climate change, and risk studies, and allows the calculation of the potential intensity of tropical cyclones.
- Are there other Python packages that accomplish the same thing? If so, how does yours differ?
There is pyPI, another Python package similar to HuMPI, but the two differ in the physical approximations that underpin the model; that is, the way the potential intensity is calculated differs between the two modules.
- If you made a pre-submission enquiry, please paste the link to the corresponding issue, forum post, or other discussion, or `@tag` the editor you contacted:
## Technical checks
For details about the pyOpenSci packaging requirements, see our [packaging guide][PackagingGuide]. Confirm each of the following by checking the box. This package:
- [x] does not violate the Terms of Service of any service it interacts with.
- [x] has an [OSI approved license][OsiApprovedLicense].
- [x] contains a README with instructions for installing the development version.
- [x] includes documentation with examples for all functions.
- [x] contains a vignette with examples of its essential functions and uses.
- [ ] has a test suite.
- [ ] has continuous integration, such as Travis CI, AppVeyor, CircleCI, and/or others.
## Publication options
- [ ] Do you wish to automatically submit to the [Journal of Open Source Software][JournalOfOpenSourceSoftware]? If so:
<details>
<summary>JOSS Checks</summary>
- [ ] The package has an **obvious research application** according to JOSS's definition in their [submission requirements][JossSubmissionRequirements]. Be aware that completing the pyOpenSci review process **does not** guarantee acceptance to JOSS. Be sure to read their submission requirements (linked above) if you are interested in submitting to JOSS.
- [ ] The package is not a "minor utility" as defined by JOSS's [submission requirements][JossSubmissionRequirements]: "Minor ‘utility’ packages, including ‘thin’ API clients, are not acceptable." pyOpenSci welcomes these packages under "Data Retrieval", but JOSS has slightly different criteria.
- [ ] The package contains a `paper.md` matching [JOSS's requirements][JossPaperRequirements] with a high-level description in the package root or in `inst/`.
- [ ] The package is deposited in a long-term repository with the DOI:
*Note: Do not submit your package separately to JOSS*
</details>
## Are you OK with Reviewers Submitting Issues and/or pull requests to your Repo Directly?
This option will allow reviewers to open smaller issues that can then be linked to PR's rather than submitting a more dense text based review. It will also allow you to demonstrate addressing the issue via PR links.
- [x] Yes I am OK with reviewers submitting requested changes as issues to my repo. Reviewers will then link to the issues in their submitted review.
## Code of conduct
- [x] I agree to abide by [pyOpenSci's Code of Conduct][PyOpenSciCodeOfConduct] during the review process and in maintaining my package should it be accepted.
**P.S.** *Have feedback/comments about our review process? Leave a comment [here][Comments]
## Editor and Review Templates
[Editor and review templates can be found here][Templates]
[PackagingGuide]: https://www.pyopensci.org/contributing-guide/authoring/index.html#packaging-guide
[PackageCategories]: https://www.pyopensci.org/contributing-guide/open-source-software-peer-review/aims-and-scope.html?highlight=data#package-categories
[NotesOnCategories]: https://www.pyopensci.org/contributing-guide/open-source-software-peer-review/aims-and-scope.html?highlight=data#notes-on-categories
[JournalOfOpenSourceSoftware]: http://joss.theoj.org/
[JossSubmissionRequirements]: https://joss.readthedocs.io/en/latest/submitting.html#submission-requirements
[JossPaperRequirements]: https://joss.readthedocs.io/en/latest/submitting.html#what-should-my-paper-contain
[PyOpenSciCodeOfConduct]: https://www.pyopensci.org/contributing-guide/open-source-software-peer-review/code-of-conduct.html?highlight=code%20conduct
[OsiApprovedLicense]: https://opensource.org/licenses
[Templates]: https://www.pyopensci.org/contributing-guide/appendices/templates.html
[Comments]: https://github.com/pyOpenSci/governance/issues/8
20,207 | 11,418,640,313 | IssuesEvent | 2020-02-03 05:25:05 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Updating Node Pools | Pri1 container-service/svc cxp product-question triaged | > When you use an Azure Resource Manager template to create and manage resources, you can typically update the settings in your template and redeploy to update the resource. With node pools in AKS, the initial node pool profile can't be updated once the AKS cluster has been created. This behavior means that you can't update an existing Resource Manager template, make a change to the node pools, and redeploy. Instead, you must create a separate Resource Manager template that updates only the node pools for an existing AKS cluster.
(https://docs.microsoft.com/en-us/azure/aks/use-multiple-node-pools#manage-node-pools-using-a-resource-manager-template)
Just wanted to clarify. If I create a new AKS cluster with a default/dummy Node Pool, can I then use an ARM template to delete this Node Pool and replace it with the Node Pool(s) I need? Or do I need to create the first Node Pool with the required settings (any of name, node type, node count) and then use an ARM template to (only) update existing and/or add new node pools?
I have tried creating a new AKS cluster (using ARM templates) but it fails without `agentPoolProfiles`.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 8c420b2c-e868-9a6a-49fb-913f2cd007ef
* Version Independent ID: 5c543e08-7d75-21eb-9510-22e2cd56d16b
* Content: [Use multiple node pools in Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/use-multiple-node-pools#feedback)
* Content Source: [articles/aks/use-multiple-node-pools.md](https://github.com/Microsoft/azure-docs/blob/master/articles/aks/use-multiple-node-pools.md)
* Service: **container-service**
* GitHub Login: @mlearned
* Microsoft Alias: **mlearned**
3,886 | 17,241,647,892 | IssuesEvent | 2021-07-21 00:00:12 | Kotlin/kotlin-jupyter | https://api.github.com/repos/Kotlin/kotlin-jupyter | closed | Update Kravis and Krangl library descriptors | library-descriptors maintaining | Kravis: `0.8`, https://github.com/holgerbrandl/kravis/blob/master/CHANGES.md#v08
Krangl: `0.17`, https://github.com/holgerbrandl/krangl/blob/master/CHANGES.md#v017
May be done after the resolution of #254
5,085 | 25,998,346,362 | IssuesEvent | 2022-12-20 13:28:10 | software-mansion/react-native-reanimated | https://api.github.com/repos/software-mansion/react-native-reanimated | opened | ☂️ Deadlock/ANR in performOperations | Platform: Android Platform: iOS Bug Maintainer issue | ### Description
This is an umbrella issue for ANRs/deadlocks on Android/iOS in NodesManager.performOperations.
The bug was introduced in #1215.
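For readers unfamiliar with the failure mode: the classic two-thread deadlock arises when two threads acquire the same pair of locks in opposite orders, and the standard fix is to impose one global acquisition order. The Python sketch below illustrates only that general pattern — it is unrelated to Reanimated's actual internals:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
results = []

def worker(first, second, name):
    # Fix for lock-order inversion: always acquire in one global order
    # (here: by object id), no matter which lock the caller names first.
    lo, hi = sorted((first, second), key=id)
    with lo:
        with hi:
            results.append(name)

# The two threads name the locks in opposite orders -- the pattern that
# can deadlock when each thread grabs "its" first lock and waits forever.
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "t1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
```

With the global ordering in place both threads always finish; removing the `sorted` step reintroduces the risk of a hang (an ANR, in Android terms).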
### Steps to reproduce
We don't have a repro yet, but it needs to use a modal or a datetime picker, as well as animate layout props using Reanimated.
### Snack or a link to a repository
work in progress
### Reanimated version
>= 2.0.0
### React Native version
n/d
### Platforms
Android, iOS
### JavaScript runtime
None
### Workflow
None
### Architecture
None
### Build type
None
### Device
None
### Device model
_No response_
### Acknowledgements
Yes
1,853 | 6,577,396,614 | IssuesEvent | 2017-09-12 00:37:26 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Support addgroup in group-module | affects_2.0 feature_idea waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
group module
##### ANSIBLE VERSION
```
2.0.2.0
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Target: Alpine Linux
##### SUMMARY
'groupadd' (provided by shadow) is currently in testing only. Ansible should use 'addgroup' (provided by Busybox)
##### STEPS TO REPRODUCE
Install Alpine Linux
```
- group: name=blafoo state=present
```
##### EXPECTED RESULTS
Correct managing of group
##### ACTUAL RESULTS
Failed because binary groupadd not found
| True | Support addgroup in group-module - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
group module
##### ANSIBLE VERSION
```
2.0.2.0
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Target: Alpine Linux
##### SUMMARY
'groupadd' (provided by shadow) is currently in testing only. Ansible should use 'addgroup' (provided by Busybox)
##### STEPS TO REPRODUCE
Install Alpine Linux
```
- group: name=blafoo state=present
```
##### EXPECTED RESULTS
Correct managing of group
##### ACTUAL RESULTS
Failed because binary groupadd not found
| main | support addgroup in group module issue type feature idea component name group module ansible version configuration os environment target alpine linux summary groupadd provided by shadow is currently in testing only ansible should use addgroup provided by busybox steps to reproduce install alpine linux group name blafoo state present expected results correct managing of group actual results failed because binary groupadd not found | 1 |
13,447 | 23,141,673,877 | IssuesEvent | 2022-07-28 19:08:21 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | opened | commitMessagePrefix should not override semantic commits | type:bug status:requirements priority-5-triage | ### How are you running Renovate?
Self-hosted
### If you're self-hosting Renovate, tell us what version of Renovate you run.
32.132.0
### Please select which platform you are using if self-hosting.
Bitbucket Server
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
In its current state, [this function](https://github.com/renovatebot/renovate/blob/a3d12350328b04d0c0043095cd9d6ceacd32a8d7/lib/workers/repository/model/commit-message-factory.ts#L45-L50) overrides the semantic commit as soon as there is a prefix defined.
In our case, we have two sorts of repositories with different commit regexes configured: one with and one without semantic commits. Both of them need to have a (jira) issue defined within their commit message, so we allow each team (who have a distinct global configuration for all of their repositories) to configure a prefix containing an issue for their commitMessagePrefix for their non-semantic repositories, as well as a scope containing an issue for their semantic repositories. Having both of those options active breaks the onboarding commit for semantic repositories, as the non-semantic version will always win.
I would suggest an option (e.g. preferSemanticCommit) that allows to override this behaviour.
### Relevant debug logs
_No response_
### Have you created a minimal reproduction repository?
No reproduction repository | 1.0 | commitMessagePrefix should not override semantic commits - ### How are you running Renovate?
Self-hosted
### If you're self-hosting Renovate, tell us what version of Renovate you run.
32.132.0
### Please select which platform you are using if self-hosting.
Bitbucket Server
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
In its current state, [this function](https://github.com/renovatebot/renovate/blob/a3d12350328b04d0c0043095cd9d6ceacd32a8d7/lib/workers/repository/model/commit-message-factory.ts#L45-L50) overrides the semantic commit as soon as there is a prefix defined.
In our case, we have two sorts of repositories with different commit regexes configured: one with and one without semantic commits. Both of them need to have a (jira) issue defined within their commit message, so we allow each team (who have a distinct global configuration for all of their repositories) to configure a prefix containing an issue for their commitMessagePrefix for their non-semantic repositories, as well as a scope containing an issue for their semantic repositories. Having both of those options active breaks the onboarding commit for semantic repositories, as the non-semantic version will always win.
I would suggest an option (e.g. preferSemanticCommit) that allows to override this behaviour.
### Relevant debug logs
_No response_
### Have you created a minimal reproduction repository?
No reproduction repository | non_main | commitmessageprefix should not override semantic commits how are you running renovate self hosted if you re self hosting renovate tell us what version of renovate you run please select which platform you are using if self hosting bitbucket server if you re self hosting renovate tell us what version of the platform you run no response was this something which used to work for you and then stopped i never saw this working describe the bug in its current state overrides the semantic commit as soon as there is a prefix defined in our case we have two sorts of repositories with different commit regexes configured one with and one without semantic commits both of them need to have a jira issue defined within their commit message so we allow each team who have a distinct global configuration for all of their repositories to configure a prefix containing an issue for their commitmessageprefix for their non semantic repositories as well as a scope containing an issue for their semantic repositories having both of those options active breaks the onboarding commit for semantic repositories as the non semantic version will always win i would suggest an option e g prefersemanticcommit that allows to override this behaviour relevant debug logs no response have you created a minimal reproduction repository no reproduction repository | 0 |
4,186 | 20,246,049,863 | IssuesEvent | 2022-02-14 13:50:57 | Lissy93/dashy | https://api.github.com/repos/Lissy93/dashy | closed | [FEEDBACK] Allow useProxy: true to be a global widget setting | 🌈 Feedback 👤 Awaiting Maintainer Response | I'm getting into widgets now, and they had an issue connecting when on my reverse proxy, but worked fine when directly going to the port. I saw in troubleshooting this can be solved with `useProxy: true`, which did resolve my issue. I'm now copy/pasting it to all of my widgets which pull from self-hosted services.
It'd be nice if `useProxy: true` could be a global widget setting so I don't need to copy it on each widget, since my configuration seems to require it on all.
I guess external widgets may not require it, not sure if there's fallout of this being set on those as well? | True | [FEEDBACK] Allow useProxy: true to be a global widget setting - I'm getting into widgets now, and they had an issue connecting when on my reverse proxy, but worked fine when directly going to the port. I saw in troubleshooting this can be solved with `useProxy: true`, which did resolve my issue. I'm now copy/pasting it to all of my widgets which pull from self-hosted services.
It'd be nice if `useProxy: true` could be a global widget setting so I don't need to copy it on each widget, since my configuration seems to require it on all.
I guess external widgets may not require it, not sure if there's fallout of this being set on those as well? | main | allow useproxy true to be a global widget setting i m getting into widgets now and they had an issue connecting when on my reverse proxy but worked fine when directly going to the port i saw in troubleshooting this can be solved with useproxy true which did resolve my issue i m now copy pasting it to all of my widgets which pull from self hosted services it d be nice if useproxy true could be a global widget setting so i don t need to copy it on each widget since my configuration seems to require it on all i guess external widgets may not require it not sure if there s fallout of this being set on those as well | 1 |
791,693 | 27,872,649,502 | IssuesEvent | 2023-03-21 14:20:57 | mozilla/addons-server | https://api.github.com/repos/mozilla/addons-server | closed | Only apply `needs_human_review` to non-public listed versions under certain conditions | priority:p1 component:reviewer_tools | ### Describe the problem and steps to reproduce it:
This is a followup of #20422.
It states: "Listed addons with non-public versions should still be added to the Notable group once the user threshold is reached.",
**and `needs_human_review` should be set only if:**
the version has been signed and not been reviewed by a human (manually approved, auto-approval confirmed, rejected, blocked).
### Steps to reproduce the regression of #20422:
1. Submit a listed version.
2. Let it auto-approve and confirm approval, or manually approve it.
3. Disable or delete the add-on.
4. Set the usage above the threshold for Notable add-ons.
### What happened?
The latest version (actually `current_version` if I am not mistaken) gets the `needs_human_review` flag, which makes it get a due date, which in turn makes the add-on appear in the review queue.
### What did you expect to happen?
The version should not receive `needs_human_review`, and thus no due date and therefore not appear in the review queue. | 1.0 | Only apply `needs_human_review` to non-public listed versions under certain conditions - ### Describe the problem and steps to reproduce it:
This is a followup of #20422.
It states: "Listed addons with non-public versions should still be added to the Notable group once the user threshold is reached.",
**and `needs_human_review` should be set only if:**
the version has been signed and not been reviewed by a human (manually approved, auto-approval confirmed, rejected, blocked).
### Steps to reproduce the regression of #20422:
1. Submit a listed version.
2. Let it auto-approve and confirm approval, or manually approve it.
3. Disable or delete the add-on.
4. Set the usage above the threshold for Notable add-ons.
### What happened?
The latest version (actually `current_version` if I am not mistaken) gets the `needs_human_review` flag, which makes it get a due date, which in turn makes the add-on appear in the review queue.
### What did you expect to happen?
The version should not receive `needs_human_review`, and thus no due date and therefore not appear in the review queue. | non_main | only apply needs human review to non public listed versions under certain conditions describe the problem and steps to reproduce it this is a followup of it states listed addons with non public versions should still be added to the notable group once the user threshold is reached and needs human review should be set only if the version has been signed and not been reviewed by a human manually approved auto approval confirmed rejected blocked steps to reproduce the regression of submit a listed version let it auto approve and confirm approval or manually approve it disable or delete the add on set the usage above the threshold for notable add ons what happened the latest version actually current version if i am not mistaken gets the needs human review flag which makes it get a due date which in turn makes the add on appear in the review queue what did you expect to happen the version should not receive needs human review and thus no due date and therefore not appear in the review queue | 0 |
75,233 | 9,214,850,816 | IssuesEvent | 2019-03-10 23:10:28 | ServiceInnovationLab/PresenceChecker | https://api.github.com/repos/ServiceInnovationLab/PresenceChecker | closed | Presentation on call-outs / implications of the current solution | design development review | As the citizenship team
We want to be able to put together a presentation that calls out any implications with the process and the demo data that we've reviewed.
A / C
- [x] Should show any implications, pain points of the current process
- [x] Should show analysis of demo data
requires #62 | 1.0 | Presentation on call-outs / implications of the current solution - As the citizenship team
We want to be able to put together a presentation that calls out any implications with the process and the demo data that we've reviewed.
A / C
- [x] Should show any implications, pain points of the current process
- [x] Should show analysis of demo data
requires #62 | non_main | presentation on call outs implications of the current solution as the citizenship team we want to be able to put together a presentation that calls out any implications with the process and the demo data that we ve reviewed a c should show any implications pain points of the current process should show analysis of demo data requires | 0 |
4,907 | 25,224,623,043 | IssuesEvent | 2022-11-14 15:09:27 | precice/precice | https://api.github.com/repos/precice/precice | closed | Refactor RTree into query Package | maintainability | The implementation of the spacial index trees has become a major part of the Mesh and Mapping packages. However, the use of the `boost.geometry` rtree is an implementation detail and should not spread through our code-base. Emitted errors are often highly complicated and reading the code requires knowledge of some advances concepts.
To improve readability and the overall structure of the code, we should:
1. refactor the rtree from the `mesh` into the `query` package.
2. isolate it's use-cases and define clear interfaces to simplify the mapping codes. This is especially important for the cascaded probing in the nearest-projection mapping.
3. replace the old query functionality with an rtree-based implementation ( related to #243 ) | True | Refactor RTree into query Package - The implementation of the spacial index trees has become a major part of the Mesh and Mapping packages. However, the use of the `boost.geometry` rtree is an implementation detail and should not spread through our code-base. Emitted errors are often highly complicated and reading the code requires knowledge of some advances concepts.
To improve readability and the overall structure of the code, we should:
1. refactor the rtree from the `mesh` into the `query` package.
2. isolate it's use-cases and define clear interfaces to simplify the mapping codes. This is especially important for the cascaded probing in the nearest-projection mapping.
3. replace the old query functionality with an rtree-based implementation ( related to #243 ) | main | refactor rtree into query package the implementation of the spacial index trees has become a major part of the mesh and mapping packages however the use of the boost geometry rtree is an implementation detail and should not spread through our code base emitted errors are often highly complicated and reading the code requires knowledge of some advances concepts to improve readability and the overall structure of the code we should refactor the rtree from the mesh into the query package isolate it s use cases and define clear interfaces to simplify the mapping codes this is especially important for the cascaded probing in the nearest projection mapping replace the old query functionality with an rtree based implementation related to | 1 |
528,707 | 15,373,063,370 | IssuesEvent | 2021-03-02 12:06:02 | AbsaOSS/enceladus | https://api.github.com/repos/AbsaOSS/enceladus | opened | Add TLS arguments for Mongo connections from Menas | feature priority: undecided under discussion | ## Background
For MongoDB using TLS need extra parametrs: --tls --tlsCAFile test-ca.crt
Unfortunately add in URL "tlsCAFile=PathToCA.crt" doesn't work
## Feature
A description of the requested feature.
## Example [Optional]
Please add extra parameter like -Dmenas.mongo.connection.extraArgs="--tls --tlsCAFile PathToCA.crt"
| 1.0 | Add TLS arguments for Mongo connections from Menas - ## Background
For MongoDB using TLS need extra parametrs: --tls --tlsCAFile test-ca.crt
Unfortunately add in URL "tlsCAFile=PathToCA.crt" doesn't work
## Feature
A description of the requested feature.
## Example [Optional]
Please add extra parameter like -Dmenas.mongo.connection.extraArgs="--tls --tlsCAFile PathToCA.crt"
| non_main | add tls arguments for mongo connections from menas background for mongodb using tls need extra parametrs tls tlscafile test ca crt unfortunately add in url tlscafile pathtoca crt doesn t work feature a description of the requested feature example please add extra parameter like dmenas mongo connection extraargs tls tlscafile pathtoca crt | 0 |
134,463 | 19,210,900,975 | IssuesEvent | 2021-12-07 01:43:00 | tiki/app | https://api.github.com/repos/tiki/app | closed | Design questionnaire / ratings pop up | design | As a product manager, I want to know:
- user satisfaction
- why are users using the app | 1.0 | Design questionnaire / ratings pop up - As a product manager, I want to know:
- user satisfaction
- why are users using the app | non_main | design questionnaire ratings pop up as a product manager i want to know user satisfaction why are users using the app | 0 |
4,139 | 19,663,999,636 | IssuesEvent | 2022-01-10 20:11:29 | VA-Explorer/va_explorer | https://api.github.com/repos/VA-Explorer/va_explorer | opened | Change copyright/ year references to be current year | Type: Maintainance | **What is the expected state?**
Date references in project are either updated or set to auto-update
**What is the actual state?**
Some dates in project reference 2021.
**Relevant context**
`README.md`
`va_explorer/templates/base.html`
`LICENSE`?
more?
| True | Change copyright/ year references to be current year - **What is the expected state?**
Date references in project are either updated or set to auto-update
**What is the actual state?**
Some dates in project reference 2021.
**Relevant context**
`README.md`
`va_explorer/templates/base.html`
`LICENSE`?
more?
| main | change copyright year references to be current year what is the expected state date references in project are either updated or set to auto update what is the actual state some dates in project reference relevant context readme md va explorer templates base html license more | 1 |
4,057 | 18,981,624,335 | IssuesEvent | 2021-11-21 01:07:30 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | removing CloudWatchLambdaInsightsExecutionRolePolicy causes an error | type/bug stage/bug-repro maintainer/need-followup | <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
<!-- Briefly describe the bug you are facing.-->
All I did was remove `CloudWatchLambdaInsightsExecutionRolePolicy` When I tried to deploy, I got this:
```
Waiting for changeset to be created..
Error: Failed to create changeset for the stack: myapp-dev, ex: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state: For expression "Status" we matched expected path: "FAILED" Status: FAILED. Reason: Template error: every Fn::Join object requires two parameters, (1) a string delimiter and (2) a list of strings to be joined or a function that returns a list of strings (such as Fn::GetAZs) to be joined.
```
I did not change anything related to `Fn::Join` or `!Join`, simply removed `CloudWatchLambdaInsightsExecutionRolePolicy` from all my functions. Deployment was successful prior to this change, and the error message is not specific.
### Steps to reproduce:
<!-- Provide detailed steps to replicate the bug, including steps from third party tools (CDK, etc.) -->
1. Create 10 functions with `CloudWatchLambdaInsightsExecutionRolePolicy` and make some of them have `LambdaInvokePolicy`.
2. Deploy.
3. Remove `CloudWatchLambdaInsightsExecutionRolePolicy`
4. Deploy.
### Observed result:
<!-- Please provide command output with `--debug` flag set. -->
It results in a vague error.
### Expected result:
<!-- Describe what you expected. -->
Should not even cause an error -- also the error message should be more specific.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Linux, using Github Actions
2. `sam --version`: SAM CLI, version 1.23.0
3. AWS region: ap-southeast-1
`Add --debug flag to command you are running`
N/A
### EDIT 1:
Adding `--debug` does not help | True | removing CloudWatchLambdaInsightsExecutionRolePolicy causes an error - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
<!-- Briefly describe the bug you are facing.-->
All I did was remove `CloudWatchLambdaInsightsExecutionRolePolicy` When I tried to deploy, I got this:
```
Waiting for changeset to be created..
Error: Failed to create changeset for the stack: myapp-dev, ex: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state: For expression "Status" we matched expected path: "FAILED" Status: FAILED. Reason: Template error: every Fn::Join object requires two parameters, (1) a string delimiter and (2) a list of strings to be joined or a function that returns a list of strings (such as Fn::GetAZs) to be joined.
```
I did not change anything related to `Fn::Join` or `!Join`, simply removed `CloudWatchLambdaInsightsExecutionRolePolicy` from all my functions. Deployment was successful prior to this change, and the error message is not specific.
### Steps to reproduce:
<!-- Provide detailed steps to replicate the bug, including steps from third party tools (CDK, etc.) -->
1. Create 10 functions with `CloudWatchLambdaInsightsExecutionRolePolicy` and make some of them have `LambdaInvokePolicy`.
2. Deploy.
3. Remove `CloudWatchLambdaInsightsExecutionRolePolicy`
4. Deploy.
### Observed result:
<!-- Please provide command output with `--debug` flag set. -->
It results in a vague error.
### Expected result:
<!-- Describe what you expected. -->
Should not even cause an error -- also the error message should be more specific.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Linux, using Github Actions
2. `sam --version`: SAM CLI, version 1.23.0
3. AWS region: ap-southeast-1
`Add --debug flag to command you are running`
N/A
### EDIT 1:
Adding `--debug` does not help | main | removing cloudwatchlambdainsightsexecutionrolepolicy causes an error make sure we don t have an existing issue that reports the bug you are seeing both open and closed if you do find an existing issue re open or add a comment to that issue instead of creating a new one description all i did was remove cloudwatchlambdainsightsexecutionrolepolicy when i tried to deploy i got this waiting for changeset to be created error failed to create changeset for the stack myapp dev ex waiter changesetcreatecomplete failed waiter encountered a terminal failure state for expression status we matched expected path failed status failed reason template error every fn join object requires two parameters a string delimiter and a list of strings to be joined or a function that returns a list of strings such as fn getazs to be joined i did not change anything related to fn join or join simply removed cloudwatchlambdainsightsexecutionrolepolicy from all my functions deployment was successful prior to this change and the error message is not specific steps to reproduce create functions with cloudwatchlambdainsightsexecutionrolepolicy and make some of them have lambdainvokepolicy deploy remove cloudwatchlambdainsightsexecutionrolepolicy deploy observed result it results in a vague error expected result should not even cause an error also the error message should be more specific additional environment details ex windows mac amazon linux etc os linux using github actions sam version sam cli version aws region ap southeast add debug flag to command you are running n a edit adding debug does not help | 1 |
72,074 | 31,153,853,557 | IssuesEvent | 2023-08-16 11:53:06 | gradido/gradido | https://api.github.com/repos/gradido/gradido | closed | 🚀 [Feature][DLT-Connector] trigger and send transaction-info to dlt-connector | feature service: iota | - [ ] database migration:
- new table dlt_transactions
- id
- transactions_id
- message_id
- community_balance
- community_balance_date
- created_at
- verified_at
- [ ] send transaction-info to dlt-connector
- create graphql client for send-request
- take response of request and writ in dlt-transaction entry
- [ ] insert trigger on transaction creations
- to create dlt_transactions entry
- to send dlt-transaction-request
- to calculate community_balance | 1.0 | 🚀 [Feature][DLT-Connector] trigger and send transaction-info to dlt-connector - - [ ] database migration:
- new table dlt_transactions
- id
- transactions_id
- message_id
- community_balance
- community_balance_date
- created_at
- verified_at
- [ ] send transaction-info to dlt-connector
- create graphql client for send-request
- take response of request and writ in dlt-transaction entry
- [ ] insert trigger on transaction creations
- to create dlt_transactions entry
- to send dlt-transaction-request
- to calculate community_balance | non_main | 🚀 trigger and send transaction info to dlt connector database migration new table dlt transactions id transactions id message id community balance community balance date created at verified at send transaction info to dlt connector create graphql client for send request take response of request and writ in dlt transaction entry insert trigger on transaction creations to create dlt transactions entry to send dlt transaction request to calculate community balance | 0 |
2,269 | 8,029,940,407 | IssuesEvent | 2018-07-27 17:48:07 | AlexsLemonade/refinebio-frontend | https://api.github.com/repos/AlexsLemonade/refinebio-frontend | closed | Clean Up Dependencies | maintainability review | Every time I try to run `yarn install` yarn tries and fails to build `sqlite3`, which it looks like is an optional dependency of something, and one dependency causes the install to fail unless `--ignore-engines` is passed. | True | Clean Up Dependencies - Every time I try to run `yarn install` yarn tries and fails to build `sqlite3`, which it looks like is an optional dependency of something, and one dependency causes the install to fail unless `--ignore-engines` is passed. | main | clean up dependencies every time i try to run yarn install yarn tries and fails to build which it looks like is an optional dependency of something and one dependency causes the install to fail unless ignore engines is passed | 1 |
4,025 | 18,792,879,408 | IssuesEvent | 2021-11-08 18:39:58 | carbon-design-system/carbon | https://api.github.com/repos/carbon-design-system/carbon | closed | [Bug]: Several problems with `selectorsFloatingMenus` prop on ComposedModal | type: bug 🐛 status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 | ### Package
carbon-components-react
### Package version
v7.46.0
### Description
It is difficult to use the `selectorsFloatingMenus` prop on ComposedModal, for several reasons:
1. There is a mistake in the proptypes, declaring this prop as type 'string' when it should be array of string. The logic in the `wrapFocus` method assumes that an array will be provided, and causes errors if a string is supplied. If an array is supplied, it works correctly, but issues warnings (in non-production environments/builds) because of the proptypes mismatch. The proptypes should be fixed.
2. The value for this prop is not restructured out in the ComposedModal render method, meaning that it remains in the `...rest` object and thus gets passed through to the `<div>`, and this is both unnecessary and causes React error messages about adding camel-case attributes onto HTML nodes. The value should be remove from `...rest` by restructuring, so that the value is not passed through to HTML.
3. If no value is provided, an array of standard Carbon component CSS selectors is used as allowed focus escapes for the ComposedModal. However, if a value is provided then it replaces that array. This means that if a user needs to add a selector, but still wants other Carbon components to work within modals, they will need to include the same standard CSS selectors in the array they supply along with the value(s) they want to add. This is awkward, error-prone, and a maintenance risk (further selectors may need to be added in the future). It would be better if any value supplied for this prop ADDED TO, rather than replaced, the standard set of selectors.
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | True | [Bug]: Several problems with `selectorsFloatingMenus` prop on ComposedModal - ### Package
carbon-components-react
### Package version
v7.46.0
### Description
It is difficult to use the `selectorsFloatingMenus` prop on ComposedModal, for several reasons:
1. There is a mistake in the proptypes, declaring this prop as type 'string' when it should be array of string. The logic in the `wrapFocus` method assumes that an array will be provided, and causes errors if a string is supplied. If an array is supplied, it works correctly, but issues warnings (in non-production environments/builds) because of the proptypes mismatch. The proptypes should be fixed.
2. The value for this prop is not restructured out in the ComposedModal render method, meaning that it remains in the `...rest` object and thus gets passed through to the `<div>`, and this is both unnecessary and causes React error messages about adding camel-case attributes onto HTML nodes. The value should be remove from `...rest` by restructuring, so that the value is not passed through to HTML.
3. If no value is provided, an array of standard Carbon component CSS selectors is used as allowed focus escapes for the ComposedModal. However, if a value is provided then it replaces that array. This means that if a user needs to add a selector, but still wants other Carbon components to work within modals, they will need to include the same standard CSS selectors in the array they supply along with the value(s) they want to add. This is awkward, error-prone, and a maintenance risk (further selectors may need to be added in the future). It would be better if any value supplied for this prop ADDED TO, rather than replaced, the standard set of selectors.
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems | main | several problems with selectorsfloatingmenus prop on composedmodal package carbon components react package version description it is difficult to use the selectorsfloatingmenus prop on composedmodal for several reasons there is a mistake in the proptypes declaring this prop as type string when it should be array of string the logic in the wrapfocus method assumes that an array will be provided and causes errors if a string is supplied if an array is supplied it works correctly but issues warnings in non production environments builds because of the proptypes mismatch the proptypes should be fixed the value for this prop is not restructured out in the composedmodal render method meaning that it remains in the rest object and thus gets passed through to the and this is both unnecessary and causes react error messages about adding camel case attributes onto html nodes the value should be remove from rest by restructuring so that the value is not passed through to html if no value is provided an array of standard carbon component css selectors is used as allowed focus escapes for the composedmodal however if a value is provided then it replaces that array this means that if a user needs to add a selector but still wants other carbon components to work within modals they will need to include the same standard css selectors in the array they supply along with the value s they want to add this is awkward error prone and a maintenance risk further selectors may need to be added in the future it would be better if any value supplied for this prop added to rather than replaced the standard set of selectors code of conduct i agree to follow this project s i checked the for duplicate problems | 1 |
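Taken together, the three fixes this report asks for can be sketched outside React as a small TypeScript helper. Everything here is illustrative — the default selector list and function names are assumptions, not Carbon's actual internals — but it shows the intended contract: an array-of-strings prop, destructured out of `...rest`, whose value extends rather than replaces the built-in selectors.

```typescript
// Illustrative default selectors (assumption — the real list lives inside
// Carbon's ComposedModal and may differ).
const DEFAULT_FLOATING_MENU_SELECTORS: string[] = [
  ".bx--overflow-menu-options",
  ".bx--tooltip",
  ".flatpickr-calendar",
];

// Fix 3: user-supplied selectors ADD TO the defaults instead of replacing
// them; duplicates are dropped while preserving order.
function mergeFloatingMenuSelectors(
  defaults: string[],
  userSelectors: string[] = []
): string[] {
  return defaults
    .concat(userSelectors)
    .filter((s, i, arr) => arr.indexOf(s) === i);
}

// Fix 1: the prop is typed as an array of strings, not a string.
interface ComposedModalProps {
  selectorsFloatingMenus?: string[];
  [key: string]: unknown;
}

// Fix 2: destructure the prop out so it never reaches the underlying <div>
// through {...rest}, avoiding React's camel-case DOM attribute warnings.
function splitModalProps(props: ComposedModalProps) {
  const { selectorsFloatingMenus, ...rest } = props;
  return {
    selectors: mergeFloatingMenuSelectors(
      DEFAULT_FLOATING_MENU_SELECTORS,
      selectorsFloatingMenus
    ),
    rest, // safe to spread onto the DOM node
  };
}
```

In the real component, `selectors` would feed `wrapFocus` and `rest` would be spread onto the modal's outer element.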
832 | 4,469,601,823 | IssuesEvent | 2016-08-25 13:37:07 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | vsphere_guest: should change backend to official API | cloud feature_idea vmware waiting_on_maintainer | ##### ISSUE TYPE
- Feature Request
##### COMPONENT NAME
vsphere_guest
##### ANSIBLE VERSION
```
ansible 2.1.1.0
```
##### CONFIGURATION
config file =
configured module search path = Default w/o overrides
##### OS / ENVIRONMENT
OSX 10.10.5
Centos 7.2.1511
##### SUMMARY
vsphere_guest depends on pysphere, which has been unmaintained since 2013.
Suggest we move the backend to the official API from VMware, i.e. https://github.com/vmware/pyvmomi
| True | vsphere_guest: should change backend to official API - ##### ISSUE TYPE
- Feature Request
##### COMPONENT NAME
vsphere_guest
##### ANSIBLE VERSION
```
ansible 2.1.1.0
```
##### CONFIGURATION
config file =
configured module search path = Default w/o overrides
##### OS / ENVIRONMENT
OSX 10.10.5
Centos 7.2.1511
##### SUMMARY
vsphere_guest depends on pysphere, which has been unmaintained since 2013.
Suggest we move the backend to the official API from VMware, i.e. https://github.com/vmware/pyvmomi
| main | vsphere guest should change backend to official api issue type feature request component name vsphere guest ansible version ansible configuration config file configured module search path default w o overrides os environment osx centos summary vsphere guest depends on pysphere which has been unmaintained since suggest we move the backend to official api from vmware ie | 1 |
347,376 | 31,160,871,290 | IssuesEvent | 2023-08-16 15:54:21 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: npgsql failed | C-test-failure O-robot O-roachtest branch-master release-blocker T-sql-foundations | roachtest.npgsql [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/11339541?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/11339541?buildTab=artifacts#/npgsql) on master @ [e0de1f12d4f496fc7e4050a33e96f8c635c7a27c](https://github.com/cockroachdb/cockroach/commits/e0de1f12d4f496fc7e4050a33e96f8c635c7a27c):
```
(orm_helpers.go:212).summarizeFailed:
Tests run on Cockroach v23.2.0-alpha.00000000-dev-e0de1f12d4f496fc7e4050a33e96f8c635c7a27c
Tests run against npgsql v7.0.2
3292 Total Tests Run
2560 tests passed
732 tests failed
248 tests skipped
49 tests ignored
0 tests passed unexpectedly
2 tests failed unexpectedly
0 tests expected failed but skipped
0 tests expected failed but not run
---
--- FAIL: Npgsql.Tests.NotificationTests.Wait_with_timeout - (unexpected)
--- FAIL: Npgsql.Tests.CommandTests(Multiplexing).Use_across_connection_change(NotPrepared) - (unexpected)
For a full summary look at the npgsql artifacts
An updated blocklist (npgsqlBlocklist) is available in the artifacts' npgsql log
test artifacts and logs in: /artifacts/npgsql/run_1
```
<p>Parameters: <code>ROACHTEST_arch=amd64</code>
, <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
See: [Grafana](https://go.crdb.dev/p/roachfana/teamcity-11339541-1692164772-17-n1cpu4/1692201005708/1692201259502)
</p>
</details>
/cc @cockroachdb/sql-foundations
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*npgsql.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | roachtest: npgsql failed - roachtest.npgsql [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/11339541?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/11339541?buildTab=artifacts#/npgsql) on master @ [e0de1f12d4f496fc7e4050a33e96f8c635c7a27c](https://github.com/cockroachdb/cockroach/commits/e0de1f12d4f496fc7e4050a33e96f8c635c7a27c):
```
(orm_helpers.go:212).summarizeFailed:
Tests run on Cockroach v23.2.0-alpha.00000000-dev-e0de1f12d4f496fc7e4050a33e96f8c635c7a27c
Tests run against npgsql v7.0.2
3292 Total Tests Run
2560 tests passed
732 tests failed
248 tests skipped
49 tests ignored
0 tests passed unexpectedly
2 tests failed unexpectedly
0 tests expected failed but skipped
0 tests expected failed but not run
---
--- FAIL: Npgsql.Tests.NotificationTests.Wait_with_timeout - (unexpected)
--- FAIL: Npgsql.Tests.CommandTests(Multiplexing).Use_across_connection_change(NotPrepared) - (unexpected)
For a full summary look at the npgsql artifacts
An updated blocklist (npgsqlBlocklist) is available in the artifacts' npgsql log
test artifacts and logs in: /artifacts/npgsql/run_1
```
<p>Parameters: <code>ROACHTEST_arch=amd64</code>
, <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
See: [Grafana](https://go.crdb.dev/p/roachfana/teamcity-11339541-1692164772-17-n1cpu4/1692201005708/1692201259502)
</p>
</details>
/cc @cockroachdb/sql-foundations
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*npgsql.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| non_main | roachtest npgsql failed roachtest npgsql with on master orm helpers go summarizefailed tests run on cockroach alpha dev tests run against npgsql total tests run tests passed tests failed tests skipped tests ignored tests passed unexpectedly tests failed unexpectedly tests expected failed but skipped tests expected failed but not run fail npgsql tests notificationtests wait with timeout unexpected fail npgsql tests commandtests multiplexing use across connection change notprepared unexpected for a full summary look at the npgsql artifacts an updated blocklist npgsqlblocklist is available in the artifacts npgsql log test artifacts and logs in artifacts npgsql run parameters roachtest arch roachtest cloud gce roachtest cpu roachtest encrypted false roachtest ssd help see see see cc cockroachdb sql foundations | 0 |
442,959 | 12,753,921,810 | IssuesEvent | 2020-06-28 01:45:51 | projectacrn/acrn-hypervisor | https://api.github.com/repos/projectacrn/acrn-hypervisor | closed | In serial port have many Waag log, it cause many Auto-cases failed. | priority: P2-High type: bug | Waag version : win10-ltsc-17763.107
HW/Board:KBLNUC
Steps:
1. Boot ACRN SOS.
2. Boot Waag.
3. Go into the serial port; after Waag launches successfully, it still prints many Waag log lines.
Expected result
In the serial port, after Waag launches successfully, many Auto-case steps will check the serial output; not much Waag log output is expected.
Actual result
in serial port have mang waag log, it cause many Auto-cases failed. | 1.0 | In serial port have many Waag log, it cause many Auto-cases failed. - Waag version : win10-ltsc-17763.107
HW/Board:KBLNUC
Steps:
1. Boot ACRN SOS.
2. Boot Waag.
3. Go into the serial port; after Waag launches successfully, it still prints many Waag log lines.
Expected result
In the serial port, after Waag launches successfully, many Auto-case steps will check the serial output; not much Waag log output is expected.
Actual result
in serial port have mang waag log, it cause many Auto-cases failed. | non_main | in serial port have many waag log it cause many auto cases failed waag version ltsc hw board kblnuc steps boot acrn sos boot waag into serial port after launch waag successful still print mang waag log expected result into serial port after launch waag successful a lot of auto cases step will check the serial output not much waag log is expected actual result in serial port have mang waag log it cause many auto cases failed | 0 |
114,856 | 24,678,945,909 | IssuesEvent | 2022-10-18 19:28:51 | bnreplah/reactvulna | https://api.github.com/repos/bnreplah/reactvulna | opened | Use of Hard-coded Password [VID:259:src/App.js:42] | VeracodeFlaw: Medium Veracode Pipeline Scan | **Filename:** src/App.js
**Line:** 42
**CWE:** 259 (Use of Hard-coded Password)
<span>This variable assignment uses a hard-coded password that may compromise system security in a way that cannot be easily remedied. The use of a hard-coded password significantly increases the possibility that the account being protected will be compromised. Moreover, the password cannot be changed without patching the software. If a hard-coded password is compromised in a commercial product, all deployed instances may be vulnerable to attack. In some cases, this finding may indicate a reference to a password (e.g. the name of a key in a properties file) rather than an actual password. set</span> <span>Store passwords out-of-band from the application code. Follow best practices for protecting credentials stored in locations such as configuration or properties files. An HSM may be appropriate for particularly sensitive credentials.</span> <span>References: <a href="https://cwe.mitre.org/data/definitions/259.html">CWE</a></span> | 2.0 | Use of Hard-coded Password [VID:259:src/App.js:42] - **Filename:** src/App.js
**Line:** 42
**CWE:** 259 (Use of Hard-coded Password)
<span>This variable assignment uses a hard-coded password that may compromise system security in a way that cannot be easily remedied. The use of a hard-coded password significantly increases the possibility that the account being protected will be compromised. Moreover, the password cannot be changed without patching the software. If a hard-coded password is compromised in a commercial product, all deployed instances may be vulnerable to attack. In some cases, this finding may indicate a reference to a password (e.g. the name of a key in a properties file) rather than an actual password. set</span> <span>Store passwords out-of-band from the application code. Follow best practices for protecting credentials stored in locations such as configuration or properties files. An HSM may be appropriate for particularly sensitive credentials.</span> <span>References: <a href="https://cwe.mitre.org/data/definitions/259.html">CWE</a></span> | non_main | use of hard coded password filename src app js line cwe use of hard coded password this variable assignment uses a hard coded password that may compromise system security in a way that cannot be easily remedied the use of a hard coded password significantly increases the possibility that the account being protected will be compromised moreover the password cannot be changed without patching the software if a hard coded password is compromised in a commercial product all deployed instances may be vulnerable to attack in some cases this finding may indicate a reference to a password e g the name of a key in a properties file rather than an actual password set store passwords out of band from the application code follow best practices for protecting credentials stored in locations such as configuration or properties files an hsm may be appropriate for particularly sensitive credentials references a href | 0 |
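The scanner's suggested fix — keep the credential out of the source and read it from out-of-band configuration at runtime — can be sketched in a few lines. The function and variable names are hypothetical, not part of the scanned app:

```typescript
// Instead of: const dbPassword = "s3cret";   <-- the CWE-259 pattern
// read the secret from out-of-band configuration (environment variables,
// a secrets manager, or an HSM) and fail fast when it is absent, rather
// than silently falling back to a baked-in default.
function getRequiredSecret(
  env: Record<string, string | undefined>,
  name: string
): string {
  const value = env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required secret: ${name}`);
  }
  return value;
}

// At startup (Node.js):
//   const dbPassword = getRequiredSecret(process.env, "APP_DB_PASSWORD");
```

Unlike a hard-coded value, such a credential can be rotated per deployment without patching the software.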
170,199 | 20,842,068,541 | IssuesEvent | 2022-03-21 02:12:24 | ignatandrei/stankins | https://api.github.com/repos/ignatandrei/stankins | opened | CVE-2021-44906 (High) detected in minimist-0.0.8.tgz, minimist-1.2.0.tgz | security vulnerability | ## CVE-2021-44906 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-0.0.8.tgz</b>, <b>minimist-1.2.0.tgz</b></p></summary>
<p>
<details><summary><b>minimist-0.0.8.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p>
<p>Path to dependency file: /stankinsv2/solution/StankinsV2/StankinsAliveAngular/package.json</p>
<p>Path to vulnerable library: /stankinsv2/solution/StankinsV2/StankinsAliveAngular/node_modules/minimist/package.json,/stankinsV1/HtmlGenerator/wwwroot/lib/bootstrap/node_modules/extract-zip/node_modules/minimist/package.json,/stankinsv2/solution/StankinsV2/StankinsDataWebAngular/node_modules/minimist/package.json,/stankinsv2/solution/StankinsV2/StankinsElectron/node_modules/mkdirp/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- karma-3.0.0.tgz (Root Library)
- optimist-0.6.1.tgz
- :x: **minimist-0.0.8.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimist-1.2.0.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p>
<p>Path to dependency file: /stankinsv2/solution/StankinsV2/StankinsElectron/package.json</p>
<p>Path to vulnerable library: /stankinsv2/solution/StankinsV2/StankinsElectron/node_modules/minimist/package.json,/stankinsv2/solution/StankinsV2/StankinsDataWebAngular/node_modules/blocking-proxy/node_modules/minimist/package.json,/stankinsv2/solution/StankinsV2/StankinsAliveAngular/node_modules/rc/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- ts-node-7.0.1.tgz (Root Library)
- :x: **minimist-1.2.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimist <=1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 69-95).
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906>CVE-2021-44906</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-44906">https://nvd.nist.gov/vuln/detail/CVE-2021-44906</a></p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: BumperLane.Public.Service.Contracts - 0.23.35.214-prerelease;cloudscribe.templates - 5.2.0;Virteom.Tenant.Mobile.Bluetooth - 0.21.29.159-prerelease;ShowingVault.DotNet.Sdk - 0.13.41.190-prerelease;Envisia.DotNet.Templates - 3.0.1;Yarnpkg.Yarn - 0.26.1;Virteom.Tenant.Mobile.Framework.UWP - 0.20.41.103-prerelease;Virteom.Tenant.Mobile.Framework.iOS - 0.20.41.103-prerelease;BumperLane.Public.Api.V2.ClientModule - 0.23.35.214-prerelease;VueJS.NetCore - 1.1.1;Dianoga - 4.0.0,3.0.0-RC02;Virteom.Tenant.Mobile.Bluetooth.iOS - 0.20.41.103-prerelease;Virteom.Public.Utilities - 0.23.37.212-prerelease;Indianadavy.VueJsWebAPITemplate.CSharp - 1.0.1;NorDroN.AngularTemplate - 0.1.6;Virteom.Tenant.Mobile.Framework - 0.21.29.159-prerelease;Virteom.Tenant.Mobile.Bluetooth.Android - 0.20.41.103-prerelease;z4a-dotnet-scaffold - 1.0.0.2;Raml.Parser - 1.0.7;CoreVueWebTest - 3.0.101;dotnetng.template - 1.0.0.4;SitecoreMaster.TrueDynamicPlaceholders - 1.0.3;Virteom.Tenant.Mobile.Framework.Android - 0.20.41.103-prerelease;Fable.Template.Elmish.React - 0.1.6;BlazorPolyfill.Build - 6.0.100.2;Fable.Snowpack.Template - 2.1.0;BumperLane.Public.Api.Client - 0.23.35.214-prerelease;Yarn.MSBuild - 0.22.0,0.24.6;Blazor.TailwindCSS.BUnit - 1.0.2;Bridge.AWS - 0.3.30.36;tslint - 5.6.0;SAFE.Template - 3.0.1;GR.PageRender.Razor - 1.8.0;MIDIator.WebClient - 1.0.105</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-44906 (High) detected in minimist-0.0.8.tgz, minimist-1.2.0.tgz - ## CVE-2021-44906 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-0.0.8.tgz</b>, <b>minimist-1.2.0.tgz</b></p></summary>
<p>
<details><summary><b>minimist-0.0.8.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p>
<p>Path to dependency file: /stankinsv2/solution/StankinsV2/StankinsAliveAngular/package.json</p>
<p>Path to vulnerable library: /stankinsv2/solution/StankinsV2/StankinsAliveAngular/node_modules/minimist/package.json,/stankinsV1/HtmlGenerator/wwwroot/lib/bootstrap/node_modules/extract-zip/node_modules/minimist/package.json,/stankinsv2/solution/StankinsV2/StankinsDataWebAngular/node_modules/minimist/package.json,/stankinsv2/solution/StankinsV2/StankinsElectron/node_modules/mkdirp/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- karma-3.0.0.tgz (Root Library)
- optimist-0.6.1.tgz
- :x: **minimist-0.0.8.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimist-1.2.0.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p>
<p>Path to dependency file: /stankinsv2/solution/StankinsV2/StankinsElectron/package.json</p>
<p>Path to vulnerable library: /stankinsv2/solution/StankinsV2/StankinsElectron/node_modules/minimist/package.json,/stankinsv2/solution/StankinsV2/StankinsDataWebAngular/node_modules/blocking-proxy/node_modules/minimist/package.json,/stankinsv2/solution/StankinsV2/StankinsAliveAngular/node_modules/rc/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- ts-node-7.0.1.tgz (Root Library)
- :x: **minimist-1.2.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimist <=1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 69-95).
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906>CVE-2021-44906</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-44906">https://nvd.nist.gov/vuln/detail/CVE-2021-44906</a></p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: BumperLane.Public.Service.Contracts - 0.23.35.214-prerelease;cloudscribe.templates - 5.2.0;Virteom.Tenant.Mobile.Bluetooth - 0.21.29.159-prerelease;ShowingVault.DotNet.Sdk - 0.13.41.190-prerelease;Envisia.DotNet.Templates - 3.0.1;Yarnpkg.Yarn - 0.26.1;Virteom.Tenant.Mobile.Framework.UWP - 0.20.41.103-prerelease;Virteom.Tenant.Mobile.Framework.iOS - 0.20.41.103-prerelease;BumperLane.Public.Api.V2.ClientModule - 0.23.35.214-prerelease;VueJS.NetCore - 1.1.1;Dianoga - 4.0.0,3.0.0-RC02;Virteom.Tenant.Mobile.Bluetooth.iOS - 0.20.41.103-prerelease;Virteom.Public.Utilities - 0.23.37.212-prerelease;Indianadavy.VueJsWebAPITemplate.CSharp - 1.0.1;NorDroN.AngularTemplate - 0.1.6;Virteom.Tenant.Mobile.Framework - 0.21.29.159-prerelease;Virteom.Tenant.Mobile.Bluetooth.Android - 0.20.41.103-prerelease;z4a-dotnet-scaffold - 1.0.0.2;Raml.Parser - 1.0.7;CoreVueWebTest - 3.0.101;dotnetng.template - 1.0.0.4;SitecoreMaster.TrueDynamicPlaceholders - 1.0.3;Virteom.Tenant.Mobile.Framework.Android - 0.20.41.103-prerelease;Fable.Template.Elmish.React - 0.1.6;BlazorPolyfill.Build - 6.0.100.2;Fable.Snowpack.Template - 2.1.0;BumperLane.Public.Api.Client - 0.23.35.214-prerelease;Yarn.MSBuild - 0.22.0,0.24.6;Blazor.TailwindCSS.BUnit - 1.0.2;Bridge.AWS - 0.3.30.36;tslint - 5.6.0;SAFE.Template - 3.0.1;GR.PageRender.Razor - 1.8.0;MIDIator.WebClient - 1.0.105</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in minimist tgz minimist tgz cve high severity vulnerability vulnerable libraries minimist tgz minimist tgz minimist tgz parse argument options library home page a href path to dependency file solution stankinsaliveangular package json path to vulnerable library solution stankinsaliveangular node modules minimist package json htmlgenerator wwwroot lib bootstrap node modules extract zip node modules minimist package json solution stankinsdatawebangular node modules minimist package json solution stankinselectron node modules mkdirp node modules minimist package json dependency hierarchy karma tgz root library optimist tgz x minimist tgz vulnerable library minimist tgz parse argument options library home page a href path to dependency file solution stankinselectron package json path to vulnerable library solution stankinselectron node modules minimist package json solution stankinsdatawebangular node modules blocking proxy node modules minimist package json solution stankinsaliveangular node modules rc node modules minimist package json dependency hierarchy ts node tgz root library x minimist tgz vulnerable library found in base branch master vulnerability details minimist is vulnerable to prototype pollution via file index js function setkey lines publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bumperlane public service contracts prerelease cloudscribe templates virteom tenant mobile bluetooth prerelease showingvault dotnet sdk prerelease envisia dotnet templates 
yarnpkg yarn virteom tenant mobile framework uwp prerelease virteom tenant mobile framework ios prerelease bumperlane public api clientmodule prerelease vuejs netcore dianoga virteom tenant mobile bluetooth ios prerelease virteom public utilities prerelease indianadavy vuejswebapitemplate csharp nordron angulartemplate virteom tenant mobile framework prerelease virteom tenant mobile bluetooth android prerelease dotnet scaffold raml parser corevuewebtest dotnetng template sitecoremaster truedynamicplaceholders virteom tenant mobile framework android prerelease fable template elmish react blazorpolyfill build fable snowpack template bumperlane public api client prerelease yarn msbuild blazor tailwindcss bunit bridge aws tslint safe template gr pagerender razor midiator webclient step up your open source security game with whitesource | 0 |
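For readers of the advisory above: the referenced `setKey()` issue is a classic prototype-pollution pattern. The sketch below is not minimist's actual code — it is a minimal TypeScript reconstruction of the vulnerable shape (nested assignment along a dotted, attacker-controlled key path with no filtering of `__proto__`/`constructor`/`prototype`), plus one possible mitigation of the kind a patched parser applies.

```typescript
// Minimal reconstruction of the vulnerable pattern behind CVE-2021-44906
// (NOT minimist's source): assigning along a user-controlled key path.
function vulnerableSetKey(obj: any, path: string, value: any): void {
  const keys = path.split(".");
  let cursor = obj;
  for (const key of keys.slice(0, -1)) {
    if (cursor[key] === undefined) cursor[key] = {};
    cursor = cursor[key]; // "__proto__" walks up to Object.prototype here
  }
  cursor[keys[keys.length - 1]] = value; // ...so this pollutes every object
}

// One possible mitigation: refuse the special keys outright.
function safeSetKey(obj: any, path: string, value: any): void {
  const banned = ["__proto__", "constructor", "prototype"];
  if (path.split(".").some((k) => banned.indexOf(k) !== -1)) return;
  vulnerableSetKey(obj, path, value);
}
```

With the vulnerable version, an argument such as `--__proto__.polluted=1` ends up setting a property on `Object.prototype`, visible on every object in the process.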
1,005 | 4,774,742,004 | IssuesEvent | 2016-10-27 08:01:30 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | win_user module check mode always returns skipping (never changed or OK) | affects_2.2 feature_idea waiting_on_maintainer windows | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
win_user module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0 (devel c9a5b1c555) last updated 2016/06/02 15:42:56 (GMT +200)
lib/ansible/modules/core: (detached HEAD ca4365b644) last updated 2016/06/02 15:43:14 (GMT +200)
lib/ansible/modules/extras: (detached HEAD b0aec50b9a) last updated 2016/06/02 15:43:15 (GMT +200)
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
Python 2.7.5 (default, Oct 11 2015, 17:47:16)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
ansible_user: xxxx
ansible_pass: xxxxx
ansible_connection: winrm
ansible_ssh_port: 5986
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Ansible running in RHEL 7.2
OS remote is Windows 2012R2 with powershell 4.0
##### SUMMARY
<!--- Explain the problem briefly -->
When I execute the playbook in check mode on a remote node (Windows), the win_user module always returns skipping; it never returns the state Changed or OK.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
```
ANSIBLE COMMAND:
ansible-playbook -i ./inv/invs ./site.yml --check
```
```
ANSIBLE CODE:
$cat site.yml
- name: User Account Disabled
hosts: "{{hosts|default('all')}}"
vars_files:
- group_vars/User_Account_Disabled
tasks:
- name: "User Account Disabled"
win_user:
name: "User"
account_disabled: "yes"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
When I execute the playbook in check mode on a remote node (Windows) with the Windows account disabled, the win_user module always returns skipping; it never returns the state Changed or OK.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
PLAY [User Account Disabled] ***************************************************
TASK [setup] *******************************************************************
skipping: [server.com]
TASK [User_Account_Disabled : UserAccount=guest Disabled=yes] ******************
skipping: [server.com]
PLAY RECAP *********************************************************************
server.com : ok=1 changed=0 unreachable=0 failed=0
```
| True | win_user module check mode always returns skipping (never changed or OK) - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Feature idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
win_user module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0 (devel c9a5b1c555) last updated 2016/06/02 15:42:56 (GMT +200)
lib/ansible/modules/core: (detached HEAD ca4365b644) last updated 2016/06/02 15:43:14 (GMT +200)
lib/ansible/modules/extras: (detached HEAD b0aec50b9a) last updated 2016/06/02 15:43:15 (GMT +200)
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
Python 2.7.5 (default, Oct 11 2015, 17:47:16)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
ansible_user: xxxx
ansible_pass: xxxxx
ansible_connection: winrm
ansible_ssh_port: 5986
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Ansible running in RHEL 7.2
OS remote is Windows 2012R2 with powershell 4.0
##### SUMMARY
<!--- Explain the problem briefly -->
When I execute the playbook in check mode on a remote node (Windows), the win_user module always returns skipping; it never returns the state Changed or OK.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
```
ANSIBLE COMMAND:
ansible-playbook -i ./inv/invs ./site.yml --check
```
```
ANSIBLE CODE:
$cat site.yml
- name: User Account Disabled
hosts: "{{hosts|default('all')}}"
vars_files:
- group_vars/User_Account_Disabled
tasks:
- name: "User Account Disabled"
win_user:
name: "User"
account_disabled: "yes"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
When I execute the playbook in check mode on a remote node (Windows) with the Windows account disabled, the win_user module always returns skipping; it never returns the state Changed or OK.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
PLAY [User Account Disabled] ***************************************************
TASK [setup] *******************************************************************
skipping: [server.com]
TASK [User_Account_Disabled : UserAccount=guest Disabled=yes] ******************
skipping: [server.com]
PLAY RECAP *********************************************************************
server.com : ok=1 changed=0 unreachable=0 failed=0
```
| main | win user module check mode always returns skipping never changed or ok issue type feature idea component name win user module ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path default w o overrides python default oct on configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables ansible user xxxx ansible pass xxxxx ansible connection winrm ansible ssh port os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ansible running in rhel os remote is windows with powershell summary when i execute the playbook in check mode on remote node windows the win user module always return skipping never return state changed or ok steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used ansible command ansible playbook i inv invs site yml check ansible code cat site yml name user account disabled hosts hosts default all vars files group vars user account disabled tasks name user account disabled win user name user account disabled yes expected results when i execute the playbook in check mode on remote node windows with the win account disabled the win user module always return skipping never return state changed or ok actual results play task skipping task skipping play recap server com ok changed unreachable failed | 1 |
5,706 | 30,105,317,209 | IssuesEvent | 2023-06-30 00:16:53 | intel/cve-bin-tool | https://api.github.com/repos/intel/cve-bin-tool | closed | Python 3.7 is EOL in June 2023 | blocked awaiting maintainer | Python 3.7 will go end of life in June 2023. Need to update documentation etc to remove all references to 3.7. | True | Python 3.7 is EOL in June 2023 - Python 3.7 will go end of life in June 2023. Need to update documentation etc to remove all references to 3.7. | main | python is eol in june python will go end of life in june need to update documentation etc to remove all references to | 1 |
301,048 | 22,710,597,800 | IssuesEvent | 2022-07-05 18:55:04 | IBM/automation-data-foundation | https://api.github.com/repos/IBM/automation-data-foundation | opened | need instructions on portworx spec file generation | documentation | need instructions on portworx spec file generation | 1.0 | need instructions on portworx spec file generation - need instructions on portworx spec file generation | non_main | need instructions on portworx spec file generation need instructions on portworx spec file generation | 0 |
4,808 | 3,454,559,380 | IssuesEvent | 2015-12-17 16:23:31 | travis-ci/travis-ci | https://api.github.com/repos/travis-ci/travis-ci | closed | Saucelabs addon update | travis-build | Hello,
Sauce Connect 4.3.13 Released
https://support.saucelabs.com/customer/en/portal/articles/2247304-sauce-connect-4-3-13-released
New features:
High Availability: Load balance across multiple tunnels
Change to shutdown behavior to allow for rolling tunnel restarts
Can you update the agent please? I'm on travis pro.
| 1.0 | Saucelabs addon update - Hello,
Sauce Connect 4.3.13 Released
https://support.saucelabs.com/customer/en/portal/articles/2247304-sauce-connect-4-3-13-released
New features:
High Availability: Load balance across multiple tunnels
Change to shutdown behavior to allow for rolling tunnel restarts
Can you update the agent please? I'm on travis pro.
| non_main | saucelabs addon update hello sauce connect released new features high availability load balance across multiple tunnels change to shutdown behavior to allow for rolling tunnel restarts can you update the agent please i m on travis pro | 0 |
200,845 | 15,160,265,532 | IssuesEvent | 2021-02-12 06:36:33 | DMOJ/online-judge | https://api.github.com/repos/DMOJ/online-judge | closed | Disqualifying ContestPartipation rates all ongoing rated contests | admin bug contest | ## Reproduction:
1. Disqualify a ContestParticipation on any rated contest
## Erroneous Behavior
All ongoing rated contests are rated
## Expected Behavior
Ongoing rated contests are not rated | 1.0 | Disqualifying ContestPartipation rates all ongoing rated contests - ## Reproduction:
1. Disqualify a ContestParticipation on any rated contest
## Erroneous Behavior
All ongoing rated contests are rated
## Expected Behavior
Ongoing rated contests are not rated | non_main | disqualifying contestpartipation rates all ongoing rated contests reproduction disqualify a contestparticipation on any rated contest erroneous behavior all ongoing rated contests are rated expected behavior ongoing rated contests are not rated | 0 |
2,811 | 10,059,585,796 | IssuesEvent | 2019-07-22 16:46:59 | clearlinux/swupd-client | https://api.github.com/repos/clearlinux/swupd-client | closed | Enhance verifytime program to not rely on the versionstamp file | enhancement maintainability | Lifted directly from https://github.com/clearlinux/swupd-client/pull/329#issuecomment-341546064 by @tmarcu:
In the general case, with swupd running on Clear, that file will always exist. However, if we are trying to support multiple OSes, then we should implement the more pedantic version of verifytime, as seen in the comment on verifytime.c. The current state was an initial fix that works for Clear, however the intended implementation is to not check any file (like versionstamp) and instead query some nameservers or such to get a real time, and use those times to get an accurate system time. That removes the dependency on any file, and keeps verifytime as the short circuit, which it was intended to be.
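A minimal sketch of the pedantic check described above, with hypothetical names: the real implementation would obtain `reference_time` from a network source (e.g. nameservers), and the key point is that no file such as versionstamp is consulted.

```python
def time_is_sane(system_time, reference_time, max_skew_seconds=24 * 60 * 60):
    """Return True if the system clock is within max_skew_seconds of a
    trusted reference time obtained from the network, with no dependency
    on any local file. Epoch seconds are assumed for both arguments."""
    return abs(system_time - reference_time) <= max_skew_seconds
```

If this check fails, swupd can exit early, since a bad system time guarantees signature verification will fail anyway.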
With the exception of bundle-list, all swupd commands need network, and if the system time is bad, it is guaranteed to fail signature verification, so we should always exit early if verifytime fails with the new (and more robust) implementation. | True | Enhance verifytime program to not rely on the versionstamp file - Lifted directly from https://github.com/clearlinux/swupd-client/pull/329#issuecomment-341546064 by @tmarcu:
In the general case, with swupd running on Clear, that file will always exist. However, if we are trying to support multiple OSes, then we should implement the more pedantic version of verifytime, as seen in the comment on verifytime.c. The current state was an initial fix that works for Clear, however the intended implementation is to not check any file (like versionstamp) and instead query some nameservers or such to get a real time, and use those times to get an accurate system time. That removes the dependency on any file, and keeps verifytime as the short circuit, which it was intended to be.
With the exception of bundle-list, all swupd commands need network, and if the system time is bad, it is guaranteed to fail signature verification, so we should always exit early if verifytime fails with the new (and more robust) implementation. | main | enhance verifytime program to not rely on the versionstamp file lifted directly from by tmarcu in the general case with swupd running on clear that file will always exist however if we are trying to support multiple oses then we should implement the more pedantic version of verifytime as seen in the comment on verifytime c the current state was an initial fix that works for clear however the intended implementation is to not check any file like versionstamp and instead query some nameservers or such to get a real time and use those times to get an accurate system time that removes the dependency on any file and keeps verifytime as the short circuit which it was intended to be with the exception of bundle list all swupd commands need network and if the system time is bad it is guaranteed to fail signature verification so we should always exit early if verifytime fails with the new and more robust implementation | 1 |
5,421 | 27,209,530,795 | IssuesEvent | 2023-02-20 15:29:31 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | closed | Fix CSS errors and warnings | bug 🦠 engineering maintain | ## Description
Currently, starting the Docker stack with `docker compose up` shows a bunch of CSS errors/warnings:
```console
...
foundationmozillaorg-watch-static-files-1 | Creating context: 2.544s
foundationmozillaorg-watch-static-files-1 | Resolving content paths: 13.477ms
foundationmozillaorg-watch-static-files-1 | DEPRECATION WARNINGDEPRECATION WARNING: Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($spacer, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 302 │ $headings-margin-bottom: $spacer / 2 !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | : Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($spacer, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 302 │ $headings-margin-bottom: $spacer / 2 !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 302:31 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/buyers-guide/bg-main.scss 7:9 root stylesheet
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 302:31 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/main.scss 2:9 root stylesheet
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | DEPRECATION WARNINGDEPRECATION WARNING: Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($input-padding-y, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 498 │ $input-height-inner-quarter: add($input-line-height * .25em, $input-padding-y / 2) !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 498:73 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/buyers-guide/bg-main.scss 7:9 root stylesheet
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | : Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($input-padding-y, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 498 │ $input-height-inner-quarter: add($input-line-height * .25em, $input-padding-y / 2) !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 498:73 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/main.scss 2:9 root stylesheet
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | DEPRECATION WARNING: Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($custom-control-indicator-size, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 568 │ $custom-switch-indicator-border-radius: $custom-control-indicator-size / 2 !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 568:49 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/buyers-guide/bg-main.scss 7:9 root stylesheet
foundationmozillaorg-watch-static-files-1 | DEPRECATION
foundationmozillaorg-watch-static-files-1 | WARNING: Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($custom-control-indicator-size, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 568 │ $custom-switch-indicator-border-radius: $custom-control-indicator-size / 2 !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 568:49 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/main.scss 2:9 root stylesheet
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | DEPRECATION WARNING: Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($spacer, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 713 │ $nav-divider-margin-y: $spacer / 2 !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 713:37 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/buyers-guide/bg-main.scss 7:9 root stylesheet
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | DEPRECATION DEPRECATION WARNINGWARNING: Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($spacer, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 718 │ $navbar-padding-y: $spacer / 2 !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | : Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($spacer, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 713 │ $nav-divider-margin-y: $spacer / 2 !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 718:37 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/buyers-guide/bg-main.scss 7:9 root stylesheet
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 713:37 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/main.scss 2:9 root stylesheet
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | DEPRECATION WARNING: Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($spacer, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 718 │ $navbar-padding-y: $spacer / 2 !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 718:37 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/main.scss 2:9 root stylesheet
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Reading content files: 11.732s
foundationmozillaorg-watch-static-files-1 | WARNING: 37 repetitive deprecation warnings omitted.
foundationmozillaorg-watch-static-files-1 | Run in verbose mode to see all warnings.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Compiling CSS: 29.729s
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Done in 32826ms.
foundationmozillaorg-watch-static-files-1 | WARNING: 40 repetitive deprecation warnings omitted.
foundationmozillaorg-watch-static-files-1 | Run in verbose mode to see all warnings.
foundationmozillaorg-watch-static-files-1 |
...
```
We are working on a transition to Tailwind, which should make some of this CSS obsolete.
However, that transition has been going on for a while now and is likely to take a bit longer.
In the meantime, it would be good to fix these deprecation warnings so the log output is more useful and easier to follow.
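For illustration, the migration the warning recommends looks like the snippet below. Note that the flagged declarations live in `node_modules/bootstrap/scss/_variables.scss`, so in practice the fix is usually upgrading Bootstrap to a version that already uses `math.div`, or silencing dependency warnings, rather than patching the vendored file (shown inline here only for comparison; `@use` must appear at the top of a real file).

```scss
// Deprecated slash division, as flagged in the log:
$headings-margin-bottom: $spacer / 2 !default;

// Dart Sass 2.0-safe form, using the built-in sass:math module:
@use "sass:math";
$headings-margin-bottom: math.div($spacer, 2) !default;
```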
## Acceptance criteria
- [ ] As a developer, when I start the Docker stack with `docker compose up` I don't get a bunch of CSS warnings and errors. | True | Fix CSS errors and warnings - ## Description
Currently, starting the Docker stack with `docker compose up` shows a bunch of CSS errors/warnings:
```console
...
foundationmozillaorg-watch-static-files-1 | Creating context: 2.544s
foundationmozillaorg-watch-static-files-1 | Resolving content paths: 13.477ms
foundationmozillaorg-watch-static-files-1 | DEPRECATION WARNINGDEPRECATION WARNING: Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($spacer, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 302 │ $headings-margin-bottom: $spacer / 2 !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | : Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($spacer, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 302 │ $headings-margin-bottom: $spacer / 2 !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 302:31 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/buyers-guide/bg-main.scss 7:9 root stylesheet
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 302:31 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/main.scss 2:9 root stylesheet
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | DEPRECATION WARNINGDEPRECATION WARNING: Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($input-padding-y, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 498 │ $input-height-inner-quarter: add($input-line-height * .25em, $input-padding-y / 2) !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 498:73 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/buyers-guide/bg-main.scss 7:9 root stylesheet
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | : Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($input-padding-y, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 498 │ $input-height-inner-quarter: add($input-line-height * .25em, $input-padding-y / 2) !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 498:73 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/main.scss 2:9 root stylesheet
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | DEPRECATION WARNING: Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($custom-control-indicator-size, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 568 │ $custom-switch-indicator-border-radius: $custom-control-indicator-size / 2 !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 568:49 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/buyers-guide/bg-main.scss 7:9 root stylesheet
foundationmozillaorg-watch-static-files-1 | DEPRECATION
foundationmozillaorg-watch-static-files-1 | WARNING: Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($custom-control-indicator-size, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 568 │ $custom-switch-indicator-border-radius: $custom-control-indicator-size / 2 !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 568:49 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/main.scss 2:9 root stylesheet
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | DEPRECATION WARNING: Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($spacer, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 713 │ $nav-divider-margin-y: $spacer / 2 !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 713:37 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/buyers-guide/bg-main.scss 7:9 root stylesheet
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | DEPRECATION DEPRECATION WARNINGWARNING: Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($spacer, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 718 │ $navbar-padding-y: $spacer / 2 !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | : Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($spacer, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 713 │ $nav-divider-margin-y: $spacer / 2 !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 718:37 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/buyers-guide/bg-main.scss 7:9 root stylesheet
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 713:37 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/main.scss 2:9 root stylesheet
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | DEPRECATION WARNING: Using / for division is deprecated and will be removed in Dart Sass 2.0.0.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Recommendation: math.div($spacer, 2)
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | More info and automated migrator: https://sass-lang.com/d/slash-div
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | ╷
foundationmozillaorg-watch-static-files-1 | 718 │ $navbar-padding-y: $spacer / 2 !default;
foundationmozillaorg-watch-static-files-1 | │ ^^^^^^^^^^^
foundationmozillaorg-watch-static-files-1 | ╵
foundationmozillaorg-watch-static-files-1 | node_modules/bootstrap/scss/_variables.scss 718:37 @import
foundationmozillaorg-watch-static-files-1 | source/sass/mofo-bootstrap/mofo-bootstrap.scss 6:9 @import
foundationmozillaorg-watch-static-files-1 | source/sass/main.scss 2:9 root stylesheet
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Reading content files: 11.732s
foundationmozillaorg-watch-static-files-1 | WARNING: 37 repetitive deprecation warnings omitted.
foundationmozillaorg-watch-static-files-1 | Run in verbose mode to see all warnings.
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Compiling CSS: 29.729s
foundationmozillaorg-watch-static-files-1 |
foundationmozillaorg-watch-static-files-1 | Done in 32826ms.
foundationmozillaorg-watch-static-files-1 | WARNING: 40 repetitive deprecation warnings omitted.
foundationmozillaorg-watch-static-files-1 | Run in verbose mode to see all warnings.
foundationmozillaorg-watch-static-files-1 |
...
```
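All of the deprecation warnings above ask for the same mechanical rewrite: Sass slash division becomes `math.div()`. A minimal sketch of that rewrite — in practice the automated migrator linked in the warning (e.g. `npx sass-migrator division`) performs it; the `sed` call here only mimics it for a single offending line:

```shell
# Mimic the slash-division rewrite on one of the offending declarations.
# (Illustration only -- run the sass-migrator tool for real migrations,
# and note that warnings under node_modules/bootstrap are addressed by
# upgrading Bootstrap, not by editing vendored files.)
line='$navbar-padding-y: $spacer / 2 !default;'
out=$(echo "$line" | sed 's|\$spacer / 2|math.div($spacer, 2)|')
echo "$out"   # $navbar-padding-y: math.div($spacer, 2) !default;
```

Since most of the offending declarations live in `node_modules/bootstrap/scss/_variables.scss`, upgrading to a Bootstrap release that no longer uses slash division would likely silence the bulk of this output.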
We are working on a transition to Tailwind, which should make some of this CSS obsolete.
But this transition has been going on for a while now and is likely to take a bit longer.
Therefore, it would be nice to fix the error messages that we are getting, to make the log output a bit more useful and easier to follow.
## Acceptance criteria
- [ ] As a developer, when I start the Docker stack with `docker compose up` I don't get a bunch of CSS warnings and errors. | main | fix css errors and warnings description currently starting the docker stack with docker compose up shows a bunch of css errors warnings console foundationmozillaorg watch static files creating context foundationmozillaorg watch static files resolving content paths foundationmozillaorg watch static files deprecation warningdeprecation warning using for division is deprecated and will be removed in dart sass foundationmozillaorg watch static files foundationmozillaorg watch static files recommendation math div spacer foundationmozillaorg watch static files foundationmozillaorg watch static files more info and automated migrator foundationmozillaorg watch static files foundationmozillaorg watch static files ╷ foundationmozillaorg watch static files │ headings margin bottom spacer default foundationmozillaorg watch static files │ foundationmozillaorg watch static files ╵ foundationmozillaorg watch static files using for division is deprecated and will be removed in dart sass foundationmozillaorg watch static files foundationmozillaorg watch static files recommendation math div spacer foundationmozillaorg watch static files foundationmozillaorg watch static files more info and automated migrator foundationmozillaorg watch static files foundationmozillaorg watch static files ╷ foundationmozillaorg watch static files │ headings margin bottom spacer default foundationmozillaorg watch static files │ foundationmozillaorg watch static files ╵ foundationmozillaorg watch static files node modules bootstrap scss variables scss import foundationmozillaorg watch static files source sass mofo bootstrap mofo bootstrap scss import foundationmozillaorg watch static files source sass buyers guide bg main scss root stylesheet foundationmozillaorg watch static files foundationmozillaorg watch static files node modules bootstrap scss variables scss import 
foundationmozillaorg watch static files source sass mofo bootstrap mofo bootstrap scss import foundationmozillaorg watch static files source sass main scss root stylesheet foundationmozillaorg watch static files foundationmozillaorg watch static files deprecation warningdeprecation warning using for division is deprecated and will be removed in dart sass foundationmozillaorg watch static files foundationmozillaorg watch static files recommendation math div input padding y foundationmozillaorg watch static files foundationmozillaorg watch static files more info and automated migrator foundationmozillaorg watch static files foundationmozillaorg watch static files ╷ foundationmozillaorg watch static files │ input height inner quarter add input line height input padding y default foundationmozillaorg watch static files │ foundationmozillaorg watch static files ╵ foundationmozillaorg watch static files node modules bootstrap scss variables scss import foundationmozillaorg watch static files source sass mofo bootstrap mofo bootstrap scss import foundationmozillaorg watch static files source sass buyers guide bg main scss root stylesheet foundationmozillaorg watch static files foundationmozillaorg watch static files using for division is deprecated and will be removed in dart sass foundationmozillaorg watch static files foundationmozillaorg watch static files recommendation math div input padding y foundationmozillaorg watch static files foundationmozillaorg watch static files more info and automated migrator foundationmozillaorg watch static files foundationmozillaorg watch static files ╷ foundationmozillaorg watch static files │ input height inner quarter add input line height input padding y default foundationmozillaorg watch static files │ foundationmozillaorg watch static files ╵ foundationmozillaorg watch static files node modules bootstrap scss variables scss import foundationmozillaorg watch static files source sass mofo bootstrap mofo bootstrap scss import 
foundationmozillaorg watch static files source sass main scss root stylesheet foundationmozillaorg watch static files foundationmozillaorg watch static files deprecation warning using for division is deprecated and will be removed in dart sass foundationmozillaorg watch static files foundationmozillaorg watch static files recommendation math div custom control indicator size foundationmozillaorg watch static files foundationmozillaorg watch static files more info and automated migrator foundationmozillaorg watch static files foundationmozillaorg watch static files ╷ foundationmozillaorg watch static files │ custom switch indicator border radius custom control indicator size default foundationmozillaorg watch static files │ foundationmozillaorg watch static files ╵ foundationmozillaorg watch static files node modules bootstrap scss variables scss import foundationmozillaorg watch static files source sass mofo bootstrap mofo bootstrap scss import foundationmozillaorg watch static files source sass buyers guide bg main scss root stylesheet foundationmozillaorg watch static files deprecation foundationmozillaorg watch static files warning using for division is deprecated and will be removed in dart sass foundationmozillaorg watch static files foundationmozillaorg watch static files recommendation math div custom control indicator size foundationmozillaorg watch static files foundationmozillaorg watch static files more info and automated migrator foundationmozillaorg watch static files foundationmozillaorg watch static files ╷ foundationmozillaorg watch static files │ custom switch indicator border radius custom control indicator size default foundationmozillaorg watch static files │ foundationmozillaorg watch static files ╵ foundationmozillaorg watch static files node modules bootstrap scss variables scss import foundationmozillaorg watch static files source sass mofo bootstrap mofo bootstrap scss import foundationmozillaorg watch static files source sass main scss 
root stylesheet foundationmozillaorg watch static files foundationmozillaorg watch static files deprecation warning using for division is deprecated and will be removed in dart sass foundationmozillaorg watch static files foundationmozillaorg watch static files recommendation math div spacer foundationmozillaorg watch static files foundationmozillaorg watch static files more info and automated migrator foundationmozillaorg watch static files foundationmozillaorg watch static files ╷ foundationmozillaorg watch static files │ nav divider margin y spacer default foundationmozillaorg watch static files │ foundationmozillaorg watch static files ╵ foundationmozillaorg watch static files node modules bootstrap scss variables scss import foundationmozillaorg watch static files source sass mofo bootstrap mofo bootstrap scss import foundationmozillaorg watch static files source sass buyers guide bg main scss root stylesheet foundationmozillaorg watch static files foundationmozillaorg watch static files deprecation deprecation warningwarning using for division is deprecated and will be removed in dart sass foundationmozillaorg watch static files foundationmozillaorg watch static files recommendation math div spacer foundationmozillaorg watch static files foundationmozillaorg watch static files more info and automated migrator foundationmozillaorg watch static files foundationmozillaorg watch static files ╷ foundationmozillaorg watch static files │ navbar padding y spacer default foundationmozillaorg watch static files │ foundationmozillaorg watch static files ╵ foundationmozillaorg watch static files using for division is deprecated and will be removed in dart sass foundationmozillaorg watch static files foundationmozillaorg watch static files recommendation math div spacer foundationmozillaorg watch static files foundationmozillaorg watch static files more info and automated migrator foundationmozillaorg watch static files foundationmozillaorg watch static files ╷ 
foundationmozillaorg watch static files │ nav divider margin y spacer default foundationmozillaorg watch static files │ foundationmozillaorg watch static files ╵ foundationmozillaorg watch static files node modules bootstrap scss variables scss import foundationmozillaorg watch static files source sass mofo bootstrap mofo bootstrap scss import foundationmozillaorg watch static files source sass buyers guide bg main scss root stylesheet foundationmozillaorg watch static files foundationmozillaorg watch static files node modules bootstrap scss variables scss import foundationmozillaorg watch static files source sass mofo bootstrap mofo bootstrap scss import foundationmozillaorg watch static files source sass main scss root stylesheet foundationmozillaorg watch static files foundationmozillaorg watch static files deprecation warning using for division is deprecated and will be removed in dart sass foundationmozillaorg watch static files foundationmozillaorg watch static files recommendation math div spacer foundationmozillaorg watch static files foundationmozillaorg watch static files more info and automated migrator foundationmozillaorg watch static files foundationmozillaorg watch static files ╷ foundationmozillaorg watch static files │ navbar padding y spacer default foundationmozillaorg watch static files │ foundationmozillaorg watch static files ╵ foundationmozillaorg watch static files node modules bootstrap scss variables scss import foundationmozillaorg watch static files source sass mofo bootstrap mofo bootstrap scss import foundationmozillaorg watch static files source sass main scss root stylesheet foundationmozillaorg watch static files foundationmozillaorg watch static files reading content files foundationmozillaorg watch static files warning repetitive deprecation warnings omitted foundationmozillaorg watch static files run in verbose mode to see all warnings foundationmozillaorg watch static files foundationmozillaorg watch static files compiling css 
foundationmozillaorg watch static files foundationmozillaorg watch static files done in foundationmozillaorg watch static files warning repetitive deprecation warnings omitted foundationmozillaorg watch static files run in verbose mode to see all warnings foundationmozillaorg watch static files we are working on a transition to tailwind which should make some css use obsolete but this transition has been going on for a while now and is likely to take a bit longer therefore it would be nice to fix the error messages that we are getting to make the log output a bit more useful and easier to follow acceptance criteria as a developer when i start the docker stack with docker compose up i don t get a bunch of css warnings and errors | 1 |
110,120 | 11,688,987,513 | IssuesEvent | 2020-03-05 15:22:42 | artemis-analytics/artemis | https://api.github.com/repos/artemis-analytics/artemis | closed | Publish documentation from readthedocs | documentation | @DominicParent
Please publish the documentation to readthedocs based on:
https://github.com/artemis-analytics/artemis/commit/9aa841a2583b3916f508a9b3565fb7e9f4a9c4b2
The current documentation includes:
* Updated style
* Updated docstrings which use numpy docstring style
* API reference
* Description of Algorithms and tools
* Development documentation for cython, C++, Arrow and python
| 1.0 | Publish documentation from readthedocs - @DominicParent
Please publish the documentation to readthedocs based on:
https://github.com/artemis-analytics/artemis/commit/9aa841a2583b3916f508a9b3565fb7e9f4a9c4b2
The current documentation includes:
* Updated style
* Updated docstrings which use numpy docstring style
* API reference
* Description of Algorithms and tools
* Development documentation for cython, C++, Arrow and python
| non_main | publish documentation from readthedocs dominicparent please publish the documentation to readthedocs based on the current documentation includes updated style updated docstrings which use numpy docstring style api reference description of algorithms and tools development documentation for cython c arrow and python | 0 |
2,151 | 7,472,433,371 | IssuesEvent | 2018-04-03 12:40:01 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | closed | Specific kinds of exceptions shall not be thrown | Area: analyzer Area: maintainability feature in progress | The analyzer should report that there are exceptions thrown (created) which are not acceptable to be created.
Such exceptions are:
- System.Exception
- System.AccessViolationException
- System.IndexOutOfRangeException
- System.ExecutionEngineException
- System.FatalEngineException
- System.NullReferenceException
- System.OutOfMemoryException
- System.StackOverflowException
- System.Runtime.InteropServices.COMException
- System.Runtime.InteropServices.SEHException | True | Specific kinds of exceptions shall not be thrown - The analyzer should report that there are exceptions thrown (created) which are not acceptable to be created.
Such exceptions are:
- System.Exception
- System.AccessViolationException
- System.IndexOutOfRangeException
- System.ExecutionEngineException
- System.FatalEngineException
- System.NullReferenceException
- System.OutOfMemoryException
- System.StackOverflowException
- System.Runtime.InteropServices.COMException
- System.Runtime.InteropServices.SEHException | main | specific kinds of exceptions shall not be thrown the analyzer should report that there are exception thrown created which are not acceptable to be created such exceptions are system exception system accessviolationexception system indexoutofrangeexception system executionengineexception system fatalengineexception system nullreferenceexception system outofmemoryexception system stackoverflowexception system runtime interopservices comexception system runtime interopservices sehexception | 1
121,885 | 12,136,530,243 | IssuesEvent | 2020-04-23 14:30:34 | jupyterlab/debugger | https://api.github.com/repos/jupyterlab/debugger | closed | DOC: README: add link to and from medium article | documentation | - [ ] DOC: README: add link to medium ANN article
https://blog.jupyter.org/a-visual-debugger-for-jupyter-914e61716559
- [x] DOC: Article: add link to this repo:
https://github.com/jupyterlab/debugger
- [x] DOC: update gh project desc?
"a JupyterLab debugger extension [for debugging code in Jupyter notebook cells]"
| 1.0 | DOC: README: add link to and from medium article - - [ ] DOC: README: add link to medium ANN article
https://blog.jupyter.org/a-visual-debugger-for-jupyter-914e61716559
- [x] DOC: Article: add link to this repo:
https://github.com/jupyterlab/debugger
- [x] DOC: update gh project desc?
"a JupyterLab debugger extension [for debugging code in Jupyter notebook cells]"
| non_main | doc readme add link to and from medium article doc readme add link to medium ann article doc article add link to this repo doc update gh project desc a jupyterlab debugger extension | 0 |
628,132 | 19,976,482,082 | IssuesEvent | 2022-01-29 06:35:01 | RetroMusicPlayer/RetroMusicPlayer | https://api.github.com/repos/RetroMusicPlayer/RetroMusicPlayer | closed | Gapless playback | bug v4 Priority: High | Even though I switched on the gapless playback feature, a gap remains.
This feature worked in the previous version with the same phone and song
-
Device info:
---
<table>
<tr><td><b>App version</b></td><td>4.0.010_1012202056</td></tr>
<tr><td>App version code</td><td>10503</td></tr>
<tr><td>Android build version</td><td>V11.0.8.0.PCOMIXM</td></tr>
<tr><td>Android release version</td><td>9</td></tr>
<tr><td>Android SDK version</td><td>28</td></tr>
<tr><td>Android build ID</td><td>PKQ1.190616.001</td></tr>
<tr><td>Device brand</td><td>xiaomi</td></tr>
<tr><td>Device manufacturer</td><td>Xiaomi</td></tr>
<tr><td>Device name</td><td>ginkgo</td></tr>
<tr><td>Device model</td><td>Redmi Note 8</td></tr>
<tr><td>Device product name</td><td>ginkgo</td></tr>
<tr><td>Device hardware name</td><td>qcom</td></tr>
<tr><td>ABIs</td><td>[arm64-v8a, armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (32bit)</td><td>[armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (64bit)</td><td>[arm64-v8a]</td></tr>
<tr><td>Language</td><td>auto</td></tr>
</table>
 | 1.0 | Gapless playback - Even though I switched on the gapless playback feature, a gap remains.
This feature worked in the previous version with the same phone and song
-
Device info:
---
<table>
<tr><td><b>App version</b></td><td>4.0.010_1012202056</td></tr>
<tr><td>App version code</td><td>10503</td></tr>
<tr><td>Android build version</td><td>V11.0.8.0.PCOMIXM</td></tr>
<tr><td>Android release version</td><td>9</td></tr>
<tr><td>Android SDK version</td><td>28</td></tr>
<tr><td>Android build ID</td><td>PKQ1.190616.001</td></tr>
<tr><td>Device brand</td><td>xiaomi</td></tr>
<tr><td>Device manufacturer</td><td>Xiaomi</td></tr>
<tr><td>Device name</td><td>ginkgo</td></tr>
<tr><td>Device model</td><td>Redmi Note 8</td></tr>
<tr><td>Device product name</td><td>ginkgo</td></tr>
<tr><td>Device hardware name</td><td>qcom</td></tr>
<tr><td>ABIs</td><td>[arm64-v8a, armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (32bit)</td><td>[armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (64bit)</td><td>[arm64-v8a]</td></tr>
<tr><td>Language</td><td>auto</td></tr>
</table>
| non_main | gapless playback even if i switched on the gapless playback feature a gap remains this feature worked in the previous version with the same phone and song device info app version app version code android build version pcomixm android release version android sdk version android build id device brand xiaomi device manufacturer xiaomi device name ginkgo device model redmi note device product name ginkgo device hardware name qcom abis abis abis language auto | 0 |
281,490 | 21,315,409,309 | IssuesEvent | 2022-04-16 07:21:21 | Decaxical/pe | https://api.github.com/repos/Decaxical/pe | opened | Inaccurate Instructions under FAQ | severity.Low type.DocumentationBug | 
It is stated that the user should "overwrite the empty data file it creates". However, when the user installs TAssist on another computer, the data file will not be empty and will instead have the sample data.
This can be confusing for the user as the data file will not be empty.
<!--session: 1650085883293-5818fde3-a7aa-4285-8401-b8a072ca7b50-->
<!--Version: Web v3.4.2--> | 1.0 | Inaccurate Instructions under FAQ - 
It is stated that the user should "overwrite the empty data file it creates". However, when the user installs TAssist on another computer, the data file will not be empty and will instead have the sample data.
This can be confusing for the user as the data file will not be empty.
<!--session: 1650085883293-5818fde3-a7aa-4285-8401-b8a072ca7b50-->
<!--Version: Web v3.4.2--> | non_main | inaccurate instructions under faq it is stated that overwrite the empty data file it creates however when the user installs a tassist on another computer the data file will not be empty and will instead have the sample data this can be confusing for the user as the data file will not be empty | 0 |
538,212 | 15,765,025,030 | IssuesEvent | 2021-03-31 13:46:58 | mskcc/pluto-cwl | https://api.github.com/repos/mskcc/pluto-cwl | closed | create mini test bam files, standardized mini test maf files | high priority | Some of the test cases for pluto take a very long time to run, for example the "1 maf" test case for `workflow_with_facets.cwl` takes about 30min on its own.
To help avoid some of these long execution times for test cases we have started using small dummy maf files as inputs in some test cases. However we do not have an equivalent for .bam files.
We should come up with some tiny .bam files that can be used in conjunction with the .maf files, which should be standardized throughout the test cases, to speed up execution and aid development. | 1.0 | create mini test bam files, standardized mini test maf files - Some of the test cases for pluto take a very long time to run, for example the "1 maf" test case for `workflow_with_facets.cwl` takes about 30min on its own.
To help avoid some of these long execution times for test cases we have started using small dummy maf files as inputs in some test cases. However we do not have an equivalent for .bam files.
We should come up with some tiny .bam files that can be used in conjunction with the .maf files, which should be standardized throughout the test cases, to speed up execution and aid development. | non_main | create mini test bam files standardized mini test maf files some of the test cases for pluto take a very long time to run for example the maf test case for workflow with facets cwl takes about on its own to help avoid some of these long execution times for test cases we have started using small dummy maf files as inputs in some test cases however we do not have an equivalent for bam files we should come up with some tiny bam files that can be used in conjunction with the maf files which should be standarized throughout the test cases to speed up execution and aid development | 0
185,991 | 15,039,408,954 | IssuesEvent | 2021-02-02 18:37:04 | Calmanning/so_thirsty | https://api.github.com/repos/Calmanning/so_thirsty | closed | "/:USER/PLANT/:PLANT" Page fix? - Conditional HNDLBR Statement? | documentation | As a user, if I have no photos for my plant (on my plant's page), I can expect to see a statement that will explain to me that no photos have been added in that section.
- [x] FIRST! Check with everyone to see if this is a feature we need.
- [x] Add an "(#if)" handlebars statement to the "/:USER/PLANT/:PLANT" page that will append the proper elements to the page. | 1.0 | "/:USER/PLANT/:PLANT" Page fix? - Conditional HNDLBR Statement? - As a user, if I have no photos for my plant (on my plant's page), I can expect to see a statement that will explain to me that no photos have been added in that section.
- [x] FIRST! Check with everyone to see if this is a feature we need.
- [x] Add an "(#if)" handlebars statement to the "/:USER/PLANT/:PLANT" page that will append the proper elements to the page. | non_main | user plant plant page fix conditional hndlbr statement as a user if i have no photos for my plant on my plant s page i can expect to see a statement that will explain to me that no photos have been added in that section first check with everyone to see if this is a feature we need add an if handlebars statement to the user plant plant page that will append the proper elements to the page | 0 |
269,252 | 20,378,097,149 | IssuesEvent | 2022-02-21 17:44:25 | benchttp/runner | https://api.github.com/repos/benchttp/runner | opened | Write main documentation | documentation must have | ## Description
<!-- Describe the purpose of the issue, why a change has to be done, and how if possible -->
🚧 WIP
## Tasks
- [ ] User documentation
- [ ] Download & install
- [ ] CLI/file config
- [ ] Dev documentation
- [ ] Prerequisites (go, lint, ...)
- [ ] Environment (cobaye, ...)
<!-- List the tasks that must be done -->
## Suggestions
<!-- Add here any relevant suggestion, such as an implementation detail or a library to use -->
## Notes
<!--
Add here some additional information that could be relevant for the assignee.
It can be a related issue, some context, or anything else.
-->
| 1.0 | Write main documentation - ## Description
<!-- Describe the purpose of the issue, why a change has to be done, and how if possible -->
🚧 WIP
## Tasks
- [ ] User documentation
- [ ] Download & install
- [ ] CLI/file config
- [ ] Dev documentation
- [ ] Prerequisites (go, lint, ...)
- [ ] Environment (cobaye, ...)
<!-- List the tasks that must be done -->
## Suggestions
<!-- Add here any relevant suggestion, such as an implementation detail or a library to use -->
## Notes
<!--
Add here some additional information that could be relevant for the assignee.
It can be a related issue, some context, or anything else.
-->
| non_main | write main documentation description 🚧 wip tasks user documentation download install cli file config dev documentation prerequisites go lint environment cobaye suggestions notes add here some additionnal information that could be relevant for the assignee it can be a related issue some context or anything else | 0 |
2,820 | 10,114,217,038 | IssuesEvent | 2019-07-30 18:38:14 | unoplatform/uno | https://api.github.com/repos/unoplatform/uno | closed | Compilation errors with Microsoft Visual Studio Enterprise 2019 Preview - Version 16.2.0 Preview 4.0 | area/vswin kind/bug kind/contributor-experience kind/maintainer-experience | ## Current behavior
```
git clone master
cd src
msbuild /t:restore /m uno.ui.sln
msbuild /m uno.ui.sln
```
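For completeness, the binlog attached below can be regenerated while reproducing the failure. A hedged sketch, assuming MSBuild's `/bl` binary-logger switch (available since MSBuild 15.3; it writes `msbuild.binlog` next to the solution):

```shell
# Reproduce the failing build and capture a binary log for diagnosis.
# (Run from a VS 2019 Developer Command Prompt; open the resulting
# msbuild.binlog with the MSBuild Structured Log Viewer.)
msbuild /t:restore /m uno.ui.sln
msbuild /m /bl uno.ui.sln
```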

## Expected behavior
🚀 ✔️
## How to reproduce it (as minimally and precisely as possible)
See attached binlog.
[msbuild.zip](https://github.com/unoplatform/uno/files/3421835/msbuild.zip)
## Environment
<details>
<summary>Microsoft Visual Studio Enterprise 2019 Preview - Version 16.2.0 Preview 4.0
Details</summary>
Microsoft Visual Studio Enterprise 2019 Preview - Version 16.2.0 Preview 4.0
VisualStudio.16.Preview/16.2.0-pre.4.0+29111.141
Microsoft .NET Framework
Version 4.8.03752
Installed Version: Enterprise
Visual C++ 2019 00435-60000-00000-AA184
Microsoft Visual C++ 2019
Application Insights Tools for Visual Studio Package 9.1.00611.1
Application Insights Tools for Visual Studio
ASP.NET and Web Tools 2019 16.2.288.12852
ASP.NET and Web Tools 2019
ASP.NET Web Frameworks and Tools 2019 16.2.288.12852
For additional information, visit https://www.asp.net/
Azure App Service Tools v3.0.0 16.2.288.12852
Azure App Service Tools v3.0.0
Azure Functions and Web Jobs Tools 16.2.288.12852
Azure Functions and Web Jobs Tools
C# Tools 3.2.0-beta4-19359-03+15b43b33901c88f68ef43f8314b5a2457716780d
C# components used in the IDE. Depending on your project type and settings, a different version of the compiler may be used.
Child Process Debugging Power Tool 1.0
Power tool to add child process debugging to Visual Studio.
Common Azure Tools 1.10
Provides common services for use by Azure Mobile Services and Microsoft Azure Tools.
Extensibility Message Bus 1.2.0 (d16-2@8b56e20)
Provides common messaging-based MEF services for loosely coupled Visual Studio extension components communication and integration.
IntelliCode Extension 1.0
IntelliCode Visual Studio Extension Detailed Info
Microsoft Azure Tools 2.9
Microsoft Azure Tools for Microsoft Visual Studio 0x10 - v2.9.20626.2
Microsoft Continuous Delivery Tools for Visual Studio 0.4
Simplifying the configuration of Azure DevOps pipelines from within the Visual Studio IDE.
Microsoft JVM Debugger 1.0
Provides support for connecting the Visual Studio debugger to JDWP compatible Java Virtual Machines
Microsoft Library Manager 1.0
Install client-side libraries easily to any web project
Microsoft MI-Based Debugger 1.0
Provides support for connecting Visual Studio to MI compatible debuggers
Microsoft Visual C++ Wizards 1.0
Microsoft Visual C++ Wizards
Microsoft Visual Studio Tools for Containers 1.1
Develop, run, validate your ASP.NET Core applications in the target environment. F5 your application directly into a container with debugging, or CTRL + F5 to edit & refresh your app without having to rebuild the container.
Microsoft Visual Studio VC Package 1.0
Microsoft Visual Studio VC Package
Mono Debugging for Visual Studio 16.2.6 (4cfc7c3)
Support for debugging Mono processes with Visual Studio.
Node.js Tools 1.5.10610.1 Commit Hash:529a87de2769e143655e31ea81795c42e5775ef8
Adds support for developing and debugging Node.js apps in Visual Studio
NuGet Package Manager 5.2.0
NuGet Package Manager in Visual Studio. For more information about NuGet, visit https://docs.nuget.org/
OzCodePackage Extension 1.0
OzCodePackage Visual Studio Extension Detailed Info
ProjectServicesPackage Extension 1.0
ProjectServicesPackage Visual Studio Extension Detailed Info
ResourcePackage Extension 1.0
ResourcePackage Visual Studio Extension Detailed Info
ResourcePackage Extension 1.0
ResourcePackage Visual Studio Extension Detailed Info
Snapshot Debugging Extension 1.0
Snapshot Debugging Visual Studio Extension Detailed Info
SQL Server Data Tools 16.0.61906.28070
Microsoft SQL Server Data Tools
Test Adapter for Boost.Test 1.0
Enables Visual Studio's testing tools with unit tests written for Boost.Test. The use terms and Third Party Notices are available in the extension installation directory.
Test Adapter for Google Test 1.0
Enables Visual Studio's testing tools with unit tests written for Google Test. The use terms and Third Party Notices are available in the extension installation directory.
TypeScript Tools 16.0.10627.2001
TypeScript Tools for Microsoft Visual Studio
Visual Basic Tools 3.2.0-beta4-19359-03+15b43b33901c88f68ef43f8314b5a2457716780d
Visual Basic components used in the IDE. Depending on your project type and settings, a different version of the compiler may be used.
Visual F# Tools 10.4 for F# 4.6 16.2.0-beta.19321.1+a24d94ecf97d0d69d4fbe6b8b10cd1f97737fff4
Microsoft Visual F# Tools 10.4 for F# 4.6
Visual Studio Code Debug Adapter Host Package 1.0
Interop layer for hosting Visual Studio Code debug adapters in Visual Studio
Visual Studio Tools for CMake 1.0
Visual Studio Tools for CMake
Visual Studio Tools for CMake 1.0
Visual Studio Tools for CMake
Visual Studio Tools for Containers 1.0
Visual Studio Tools for Containers
VisualStudio.Mac 1.0
Mac Extension for Visual Studio
Xamarin 16.2.0.90 (d16-2@ba267630e)
Visual Studio extension to enable development for Xamarin.iOS and Xamarin.Android.
Xamarin Designer 16.2.0.325 (remotes/origin/d16-2@f10cfbf83)
Visual Studio extension to enable Xamarin Designer tools in Visual Studio.
Xamarin Templates 16.3.117 (59a59e8)
Templates for building iOS, Android, and Windows apps with Xamarin and Xamarin.Forms.
Xamarin.Android SDK 9.4.0.51 (d16-2/9fa7775)
Xamarin.Android Reference Assemblies and MSBuild support.
Mono: mono/mono/2019-02@e6f5369c2d2
Java.Interop: xamarin/java.interop/d16-2@d64ada5
LibZipSharp: grendello/LibZipSharp/d16-2@caa0c74
LibZip: nih-at/libzip/rel-1-5-1@b95cf3f
ProGuard: xamarin/proguard/master@905836d
SQLite: xamarin/sqlite/3.27.1@8212a2d
Xamarin.Android Tools: xamarin/xamarin-android-tools/d16-2@6f6c969
Xamarin.iOS and Xamarin.Mac SDK 12.14.0.110 (a8bcecc)
Xamarin.iOS and Xamarin.Mac Reference Assemblies and MSBuild support.
</details>
## Anything else we need to know?
- Compiles fine on Microsoft Visual Studio Enterprise 2019 Version 16.1.6.
- Can confirm that this version fixes the regex problems w/resource naming in the Assets directory. | True | Compilation errors with Microsoft Visual Studio Enterprise 2019 Preview - Version 16.2.0 Preview 4.0 | main | 1 |
18,207 | 10,219,077,689 | IssuesEvent | 2019-08-15 17:37:55 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Mac Catalina compatibility: System.Security.Cryptography.X509Certificates.Tests crashing | area-System.Security os-mac-os-x | System.Security.Cryptography.X509Certificates.Tests are crashing without producing result
```
Starting: System.Security.Cryptography.X509Certificates.Tests (parallel test collections = on, max threads = 4)
Warning: unable to build chain to self-signed root for signer "(null)"
Process terminated. Assertion failed.
X509ChainGetStatusAtIndex returned unexpected error 2
at Internal.Cryptography.Pal.SecTrustChainPal.ParseResults(SafeX509ChainHandle chainHandle, X509RevocationMode revocationMode) in /Users/dotnet/new_source/corefx/src/System.Security.Cryptography.X509Certificates/src/Internal/Cryptography/Pal.OSX/ChainPal.cs:line 250
at Internal.Cryptography.Pal.SecTrustChainPal.Execute(DateTime verificationTime, Boolean allowNetwork, OidCollection applicationPolicy, OidCollection certificatePolicy, X509RevocationFlag revocationFlag) in /Users/dotnet/new_source/corefx/src/System.Security.Cryptography.X509Certificates/src/Internal/Cryptography/Pal.OSX/ChainPal.cs:line 210
at Internal.Cryptography.Pal.ChainPal.BuildChain(Boolean useMachineContext, ICertificatePal cert, X509Certificate2Collection extraStore, OidCollection applicationPolicy, OidCollection certificatePolicy, X509RevocationMode revocationMode, X509RevocationFlag revocationFlag, DateTime verificationTime, TimeSpan timeout) in /Users/dotnet/new_source/corefx/src/System.Security.Cryptography.X509Certificates/src/Internal/Cryptography/Pal.OSX/ChainPal.cs:line 574
at System.Security.Cryptography.X509Certificates.X509Chain.Build(X509Certificate2 certificate, Boolean throwOnException) in /Users/dotnet/new_source/corefx/src/System.Security.Cryptography.X509Certificates/src/System/Security/Cryptography/X509Certificates/X509Chain.cs:line 118
at System.Security.Cryptography.X509Certificates.X509Chain.Build(X509Certificate2 certificate) in /Users/dotnet/new_source/corefx/src/System.Security.Cryptography.X509Certificates/src/System/Security/Cryptography/X509Certificates/X509Chain.cs:line 105
at System.Security.Cryptography.X509Certificates.Tests.DynamicChainTests.TestInvalidAia() in /Users/dotnet/new_source/corefx/src/System.Security.Cryptography.X509Certificates/tests/DynamicChainTests.cs:line 240
at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor, Boolean wrapExceptions)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture) in /_/src/System.Private.CoreLib/src/System/Reflection/RuntimeMethodInfo.cs:line 475
at System.Reflection.MethodBase.Invoke(Object obj, Object[] parameters) in /_/src/System.Private.CoreLib/shared/System/Reflection/MethodBase.cs:line 54
at Xunit.Sdk.TestInvoker`1.CallTestMethod(Object testClassInstance) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestInvoker.cs:line 150
at Xunit.Sdk.TestInvoker`1.<>c__DisplayClass48_1.<<InvokeTestMethodAsync>b__1>d.MoveNext() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestInvoker.cs:line 257
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestInvoker`1.<>c__DisplayClass48_1.<InvokeTestMethodAsync>b__1()
at Xunit.Sdk.ExecutionTimer.AggregateAsync(Func`1 asyncAction) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\ExecutionTimer.cs:line 48
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.ExecutionTimer.AggregateAsync(Func`1 asyncAction)
at Xunit.Sdk.TestInvoker`1.<>c__DisplayClass48_1.<InvokeTestMethodAsync>b__0() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestInvoker.cs:line 242
at Xunit.Sdk.ExceptionAggregator.RunAsync(Func`1 code) in C:\Dev\xunit\xunit\src\xunit.core\Sdk\ExceptionAggregator.cs:line 90
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.ExceptionAggregator.RunAsync(Func`1 code)
at Xunit.Sdk.TestInvoker`1.InvokeTestMethodAsync(Object testClassInstance) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestInvoker.cs:line 241
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestInvoker`1.InvokeTestMethodAsync(Object testClassInstance)
at Xunit.Sdk.XunitTestInvoker.InvokeTestMethodAsync(Object testClassInstance) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\XunitTestInvoker.cs:line 112
at Xunit.Sdk.TestInvoker`1.<RunAsync>b__47_0() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestInvoker.cs:line 206
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestInvoker`1.<RunAsync>b__47_0()
at Xunit.Sdk.ExceptionAggregator.RunAsync[T](Func`1 code) in C:\Dev\xunit\xunit\src\xunit.core\Sdk\ExceptionAggregator.cs:line 107
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.ExceptionAggregator.RunAsync[T](Func`1 code)
at Xunit.Sdk.TestInvoker`1.RunAsync() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestInvoker.cs:line 189
at Xunit.Sdk.XunitTestRunner.InvokeTestMethodAsync(ExceptionAggregator aggregator) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\XunitTestRunner.cs:line 84
at Xunit.Sdk.XunitTestRunner.InvokeTestAsync(ExceptionAggregator aggregator) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\XunitTestRunner.cs:line 67
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.XunitTestRunner.InvokeTestAsync(ExceptionAggregator aggregator)
at Xunit.Sdk.TestRunner`1.<>c__DisplayClass43_0.<RunAsync>b__0() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestRunner.cs:line 149
at Xunit.Sdk.ExceptionAggregator.RunAsync[T](Func`1 code) in C:\Dev\xunit\xunit\src\xunit.core\Sdk\ExceptionAggregator.cs:line 107
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.ExceptionAggregator.RunAsync[T](Func`1 code)
at Xunit.Sdk.TestRunner`1.RunAsync() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestRunner.cs:line 149
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestRunner`1.RunAsync()
at Xunit.Sdk.XunitTestCaseRunner.RunTestAsync() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\XunitTestCaseRunner.cs:line 139
at Xunit.Sdk.TestCaseRunner`1.RunAsync() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestCaseRunner.cs:line 82
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestCaseRunner`1.RunAsync()
at Xunit.Sdk.XunitTestCase.RunAsync(IMessageSink diagnosticMessageSink, IMessageBus messageBus, Object[] constructorArguments, ExceptionAggregator aggregator, CancellationTokenSource cancellationTokenSource) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\XunitTestCase.cs:line 162
at Xunit.Sdk.XunitTestMethodRunner.RunTestCaseAsync(IXunitTestCase testCase) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\XunitTestMethodRunner.cs:line 45
at Xunit.Sdk.TestMethodRunner`1.RunTestCasesAsync() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestMethodRunner.cs:line 136
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestMethodRunner`1.RunTestCasesAsync()
at Xunit.Sdk.TestMethodRunner`1.RunAsync() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestMethodRunner.cs:line 106
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestMethodRunner`1.RunAsync()
at Xunit.Sdk.XunitTestClassRunner.RunTestMethodAsync(ITestMethod testMethod, IReflectionMethodInfo method, IEnumerable`1 testCases, Object[] constructorArguments) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\XunitTestClassRunner.cs:line 168
at Xunit.Sdk.TestClassRunner`1.RunTestMethodsAsync() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestClassRunner.cs:line 213
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestClassRunner`1.RunTestMethodsAsync()
at Xunit.Sdk.TestClassRunner`1.RunAsync() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestClassRunner.cs:line 171
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestClassRunner`1.RunAsync()
at Xunit.Sdk.XunitTestCollectionRunner.RunTestClassAsync(ITestClass testClass, IReflectionTypeInfo class, IEnumerable`1 testCases) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\XunitTestCollectionRunner.cs:line 158
at Xunit.Sdk.TestCollectionRunner`1.RunTestClassesAsync() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestCollectionRunner.cs:line 130
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestCollectionRunner`1.RunTestClassesAsync()
at Xunit.Sdk.TestCollectionRunner`1.RunAsync() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestCollectionRunner.cs:line 101
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestCollectionRunner`1.RunAsync()
at Xunit.Sdk.XunitTestAssemblyRunner.RunTestCollectionAsync(IMessageBus messageBus, ITestCollection testCollection, IEnumerable`1 testCases, CancellationTokenSource cancellationTokenSource) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\XunitTestAssemblyRunner.cs:line 235
at Xunit.Sdk.XunitTestAssemblyRunner.<>c__DisplayClass14_2.<RunTestCollectionsAsync>b__2() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\XunitTestAssemblyRunner.cs:line 184
at System.Threading.Tasks.Task`1.InnerInvoke() in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Future.cs:line 518
at System.Threading.Tasks.Task.<>c.<.cctor>b__274_0(Object obj) in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs:line 2428
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) in /_/src/System.Private.CoreLib/shared/System/Threading/ExecutionContext.cs:line 172
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread) in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs:line 2385
at System.Threading.Tasks.Task.ExecuteEntry() in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs:line 2296
at System.Threading.Tasks.SynchronizationContextTaskScheduler.<>c.<.cctor>b__8_0(Object s) in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/TaskScheduler.cs:line 641
at Xunit.Sdk.MaxConcurrencySyncContext.RunOnSyncContext(SendOrPostCallback callback, Object state) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\MaxConcurrencySyncContext.cs:line 107
at Xunit.Sdk.MaxConcurrencySyncContext.<>c__DisplayClass11_0.<WorkerThreadProc>b__0(Object _) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\MaxConcurrencySyncContext.cs:line 96
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) in /_/src/System.Private.CoreLib/shared/System/Threading/ExecutionContext.cs:line 172
at Xunit.Sdk.ExecutionContextHelper.Run(Object context, Action`1 action) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Utility\ExecutionContextHelper.cs:line 111
at Xunit.Sdk.MaxConcurrencySyncContext.WorkerThreadProc() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\MaxConcurrencySyncContext.cs:line 89
at Xunit.Sdk.XunitWorkerThread.<>c.<QueueUserWorkItem>b__5_0(Object _) in C:\Dev\xunit\xunit\src\common\XunitWorkerThread.cs:line 37
at System.Threading.Tasks.Task.InnerInvoke() in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs:line 2449
at System.Threading.Tasks.Task.<>c.<.cctor>b__274_0(Object obj) in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs:line 2428
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) in /_/src/System.Private.CoreLib/shared/System/Threading/ExecutionContext.cs:line 172
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread) in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs:line 2385
at System.Threading.Tasks.Task.ExecuteEntryUnsafe(Thread threadPoolThread) in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs:line 2327
at System.Threading.Tasks.ThreadPoolTaskScheduler.<>c.<.cctor>b__10_0(Object s) in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/ThreadPoolTaskScheduler.cs:line 37
at System.Threading.ThreadHelper.ThreadStart_Context(Object state) in /_/src/System.Private.CoreLib/src/System/Threading/Thread.CoreCLR.cs:line 50
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) in /_/src/System.Private.CoreLib/shared/System/Threading/ExecutionContext.cs:line 172
at System.Threading.ThreadHelper.ThreadStart(Object obj) in /_/src/System.Private.CoreLib/src/System/Threading/Thread.CoreCLR.cs:line 83
Unknown Chain Status: UnparseableExtension
/Users/dotnet/new_source/corefx/artifacts/bin/System.Security.Cryptography.X509Certificates.Tests/netcoreapp-OSX-Debug/RunTests.sh: line 161: 61473 Abort trap: 6 "$RUNTIME_PATH/dotnet" exec --runtimeconfig System.Security.Cryptography.X509Certificates.Tests.runtimeconfig.json xunit.console.dll System.Security.Cryptography.X509Certificates.Tests.dll -xml testResults.xml -nologo -notrait category=nonnetcoreapptests -notrait category=nonosxtests -notrait category=OuterLoop -notrait category=failing $RSP_FILE
~/new_source/corefx/src/System.Security.Cryptography.X509Certificates/tests
``` | True | Mac Catalina compatibility: System.Security.Cryptography.X509Certificates.Tests crashing - System.Security.Cryptography.X509Certificates.Tests are crashing without producing result
```
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestRunner`1.RunAsync()
at Xunit.Sdk.XunitTestCaseRunner.RunTestAsync() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\XunitTestCaseRunner.cs:line 139
at Xunit.Sdk.TestCaseRunner`1.RunAsync() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestCaseRunner.cs:line 82
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestCaseRunner`1.RunAsync()
at Xunit.Sdk.XunitTestCase.RunAsync(IMessageSink diagnosticMessageSink, IMessageBus messageBus, Object[] constructorArguments, ExceptionAggregator aggregator, CancellationTokenSource cancellationTokenSource) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\XunitTestCase.cs:line 162
at Xunit.Sdk.XunitTestMethodRunner.RunTestCaseAsync(IXunitTestCase testCase) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\XunitTestMethodRunner.cs:line 45
at Xunit.Sdk.TestMethodRunner`1.RunTestCasesAsync() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestMethodRunner.cs:line 136
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestMethodRunner`1.RunTestCasesAsync()
at Xunit.Sdk.TestMethodRunner`1.RunAsync() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestMethodRunner.cs:line 106
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestMethodRunner`1.RunAsync()
at Xunit.Sdk.XunitTestClassRunner.RunTestMethodAsync(ITestMethod testMethod, IReflectionMethodInfo method, IEnumerable`1 testCases, Object[] constructorArguments) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\XunitTestClassRunner.cs:line 168
at Xunit.Sdk.TestClassRunner`1.RunTestMethodsAsync() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestClassRunner.cs:line 213
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestClassRunner`1.RunTestMethodsAsync()
at Xunit.Sdk.TestClassRunner`1.RunAsync() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestClassRunner.cs:line 171
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestClassRunner`1.RunAsync()
at Xunit.Sdk.XunitTestCollectionRunner.RunTestClassAsync(ITestClass testClass, IReflectionTypeInfo class, IEnumerable`1 testCases) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\XunitTestCollectionRunner.cs:line 158
at Xunit.Sdk.TestCollectionRunner`1.RunTestClassesAsync() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestCollectionRunner.cs:line 130
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestCollectionRunner`1.RunTestClassesAsync()
at Xunit.Sdk.TestCollectionRunner`1.RunAsync() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\TestCollectionRunner.cs:line 101
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine) in /_/src/System.Private.CoreLib/shared/System/Runtime/CompilerServices/AsyncMethodBuilder.cs:line 1015
at Xunit.Sdk.TestCollectionRunner`1.RunAsync()
at Xunit.Sdk.XunitTestAssemblyRunner.RunTestCollectionAsync(IMessageBus messageBus, ITestCollection testCollection, IEnumerable`1 testCases, CancellationTokenSource cancellationTokenSource) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\XunitTestAssemblyRunner.cs:line 235
at Xunit.Sdk.XunitTestAssemblyRunner.<>c__DisplayClass14_2.<RunTestCollectionsAsync>b__2() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Frameworks\Runners\XunitTestAssemblyRunner.cs:line 184
at System.Threading.Tasks.Task`1.InnerInvoke() in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Future.cs:line 518
at System.Threading.Tasks.Task.<>c.<.cctor>b__274_0(Object obj) in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs:line 2428
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) in /_/src/System.Private.CoreLib/shared/System/Threading/ExecutionContext.cs:line 172
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread) in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs:line 2385
at System.Threading.Tasks.Task.ExecuteEntry() in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs:line 2296
at System.Threading.Tasks.SynchronizationContextTaskScheduler.<>c.<.cctor>b__8_0(Object s) in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/TaskScheduler.cs:line 641
at Xunit.Sdk.MaxConcurrencySyncContext.RunOnSyncContext(SendOrPostCallback callback, Object state) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\MaxConcurrencySyncContext.cs:line 107
at Xunit.Sdk.MaxConcurrencySyncContext.<>c__DisplayClass11_0.<WorkerThreadProc>b__0(Object _) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\MaxConcurrencySyncContext.cs:line 96
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) in /_/src/System.Private.CoreLib/shared/System/Threading/ExecutionContext.cs:line 172
at Xunit.Sdk.ExecutionContextHelper.Run(Object context, Action`1 action) in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\Utility\ExecutionContextHelper.cs:line 111
at Xunit.Sdk.MaxConcurrencySyncContext.WorkerThreadProc() in C:\Dev\xunit\xunit\src\xunit.execution\Sdk\MaxConcurrencySyncContext.cs:line 89
at Xunit.Sdk.XunitWorkerThread.<>c.<QueueUserWorkItem>b__5_0(Object _) in C:\Dev\xunit\xunit\src\common\XunitWorkerThread.cs:line 37
at System.Threading.Tasks.Task.InnerInvoke() in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs:line 2449
at System.Threading.Tasks.Task.<>c.<.cctor>b__274_0(Object obj) in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs:line 2428
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) in /_/src/System.Private.CoreLib/shared/System/Threading/ExecutionContext.cs:line 172
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread) in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs:line 2385
at System.Threading.Tasks.Task.ExecuteEntryUnsafe(Thread threadPoolThread) in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/Task.cs:line 2327
at System.Threading.Tasks.ThreadPoolTaskScheduler.<>c.<.cctor>b__10_0(Object s) in /_/src/System.Private.CoreLib/shared/System/Threading/Tasks/ThreadPoolTaskScheduler.cs:line 37
at System.Threading.ThreadHelper.ThreadStart_Context(Object state) in /_/src/System.Private.CoreLib/src/System/Threading/Thread.CoreCLR.cs:line 50
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) in /_/src/System.Private.CoreLib/shared/System/Threading/ExecutionContext.cs:line 172
at System.Threading.ThreadHelper.ThreadStart(Object obj) in /_/src/System.Private.CoreLib/src/System/Threading/Thread.CoreCLR.cs:line 83
Unknown Chain Status: UnparseableExtension
/Users/dotnet/new_source/corefx/artifacts/bin/System.Security.Cryptography.X509Certificates.Tests/netcoreapp-OSX-Debug/RunTests.sh: line 161: 61473 Abort trap: 6 "$RUNTIME_PATH/dotnet" exec --runtimeconfig System.Security.Cryptography.X509Certificates.Tests.runtimeconfig.json xunit.console.dll System.Security.Cryptography.X509Certificates.Tests.dll -xml testResults.xml -nologo -notrait category=nonnetcoreapptests -notrait category=nonosxtests -notrait category=OuterLoop -notrait category=failing $RSP_FILE
~/new_source/corefx/src/System.Security.Cryptography.X509Certificates/tests
``` | non_main | mac catalina compatibility system security cryptography tests crashing system security cryptography tests are crashing without producing result starting system security cryptography tests parallel test collections on max threads warning unable to build chain to self signed root for signer null process terminated assertion failed returned unexpected error at internal cryptography pal sectrustchainpal parseresults chainhandle revocationmode in users dotnet new source corefx src system security cryptography src internal cryptography pal osx chainpal cs line at internal cryptography pal sectrustchainpal execute datetime verificationtime boolean allownetwork oidcollection applicationpolicy oidcollection certificatepolicy revocationflag in users dotnet new source corefx src system security cryptography src internal cryptography pal osx chainpal cs line at internal cryptography pal chainpal buildchain boolean usemachinecontext icertificatepal cert extrastore oidcollection applicationpolicy oidcollection certificatepolicy revocationmode revocationflag datetime verificationtime timespan timeout in users dotnet new source corefx src system security cryptography src internal cryptography pal osx chainpal cs line at system security cryptography build certificate boolean throwonexception in users dotnet new source corefx src system security cryptography src system security cryptography cs line at system security cryptography build certificate in users dotnet new source corefx src system security cryptography src system security cryptography cs line at system security cryptography tests dynamicchaintests testinvalidaia in users dotnet new source corefx src system security cryptography tests dynamicchaintests cs line at system runtimemethodhandle invokemethod object target object arguments signature sig boolean constructor boolean wrapexceptions at system reflection runtimemethodinfo invoke object obj bindingflags invokeattr binder binder object parameters 
cultureinfo culture in src system private corelib src system reflection runtimemethodinfo cs line at system reflection methodbase invoke object obj object parameters in src system private corelib shared system reflection methodbase cs line at xunit sdk testinvoker calltestmethod object testclassinstance in c dev xunit xunit src xunit execution sdk frameworks runners testinvoker cs line at xunit sdk testinvoker c b d movenext in c dev xunit xunit src xunit execution sdk frameworks runners testinvoker cs line at system runtime compilerservices asyncmethodbuildercore start tstatemachine statemachine in src system private corelib shared system runtime compilerservices asyncmethodbuilder cs line at xunit sdk testinvoker c b at xunit sdk executiontimer aggregateasync func asyncaction in c dev xunit xunit src xunit execution sdk frameworks executiontimer cs line at system runtime compilerservices asyncmethodbuildercore start tstatemachine statemachine in src system private corelib shared system runtime compilerservices asyncmethodbuilder cs line at xunit sdk executiontimer aggregateasync func asyncaction at xunit sdk testinvoker c b in c dev xunit xunit src xunit execution sdk frameworks runners testinvoker cs line at xunit sdk exceptionaggregator runasync func code in c dev xunit xunit src xunit core sdk exceptionaggregator cs line at system runtime compilerservices asyncmethodbuildercore start tstatemachine statemachine in src system private corelib shared system runtime compilerservices asyncmethodbuilder cs line at xunit sdk exceptionaggregator runasync func code at xunit sdk testinvoker invoketestmethodasync object testclassinstance in c dev xunit xunit src xunit execution sdk frameworks runners testinvoker cs line at system runtime compilerservices asyncmethodbuildercore start tstatemachine statemachine in src system private corelib shared system runtime compilerservices asyncmethodbuilder cs line at xunit sdk testinvoker invoketestmethodasync object 
testclassinstance at xunit sdk xunittestinvoker invoketestmethodasync object testclassinstance in c dev xunit xunit src xunit execution sdk frameworks runners xunittestinvoker cs line at xunit sdk testinvoker b in c dev xunit xunit src xunit execution sdk frameworks runners testinvoker cs line at system runtime compilerservices asyncmethodbuildercore start tstatemachine statemachine in src system private corelib shared system runtime compilerservices asyncmethodbuilder cs line at xunit sdk testinvoker b at xunit sdk exceptionaggregator runasync func code in c dev xunit xunit src xunit core sdk exceptionaggregator cs line at system runtime compilerservices asyncmethodbuildercore start tstatemachine statemachine in src system private corelib shared system runtime compilerservices asyncmethodbuilder cs line at xunit sdk exceptionaggregator runasync func code at xunit sdk testinvoker runasync in c dev xunit xunit src xunit execution sdk frameworks runners testinvoker cs line at xunit sdk xunittestrunner invoketestmethodasync exceptionaggregator aggregator in c dev xunit xunit src xunit execution sdk frameworks runners xunittestrunner cs line at xunit sdk xunittestrunner invoketestasync exceptionaggregator aggregator in c dev xunit xunit src xunit execution sdk frameworks runners xunittestrunner cs line at system runtime compilerservices asyncmethodbuildercore start tstatemachine statemachine in src system private corelib shared system runtime compilerservices asyncmethodbuilder cs line at xunit sdk xunittestrunner invoketestasync exceptionaggregator aggregator at xunit sdk testrunner c b in c dev xunit xunit src xunit execution sdk frameworks runners testrunner cs line at xunit sdk exceptionaggregator runasync func code in c dev xunit xunit src xunit core sdk exceptionaggregator cs line at system runtime compilerservices asyncmethodbuildercore start tstatemachine statemachine in src system private corelib shared system runtime compilerservices asyncmethodbuilder cs 
line at xunit sdk exceptionaggregator runasync func code at xunit sdk testrunner runasync in c dev xunit xunit src xunit execution sdk frameworks runners testrunner cs line at system runtime compilerservices asyncmethodbuildercore start tstatemachine statemachine in src system private corelib shared system runtime compilerservices asyncmethodbuilder cs line at xunit sdk testrunner runasync at xunit sdk xunittestcaserunner runtestasync in c dev xunit xunit src xunit execution sdk frameworks runners xunittestcaserunner cs line at xunit sdk testcaserunner runasync in c dev xunit xunit src xunit execution sdk frameworks runners testcaserunner cs line at system runtime compilerservices asyncmethodbuildercore start tstatemachine statemachine in src system private corelib shared system runtime compilerservices asyncmethodbuilder cs line at xunit sdk testcaserunner runasync at xunit sdk xunittestcase runasync imessagesink diagnosticmessagesink imessagebus messagebus object constructorarguments exceptionaggregator aggregator cancellationtokensource cancellationtokensource in c dev xunit xunit src xunit execution sdk frameworks xunittestcase cs line at xunit sdk xunittestmethodrunner runtestcaseasync ixunittestcase testcase in c dev xunit xunit src xunit execution sdk frameworks runners xunittestmethodrunner cs line at xunit sdk testmethodrunner runtestcasesasync in c dev xunit xunit src xunit execution sdk frameworks runners testmethodrunner cs line at system runtime compilerservices asyncmethodbuildercore start tstatemachine statemachine in src system private corelib shared system runtime compilerservices asyncmethodbuilder cs line at xunit sdk testmethodrunner runtestcasesasync at xunit sdk testmethodrunner runasync in c dev xunit xunit src xunit execution sdk frameworks runners testmethodrunner cs line at system runtime compilerservices asyncmethodbuildercore start tstatemachine statemachine in src system private corelib shared system runtime compilerservices 
asyncmethodbuilder cs line at xunit sdk testmethodrunner runasync at xunit sdk xunittestclassrunner runtestmethodasync itestmethod testmethod ireflectionmethodinfo method ienumerable testcases object constructorarguments in c dev xunit xunit src xunit execution sdk frameworks runners xunittestclassrunner cs line at xunit sdk testclassrunner runtestmethodsasync in c dev xunit xunit src xunit execution sdk frameworks runners testclassrunner cs line at system runtime compilerservices asyncmethodbuildercore start tstatemachine statemachine in src system private corelib shared system runtime compilerservices asyncmethodbuilder cs line at xunit sdk testclassrunner runtestmethodsasync at xunit sdk testclassrunner runasync in c dev xunit xunit src xunit execution sdk frameworks runners testclassrunner cs line at system runtime compilerservices asyncmethodbuildercore start tstatemachine statemachine in src system private corelib shared system runtime compilerservices asyncmethodbuilder cs line at xunit sdk testclassrunner runasync at xunit sdk xunittestcollectionrunner runtestclassasync itestclass testclass ireflectiontypeinfo class ienumerable testcases in c dev xunit xunit src xunit execution sdk frameworks runners xunittestcollectionrunner cs line at xunit sdk testcollectionrunner runtestclassesasync in c dev xunit xunit src xunit execution sdk frameworks runners testcollectionrunner cs line at system runtime compilerservices asyncmethodbuildercore start tstatemachine statemachine in src system private corelib shared system runtime compilerservices asyncmethodbuilder cs line at xunit sdk testcollectionrunner runtestclassesasync at xunit sdk testcollectionrunner runasync in c dev xunit xunit src xunit execution sdk frameworks runners testcollectionrunner cs line at system runtime compilerservices asyncmethodbuildercore start tstatemachine statemachine in src system private corelib shared system runtime compilerservices asyncmethodbuilder cs line at xunit sdk 
testcollectionrunner runasync at xunit sdk xunittestassemblyrunner runtestcollectionasync imessagebus messagebus itestcollection testcollection ienumerable testcases cancellationtokensource cancellationtokensource in c dev xunit xunit src xunit execution sdk frameworks runners xunittestassemblyrunner cs line at xunit sdk xunittestassemblyrunner c b in c dev xunit xunit src xunit execution sdk frameworks runners xunittestassemblyrunner cs line at system threading tasks task innerinvoke in src system private corelib shared system threading tasks future cs line at system threading tasks task c b object obj in src system private corelib shared system threading tasks task cs line at system threading executioncontext runinternal executioncontext executioncontext contextcallback callback object state in src system private corelib shared system threading executioncontext cs line at system threading tasks task executewiththreadlocal task currenttaskslot thread threadpoolthread in src system private corelib shared system threading tasks task cs line at system threading tasks task executeentry in src system private corelib shared system threading tasks task cs line at system threading tasks synchronizationcontexttaskscheduler c b object s in src system private corelib shared system threading tasks taskscheduler cs line at xunit sdk maxconcurrencysynccontext runonsynccontext sendorpostcallback callback object state in c dev xunit xunit src xunit execution sdk maxconcurrencysynccontext cs line at xunit sdk maxconcurrencysynccontext c b object in c dev xunit xunit src xunit execution sdk maxconcurrencysynccontext cs line at system threading executioncontext runinternal executioncontext executioncontext contextcallback callback object state in src system private corelib shared system threading executioncontext cs line at xunit sdk executioncontexthelper run object context action action in c dev xunit xunit src xunit execution sdk utility executioncontexthelper cs line at xunit 
sdk maxconcurrencysynccontext workerthreadproc in c dev xunit xunit src xunit execution sdk maxconcurrencysynccontext cs line at xunit sdk xunitworkerthread c b object in c dev xunit xunit src common xunitworkerthread cs line at system threading tasks task innerinvoke in src system private corelib shared system threading tasks task cs line at system threading tasks task c b object obj in src system private corelib shared system threading tasks task cs line at system threading executioncontext runinternal executioncontext executioncontext contextcallback callback object state in src system private corelib shared system threading executioncontext cs line at system threading tasks task executewiththreadlocal task currenttaskslot thread threadpoolthread in src system private corelib shared system threading tasks task cs line at system threading tasks task executeentryunsafe thread threadpoolthread in src system private corelib shared system threading tasks task cs line at system threading tasks threadpooltaskscheduler c b object s in src system private corelib shared system threading tasks threadpooltaskscheduler cs line at system threading threadhelper threadstart context object state in src system private corelib src system threading thread coreclr cs line at system threading executioncontext runinternal executioncontext executioncontext contextcallback callback object state in src system private corelib shared system threading executioncontext cs line at system threading threadhelper threadstart object obj in src system private corelib src system threading thread coreclr cs line unknown chain status unparseableextension users dotnet new source corefx artifacts bin system security cryptography tests netcoreapp osx debug runtests sh line abort trap runtime path dotnet exec runtimeconfig system security cryptography tests runtimeconfig json xunit console dll system security cryptography tests dll xml testresults xml nologo notrait category nonnetcoreapptests notrait 
category nonosxtests notrait category outerloop notrait category failing rsp file new source corefx src system security cryptography tests | 0 |
163 | 2,700,986,434 | IssuesEvent | 2015-04-04 19:52:38 | tgstation/-tg-station | https://api.github.com/repos/tgstation/-tg-station | closed | Chemistry needs to be updated | Feature Request Maintainability - Hinders improvements Not a bug | https://github.com/EuroNumbers/-tg-station-newchemistry
As we all know this was done a while ago now and we still don't have any clue how much is actually done, what needs improving, etc.
Food also needs a rework and an overhaul, of which @Rosenmann has made a simple comment on some parts.
And there was a conversation today about how the fluid storage and all of that with people should be changed to be less magicy.
There was also a mention today about how reagents is bloated and some of it is redundant.
(only a mention, not confirmed and not quoted)
There are three ways to remove all reagents from something. @MrPerson.
There are other parts that are missing from this which I believe people can easily add in, since I am not entirely focused on this right now (as in, reporting it). Just edit and update this when required, Maintainers.
| True | Chemistry needs to be updated - https://github.com/EuroNumbers/-tg-station-newchemistry
As we all know this was done a while ago now and we still don't have any clue how much is actually done, what needs improving, etc.
Food also needs a rework and an overhaul, of which @Rosenmann has made a simple comment on some parts.
And there was a conversation today about how the fluid storage and all of that with people should be changed to be less magicy.
There was also a mention today about how reagents is bloated and some of it is redundant.
(only a mention, not confirmed and not quoted)
There are three ways to remove all reagents from something. @MrPerson.
There are other parts that are missing from this which I believe people can easily add in, since I am not entirely focused on this right now (as in, reporting it). Just edit and update this when required, Maintainers.
| main | chemistry needs to be updated as we all know this was done a while ago now and we still dont have any clue how much is actually done what needs improving etc etc food also needs a rework and a overhaul of which rosenmann has made a simple comment of some parts and a conversation today about how the fluid storage and all of that with people should be changed to be less magicy there was also a mention today about how reagents is bloated and some of it is redundant only a mention not confirmed and not quoted there are three ways to remove all reagents from something mrperson there are other parts that are missing from this which i believe people can easily add in since i am not entirely focued on this right now as in reporting it just edit and update this when required maintainers | 1 |
44,797 | 5,888,871,578 | IssuesEvent | 2017-05-17 11:22:05 | Automattic/wp-calypso | https://api.github.com/repos/Automattic/wp-calypso | opened | Login: 2FA via SMS: using same terminology to refer to SMS code and recovery code | Login [Status] Needs Copy Review [Status] Needs Design Review [Type] Question | Steps to reproduce:
- choose an account that has 2FA via SMS enabled
- log in to that account in the WordPress mobile app
- visit `/login`, enter your credentials and click Log In
- you'll be redirected to `/login/sms` (should be `/login/push`, reported in #14152)
- click the link "The WordPress mobile app"
- you'll be redirected to `/login/push` where your alternative for log in is "Code via text message"
In the case of 2FA enabled by an authenticator app, in `/login/push` you'll see "An Authenticator app" and "Code via text message". For 2FA via SMS, we only have "Code via text message" (so no way to return to SMS auth).
As mentioned in #14158, "Code via text message" refers to recovery codes. In practice, the end result of that and of authenticating via SMS two-factor is the same (the user receives an SMS with a code), but those are different things.
For example: use one phone number to enable 2FA and a different number under Me > Security > Account Recovery > Recovery SMS Number. When, in the case above, the user clicks "Code via text message", they'll receive a code sent to the number they used to set up 2FA instead of to the number they used as a recovery number. Maybe it's worth having both options displayed, or at least changing the copy of that alternative so it's clear we're referring to a 2FA code, not a recovery one. | 1.0 | Login: 2FA via SMS: using same terminology to refer to SMS code and recovery code - Steps to reproduce:
- choose an account that has 2FA via SMS enabled
- log in to that account in the WordPress mobile app
- visit `/login`, enter your credentials and click Log In
- you'll be redirected to `/login/sms` (should be `/login/push`, reported in #14152)
- click the link "The WordPress mobile app"
- you'll be redirected to `/login/push` where your alternative for log in is "Code via text message"
In the case of 2FA enabled by an authenticator app, in `/login/push` you'll see "An Authenticator app" and "Code via text message". For 2FA via SMS, we only have "Code via text message" (so no way to return to SMS auth).
As mentioned in #14158, "Code via text message" refers to recovery codes. In practice, the end result of that and of authenticating via SMS two-factor is the same (the user receives an SMS with a code), but those are different things.
For example: use one phone number to enable 2FA and a different number under Me > Security > Account Recovery > Recovery SMS Number. When in the case above, the user clicks "Code via text message", they'll receive a code for the number they used to set up 2FA instead of to the number they used as a recovery number. Maybe it's worth having both options displayed, or at least change the copy of that alternative so it's clear we're referring to a 2FA code, not a recovery one. | non_main | login via sms using same terminology to refer to sms code and recovery code steps to reproduce choose an account that has via sms enabled log in to that account in the wordpress mobile app visit login enter your credentials and click log in you ll be redirected to login sms should be login push reported in click the link the wordpress mobile app you ll be redirected to login push where your alternative for log in is code via text message in the case of enabled by an authenticator app in login push you ll see an authenticator app and code via text message for via sms we only have code via text message so no way to return to sms auth as mentioned in code via text message refers to recovery codes in practice the end result of that and of authenticating via sms two factor is the same the user received an sms with a code but those are different things for example use one phone number to enable and a different number under me security account recovery recovery sms number when in the case above the user clicks code via text message they ll receive a code for the number they used to set up instead of to the number they used as a recovery number maybe it s worth having both options displayed or at least change the copy of that alternative so it s clear we re referring to a code not a recovery one | 0 |
2,832 | 10,163,592,794 | IssuesEvent | 2019-08-07 09:37:36 | filebrowser/filebrowser | https://api.github.com/repos/filebrowser/filebrowser | closed | filebrowser is searching for Maintainers! | Maintainers Wanted | *This issue was created by [Maintainers Wanted](https://maintainerswanted.com)* :nerd_face:
*Support us by leaving a star on [Github](https://github.com/flxwu/maintainerswanted.com)!* :star2:
## filebrowser is searching for new Maintainers! :man_technologist: :mailbox_with_mail:
Do you use filebrowser personally or at work and would like this project to be further developed and improved?
Or are you already a contributor and ready to take the next step to becoming a maintainer?
If you are interested, comment here below on this issue! :point_down::raised_hands: | True | filebrowser is searching for Maintainers! - *This issue was created by [Maintainers Wanted](https://maintainerswanted.com)* :nerd_face:
*Support us by leaving a star on [Github](https://github.com/flxwu/maintainerswanted.com)!* :star2:
## filebrowser is searching for new Maintainers! :man_technologist: :mailbox_with_mail:
Do you use filebrowser personally or at work and would like this project to be further developed and improved?
Or are you already a contributor and ready to take the next step to becoming a maintainer?
If you are interested, comment here below on this issue! :point_down::raised_hands: | main | filebrowser is searching for maintainers this issue was created by nerd face support us by leaving a star on filebrowser is searching for new maintainers man technologist mailbox with mail do you use filebrowser personally or at work and would like this project to be further developed and improved or are you already a contributor and ready to take the next step to becoming a maintainer if you are interested comment here below on this issue point down raised hands | 1 |
5,269 | 26,634,922,595 | IssuesEvent | 2023-01-24 21:02:26 | coq-community/manifesto | https://api.github.com/repos/coq-community/manifesto | closed | Volunteer co-maintainer needed for Docker-Coq | maintainer-wanted | The Coq Team and Coq-community are looking for a volunteer co-maintainer of the [Docker-Coq](https://github.com/coq-community/docker-coq) project, which provides [Docker container](https://www.docker.com/resources/what-container/) images of many versions of Coq as a service to Coq users.
Docker-Coq is an open source project on GitHub under the BSD-3-Clause license. It maintains definitions of a set of Docker images that provide a basic Coq environment for continuous integration and local use. Thanks to the [docker-keeper](https://gitlab.com/erikmd/docker-keeper) software, images built from Docker-Coq definitions are continuously deployed to the [public Docker registry](https://hub.docker.com/r/coqorg/coq), where users can pull them without worry of rate limitations.
Desirable skills:
- Familiarity with container software like Docker
- Familiarity with software continuous integration and build automation, in particular on GitHub and/or GitLab
- Basic shell scripting
- Basic Python programming
Maintainer core tasks:
- Create and deploy new Coq Docker definitions to the Docker registry after Coq (pre-)releases.
- Monitor the use of Coq Docker images for continuous integration on GitHub/GitLab and rebuild images when necessary.
- Work with the current Docker-Coq and Docker-Keeper [maintainer](https://github.com/erikmd) to further develop and automate the toolchain.
During their tenure, a maintainer will be considered part of the [Coq Team](https://coq.inria.fr/coq-team.html) and credited for their work in release notes for Coq releases, for example on [Zenodo](https://doi.org/10.5281/zenodo.1003420).
Please respond to this GitHub issue with your motivation, and a short summary of relevant experience, for becoming a Docker-Coq maintainer. The maintainer will be selected from the issue responders by the Coq Team and Coq-community owners. | True | Volunteer co-maintainer needed for Docker-Coq - The Coq Team and Coq-community are looking for a volunteer co-maintainer of the [Docker-Coq](https://github.com/coq-community/docker-coq) project, which provides [Docker container](https://www.docker.com/resources/what-container/) images of many versions of Coq as a service to Coq users.
Docker-Coq is an open source project on GitHub under the BSD-3-Clause license. It maintains definitions of a set of Docker images that provide a basic Coq environment for continuous integration and local use. Thanks to the [docker-keeper](https://gitlab.com/erikmd/docker-keeper) software, images built from Docker-Coq definitions are continuously deployed to the [public Docker registry](https://hub.docker.com/r/coqorg/coq), where users can pull them without worry of rate limitations.
Desirable skills:
- Familiarity with container software like Docker
- Familiarity with software continuous integration and build automation, in particular on GitHub and/or GitLab
- Basic shell scripting
- Basic Python programming
Maintainer core tasks:
- Create and deploy new Coq Docker definitions to the Docker registry after Coq (pre-)releases.
- Monitor the use of Coq Docker images for continuous integration on GitHub/GitLab and rebuild images when necessary.
- Work with the current Docker-Coq and Docker-Keeper [maintainer](https://github.com/erikmd) to further develop and automate the toolchain.
During their tenure, a maintainer will be considered part of the [Coq Team](https://coq.inria.fr/coq-team.html) and credited for their work in release notes for Coq releases, for example on [Zenodo](https://doi.org/10.5281/zenodo.1003420).
Please respond to this GitHub issue with your motivation, and a short summary of relevant experience, for becoming a Docker-Coq maintainer. The maintainer will be selected from the issue responders by the Coq Team and Coq-community owners. | main | volunteer co maintainer needed for docker coq the coq team and coq community are looking for a volunteer co maintainer of the project which provides images of many versions of coq as a service to coq users docker coq is an open source project on github under the bsd clause license it maintains definitions of a set of docker images that provide a basic coq environment for continuous integration and local use thanks to the software images built from docker coq definitions are continuously deployed to the where users can pull them without worry of rate limitations desirable skills familiarity with container software like docker familiarity with software continuous integration and build automation in particular on github and or gitlab basic shell scripting basic python programming maintainer core tasks create and deploy new coq docker definitions to the docker registry after coq pre releases monitor the use of coq docker images for continuous integration on github gitlab and rebuild images when necessary work with the current docker coq and docker keeper to further develop and automate the toolchain during their tenure a maintainer will be considered part of the and credited for their work in release notes for coq releases for example on please respond to this github issue with your motivation and a short summary of relevant experience for becoming a docker coq maintainer the maintainer will be selected from the issue responders by the coq team and coq community owners | 1 |
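For readers unfamiliar with the pull workflow the Docker-Coq issue above refers to, fetching one of the published images looks like the following; the version tag is an illustrative assumption (available tags are listed on the linked Docker Hub page):

```shell
# Helper composing the registry reference from the repository named in the
# issue (coqorg/coq); the version tag passed in is an assumption.
coq_image() { printf 'coqorg/coq:%s\n' "$1"; }

# Typical use (not executed here; requires Docker and network access):
#   docker pull "$(coq_image 8.16)"
#   docker run --rm "$(coq_image 8.16)" coqc --version
```

Because the images are served from the public registry without rate-limit worries, CI jobs can pull a pinned tag like this on every run.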
2,960 | 10,616,895,996 | IssuesEvent | 2019-10-12 15:11:24 | arcticicestudio/snowsaw | https://api.github.com/repos/arcticicestudio/snowsaw | opened | Development dependency global installation workaround | context-workflow scope-compatibility scope-maintainability type-improvement | The workaround implemented in #82 (PR #85) works, but due to the explicitly disabled _module_ mode it is not possible to define pinned dependency versions; only the normal `go get` behavior of building the repository's default branch can be used.
A better workaround is to run the `go get` command for development & build dependencies/packages outside of the project's root directory. That way the `go.mod` file is not in scope for the `go get` command and is not updated. In order to use pinned versions, the `GO111MODULE=on` environment variable must be explicitly set when running the `go get` command.
See https://github.com/golang/go/issues/30515 for more details and proposed solutions that might be added to Go's build tools in future versions. | True | Development dependency global installation workaround - The workaround implemented in #82 (PR #85) works, but due to the explicitly disabled _module_ mode it is not possible to define pinned dependency versions; only the normal `go get` behavior of building the repository's default branch can be used.
A better workaround is to run the `go get` command for development & build dependencies/packages outside of the project's root directory. That way the `go.mod` file is not in scope for the `go get` command and is not updated. In order to use pinned versions, the `GO111MODULE=on` environment variable must be explicitly set when running the `go get` command.
See https://github.com/golang/go/issues/30515 for more details and proposed solutions that might be added to Go's build tools in future versions. | main | development dependency global installation workaround the workaround implemented in pr works but due to the explicitly disabled module mode it is not possible to define pinned dependency versions but only using the normal go get behavior to build the repositories default branch a better workaround is to run the go get command for development build dependencies packages outside of the project s root directory therefore the go mod file is not in scope for the go get command and is therefore not updated in order to use pinned versions the on environment variable must be explicitly set when running the go get command see for more details and proposed solutions that might be added to go s build tools in future versions | 1 |
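The `go get`-outside-the-module-root workaround described in the snowsaw issue above can be sketched as a small shell helper; the function name and the pinned tool argument are illustrative assumptions, not part of snowsaw:

```shell
# Sketch of the described workaround: run `go get` from a throwaway directory
# outside the module root, with module mode forced on. The project's go.mod is
# then never in scope, so it is not rewritten, while pinned versions still resolve.
install_dev_tool() {
  tool="$1"                        # e.g. a pinned "path@version" spec (illustrative)
  tmpdir=$(mktemp -d) || return 1
  (cd "$tmpdir" && GO111MODULE=on go get "$tool")
  status=$?
  rm -rf "$tmpdir"
  return $status
}
```

With pre-1.16 Go toolchains (the era of this issue), this installs the tool's binary into `GOBIN`/`GOPATH/bin` without touching the project's `go.mod`.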
518,265 | 15,026,401,929 | IssuesEvent | 2021-02-01 22:37:02 | internetarchive/openlibrary | https://api.github.com/repos/internetarchive/openlibrary | closed | Uneditable bad date field in author record | Affects: Data Lead: @cdrini Priority: 3 Type: Bug | The author record at the URL below has a `date` key containing the string "(Dmitriĭ Nikolaevich)" which is rendered for display under the author's name, but not editable anywhere.
### Relevant url?
https://openlibrary.org/authors/OL1285518A
https://openlibrary.org/authors/OL1285518A.json
### Steps to Reproduce
1. Click the URL
2. Click the "Modifier" button
* Actual: The field isn't available anywhere for editing
* Expected: The field should either: a) not be rendered or b) be editable.
### Proposal
* Remove the `date` field from the display
* Move contents of any existing `date` fields to birth date and death date | 1.0 | Uneditable bad date field in author record - The author record at the URL below has a `date` key containing the string "(Dmitriĭ Nikolaevich)" which is rendered for display under the author's name, but not editable anywhere.
### Relevant url?
https://openlibrary.org/authors/OL1285518A
https://openlibrary.org/authors/OL1285518A.json
### Steps to Reproduce
1. Click the URL
2. Click the "Modifier" button
* Actual: The field isn't available anywhere for editing
* Expected: The field should either: a) not be rendered or b) be editable.
### Proposal
* Remove the `date` field from the display
* Move contents of any existing `date` fields to birth date and death date | non_main | uneditable bad date field in author record the author record at the url below has a date key containing the string dmitriĭ nikolaevich which is rendered for display under the author s name but not editable anywhere relevant url steps to reproduce click the url click the modifier button actual the field isn t available anywhere for editing expected the field should either a not be rendered or b be editable proposal remove the date field from the display move contents of any existing date fields to birth date and death date | 0 |
259,962 | 22,579,589,196 | IssuesEvent | 2022-06-28 10:22:14 | Narcis-Samec/changemyjob | https://api.github.com/repos/Narcis-Samec/changemyjob | closed | Testing - BE | testing | Implement testing & tests & testing environment for BE.
- [x] #10
- [x] #11
- [x] #6 | 1.0 | Testing - BE - Implement testing & tests & testing environment for BE.
- [x] #10
- [x] #11
- [x] #6 | non_main | testing be implement testing tests testing environment for be | 0 |
243 | 2,971,739,357 | IssuesEvent | 2015-07-14 09:10:01 | acl2/acl2 | https://api.github.com/repos/acl2/acl2 | closed | must-fail doesn't catch hard errors | Difficulty: Medium enhancement Maintainability | I think it would be nice if the following worked:
```
(include-book "misc/eval" :dir :system)
(encapsulate ()
(acl2::must-fail (value-triple (er hard? 'failure "this seems like a failure")))
(defun f (x) x))
(f 3) ;; Desired result: 3. Actual result: F is not defined
```
This would allow me to have quick tests that, e.g., extralogical run-time arity checking in SV is working correctly. I imagine this is hard, because it would require `must-fail` to be able to catch/trap hard errors somehow. This probably can't be done by user-level utilities. | True | must-fail doesn't catch hard errors - I think it would be nice if the following worked:
```
(include-book "misc/eval" :dir :system)
(encapsulate ()
(acl2::must-fail (value-triple (er hard? 'failure "this seems like a failure")))
(defun f (x) x))
(f 3) ;; Desired result: 3. Actual result: F is not defined
```
This would allow me to have quick tests that, e.g., extralogical run-time arity checking in SV is working correctly. I imagine this is hard, because it would require `must-fail` to be able to catch/trap hard errors somehow. This probably can't be done by user-level utilities. | main | must fail doesn t catch hard errors i think it would be nice if the following worked include book misc eval dir system encapsulate must fail value triple er hard failure this seems like a failure defun f x x f desired result actual result f is not defined this would allow me to have quick tests that e g extralogical run time arity checking in sv is working correctly i imagine this is hard because it would require must fail to be able to catch trap hard errors somehow this probably can t be done by user level utilities | 1 |
72,522 | 31,768,924,882 | IssuesEvent | 2023-09-12 10:28:56 | gauravrs18/issue_onboarding | https://api.github.com/repos/gauravrs18/issue_onboarding | closed | dev-angular-integration-account-services-new-connection-component-activate-component
-consumer-details-component
-connect-component
-meter-option-component | CX-account-services | dev-angular-integration-account-services-new-connection-component-activate-component
-consumer-details-component
-connect-component
-meter-option-component | 1.0 | dev-angular-integration-account-services-new-connection-component-activate-component
-consumer-details-component
-connect-component
-meter-option-component - dev-angular-integration-account-services-new-connection-component-activate-component
-consumer-details-component
-connect-component
-meter-option-component | non_main | dev angular integration account services new connection component activate component consumer details component connect component meter option component dev angular integration account services new connection component activate component consumer details component connect component meter option component | 0 |
5,112 | 26,038,227,006 | IssuesEvent | 2022-12-22 07:59:42 | camunda/zeebe | https://api.github.com/repos/camunda/zeebe | closed | Separate embedded from standalone gateway configuration | kind/toil scope/broker good first issue blocker/info area/maintainability | With the closer integration of the embedded gateway into the broker the configuration for the embedded gateway has a lot of parameters which are not respected. We should split them apart so they are focused on the actual use case. | True | Separate embedded from standalone gateway configuration - With the closer integration of the embedded gateway into the broker the configuration for the embedded gateway has a lot of parameters which are not respected. We should split them apart so they are focused on the actual use case. | main | separate embedded from standalone gateway configuration with the closer integration of the embedded gateway into the broker the configuration for the embedded gateway has a lot of parameters which are not respected we should split them apart so they are focused on the actual use case | 1 |
3,598 | 14,537,131,592 | IssuesEvent | 2020-12-15 08:43:56 | coq-community/manifesto | https://api.github.com/repos/coq-community/manifesto | opened | Change maintainer of ATBR | change-maintainer | **Project name and URL:** https://github.com/coq-community/atbr
**Current maintainer:** @tchajed
**Status:** unmaintained
**New maintainer:** looking for a volunteer
@tchajed has indicated he wants to step down as maintainer of ATBR due to other commitments. Until a new maintainer is found, other coq-community members can collaborate to do basic project maintenance. The project is likely to work at least through 8.13 with only minor changes required.
| True | Change maintainer of ATBR - **Project name and URL:** https://github.com/coq-community/atbr
**Current maintainer:** @tchajed
**Status:** unmaintained
**New maintainer:** looking for a volunteer
@tchajed has indicated he wants to step down as maintainer of ATBR due to other commitments. Until a new maintainer is found, other coq-community members can collaborate to do basic project maintenance. The project is likely to work at least through 8.13 with only minor changes required.
| main | change maintainer of atbr project name and url current maintainer tchajed status unmaintained new maintainer looking for a volunteer tchajed has indicated he wants to step down as maintainer of atbr due to other commitments until a new maintainer is found other coq community members can collaborate to do basic project maintenance the project is likely to work at least through with only minor changes required | 1 |
4,334 | 21,786,652,479 | IssuesEvent | 2022-05-14 08:29:22 | Numble-challenge-Team/client | https://api.github.com/repos/Numble-challenge-Team/client | closed | Implement Refresh Token issuance | feat maintain | ### ISSUE
- Type: feature
- Page: -
### TODO
- [x] Reissue the Access Token via the Refresh Token when the Access Token expires | True | Implement Refresh Token issuance - ### ISSUE
- Type: feature
- Page: -
### TODO
- [x] Reissue the Access Token via the Refresh Token when the Access Token expires | main | implement refresh token issuance issue type feature page todo reissue the access token via the refresh token when the access token expires | 1
2,813 | 10,063,518,400 | IssuesEvent | 2019-07-23 06:09:48 | diofant/diofant | https://api.github.com/repos/diofant/diofant | opened | Throw away some modules | maintainability needs decision | Probably, some less important stuff (diffgeom, geometry, vector, stats, calculus) should be maintained separately, as independent packages, which will require diofant.
| True | Throw away some modules - Probably, some less important stuff (diffgeom, geometry, vector, stats, calculus) should be maintained separately, as independent packages, which will require diofant.
| main | throw away some modules probably some less important stuff diffgeom geometry vector stats calculus should be maintained separately as independent packages which will require diofant | 1 |
819 | 4,441,933,248 | IssuesEvent | 2016-08-19 11:23:45 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | support plain gzip in unarchive | feature_idea waiting_on_maintainer | ##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
unarchive module
##### ANSIBLE VERSION
N/A
##### SUMMARY
If an administrator needs to distribute a single file in compressed form, she may use gzip but would have no reason to use tar since there are not multiple files involved. unarchive claims to support gzip archives but in reality only supports .tar.gz or .tgz archives. Either the documentation or the code needs to be updated.
| True | support plain gzip in unarchive - ##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
unarchive module
##### ANSIBLE VERSION
N/A
##### SUMMARY
If an administrator needs to distribute a single file in compressed form, she may use gzip but would have no reason to use tar since there are not multiple files involved. unarchive claims to support gzip archives but in reality only supports .tar.gz or .tgz archives. Either the documentation or the code needs to be updated.
| main | support plain gzip in unarchive issue type feature idea component name unarchive module ansible version n a summary if an administrator needs to distribute a single file in compressed form she may use gzip but would have no reason to use tar since there are not multiple files involved unarchive claims to support gzip archives but in reality only supports tar gz or tgz archives either the documentation or the code needs to be updated | 1 |
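The gap described in the unarchive issue above amounts to a missing branch between real tar archives and single-file gzip streams. A minimal shell sketch of that branch (an illustration, not the module's actual implementation):

```shell
# Distinguish a real tar.gz archive from a plain gzip-compressed single file:
# `tar -t` succeeds only on a genuine tar stream, so fall back to gunzip otherwise.
extract_gz() {
  src="$1"; dest="$2"
  if tar -tzf "$src" >/dev/null 2>&1; then
    tar -xzf "$src" -C "$dest"                           # .tar.gz / .tgz archive
  else
    gunzip -c "$src" > "$dest/$(basename "${src%.gz}")"  # plain gzip: one file
  fi
}
```

Since `tar -t` lists the archive without extracting, the check is cheap and makes the plain-`.gz` case fall through to `gunzip` naturally.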
48,658 | 13,392,357,830 | IssuesEvent | 2020-09-03 01:07:50 | jgeraigery/argo | https://api.github.com/repos/jgeraigery/argo | opened | CVE-2020-8203 (High) detected in lodash-4.17.15.tgz | security vulnerability | ## CVE-2020-8203 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/argo/ui/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/argo/ui/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- core-7.9.0.tgz (Root Library)
- :x: **lodash-4.17.15.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution attack when using _.zipObjectDeep in lodash <= 4.17.15.
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203>CVE-2020-8203</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1523">https://www.npmjs.com/advisories/1523</a></p>
<p>Release Date: 2020-07-23</p>
<p>Fix Resolution: lodash - 4.17.19</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.17.15","isTransitiveDependency":true,"dependencyTree":"@babel/core:7.9.0;lodash:4.17.15","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"}],"vulnerabilityIdentifier":"CVE-2020-8203","vulnerabilityDetails":"Prototype pollution attack when using _.zipObjectDeep in lodash \u003c\u003d 4.17.15.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203","cvss3Severity":"high","cvss3Score":"7.4","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-8203 (High) detected in lodash-4.17.15.tgz - ## CVE-2020-8203 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/argo/ui/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/argo/ui/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- core-7.9.0.tgz (Root Library)
- :x: **lodash-4.17.15.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution attack when using _.zipObjectDeep in lodash <= 4.17.15.
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203>CVE-2020-8203</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1523">https://www.npmjs.com/advisories/1523</a></p>
<p>Release Date: 2020-07-23</p>
<p>Fix Resolution: lodash - 4.17.19</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.17.15","isTransitiveDependency":true,"dependencyTree":"@babel/core:7.9.0;lodash:4.17.15","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"}],"vulnerabilityIdentifier":"CVE-2020-8203","vulnerabilityDetails":"Prototype pollution attack when using _.zipObjectDeep in lodash \u003c\u003d 4.17.15.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203","cvss3Severity":"high","cvss3Score":"7.4","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_main | cve high detected in lodash tgz cve high severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file tmp ws scm argo ui package json path to vulnerable library tmp ws scm argo ui node modules lodash package json dependency hierarchy core tgz root library x lodash tgz vulnerable library vulnerability details prototype pollution attack when using zipobjectdeep in lodash publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails prototype pollution attack when using zipobjectdeep in lodash vulnerabilityurl | 0 |
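The advisory above gives an affected range of lodash below the fixed release 4.17.19. As an illustration only (it assumes GNU `sort -V` and is not part of WhiteSource's tooling), that range check can be reproduced in shell:

```shell
# Exit 0 when the given lodash version is below the fixed release 4.17.19
# named in the advisory, i.e. still affected by CVE-2020-8203.
lodash_vulnerable() {
  fixed="4.17.19"
  [ "$1" = "$fixed" ] && return 1
  # version-aware sort: if the given version sorts first, it is older than the fix
  lowest=$(printf '%s\n%s\n' "$1" "$fixed" | sort -V | head -n1)
  [ "$lowest" = "$1" ]
}
```

In practice, npm users would simply upgrade to the fixed release, e.g. `npm install lodash@4.17.19`.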
2,765 | 9,873,131,340 | IssuesEvent | 2019-06-22 11:36:08 | arcticicestudio/snowsaw | https://api.github.com/repos/arcticicestudio/snowsaw | opened | Initial repository clean up for Go rewrite | context-api context-workflow scope-maintainability type-task | <p align="center"><img src="https://user-images.githubusercontent.com/7836623/59963238-8838da80-94f0-11e9-8857-4322932bdb9c.png" width="20%" /></p>
> Epic: #33
> Blocks #34 #35 #36 #37 #38 #39 #42 #43 #44 #45 #46 #47 #48
In order to start the [Go project rewrite][gh-33] from scratch, the current repository structure and files should be reset to a clean state, removing all references to the previous implementations, documentation and project structure/layout.
<p align="center"><img src="https://user-images.githubusercontent.com/7836623/59963372-8cfe8e00-94f2-11e9-85e2-b140a889d26e.png" width="12%" /></p>
Starting from a „fresh“ state allows building the project up with the correct structure and design pattern, whereas leftovers from the previous repository data would result in mixed files and folders.
This ticket **must be resolved before all other tickets** also bound to the epic #33.
[gh-33]: https://github.com/arcticicestudio/snowsaw/issues/33 | True | Initial repository clean up for Go rewrite - <p align="center"><img src="https://user-images.githubusercontent.com/7836623/59963238-8838da80-94f0-11e9-8857-4322932bdb9c.png" width="20%" /></p>
> Epic: #33
> Blocks #34 #35 #36 #37 #38 #39 #42 #43 #44 #45 #46 #47 #48
In order to start the [Go project rewrite][gh-33] from scratch, the current repository structure and files should be reset to a clean state, removing all references to the previous implementations, documentation and project structure/layout.
<p align="center"><img src="https://user-images.githubusercontent.com/7836623/59963372-8cfe8e00-94f2-11e9-85e2-b140a889d26e.png" width="12%" /></p>
Starting from a „fresh“ state allows building the project up with the correct structure and design pattern, whereas leftovers from the previous repository data would result in mixed files and folders.
This ticket **must be resolved before all other tickets** also bound to the epic #33.
[gh-33]: https://github.com/arcticicestudio/snowsaw/issues/33 | main | initial repository clean up for go rewrite epic blocks in order to start the from scratch the current repository structure and files should be reset to a clean state to remove all references to the previous implementations documentations and project structure layout starting from a „fresh“ state allows to build the project up with the correct structure and design pattern as if there were leftovers from the previous repository data resulting in mixed files and folders this ticket must be resolved first before all other tickets also bound to the epic | 1 |
3,443 | 13,211,584,749 | IssuesEvent | 2020-08-16 00:18:16 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | pear module: always returns status changed for PECL mongodb | affects_2.8 bot_closed bug collection collection:community.general module needs_collection_redirect needs_maintainer needs_triage packaging support:community | <!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
This simple PECL installation task always returns changed, despite the extension already being present on the system.
Interestingly, this only happens with pecl/mongodb. Other PECL tasks work as expected.
I have this task run on 3 separate machines, and the result is the same.
```yaml
- name: Install PECL mongodb
become: true
pear:
name: pecl/mongodb
state: latest
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`pear` module.
##### ANSIBLE VERSION
```
ansible 2.8.4
config file = <redacted>ansible.cfg
configured module search path = [u'/<redacted>/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Ansible host: ElementaryOS 5.
Targets:
Debian 10 (buster) latest, all updates.
```
PEAR Version: 1.10.6
PHP Version: 7.3.4-2
Zend Engine Version: 3.3.4
Running on: Linux e31.c3.g1.frontal.ie1.he.infra.testing 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5+deb10u2 (2019-08-08) x86_64
```
##### STEPS TO REPRODUCE
```yaml
- name: Install PECL mongodb
become: true
pear:
name: pecl/mongodb
state: latest
```
##### EXPECTED RESULTS
Status `ok` once the extension is installed.
##### ACTUAL RESULTS
Status `changed` even when the extension is installed.
```
TASK [fpm : Install PECL mongodb] ************************
changed: [e31]
changed: [e32]
changed: [e33]
```
| True | pear module: always returns status changed for PECL mongodb -
##### SUMMARY
This simple PECL installation task always returns changed, despite the extension already being present on the system.
Interestingly, this only happens with pecl/mongodb. Other PECL tasks work as expected.
I have this task run on 3 separate machines, and the result is the same.
```yaml
- name: Install PECL mongodb
become: true
pear:
name: pecl/mongodb
state: latest
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`pear` module.
##### ANSIBLE VERSION
```
ansible 2.8.4
config file = <redacted>ansible.cfg
configured module search path = [u'/<redacted>/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Ansible host: ElementaryOS 5.
Targets:
Debian 10 (buster) latest, all updates.
```
PEAR Version: 1.10.6
PHP Version: 7.3.4-2
Zend Engine Version: 3.3.4
Running on: Linux e31.c3.g1.frontal.ie1.he.infra.testing 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5+deb10u2 (2019-08-08) x86_64
```
##### STEPS TO REPRODUCE
```yaml
- name: Install PECL mongodb
become: true
pear:
name: pecl/mongodb
state: latest
```
##### EXPECTED RESULTS
Status `ok` once the extension is installed.
##### ACTUAL RESULTS
Status `changed` even when the extension is installed.
```
TASK [fpm : Install PECL mongodb] ************************
changed: [e31]
changed: [e32]
changed: [e33]
```
| main | pear module always returns status changed for pecl mongodb summary this simple pecl installation task always returns changed despite the extension already being present on the system interestingly this only happens with pecl mongodb other pecl tasks work as expected i have this task run on separate machines and the result is the same yaml name install pecl mongodb become true pear name pecl mongodb state latest issue type bug report component name pear module ansible version ansible config file ansible cfg configured module search path ansible python module location usr lib dist packages ansible executable location usr bin ansible python version default nov configuration n a os environment ansible host elementaryos targets debian buster latest all updates pear version php version zend engine version running on linux frontal he infra testing smp debian steps to reproduce yaml name install pecl mongodb become true pear name pecl mongodb state latest expected results status ok once the extension is installed actual results status changed even with the extension is installed paste below task changed changed changed | 1 |
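One failure mode consistent with the symptoms above is an idempotence check that looks up the installed package using the requested name verbatim while the installed-package table is keyed on the bare name. The standalone Python sketch below is a hypothetical illustration only, not the actual Ansible `pear` module source; the table layout and function names are assumptions chosen to show why a channel-qualified name such as `pecl/mongodb` could make every run report `changed`:

```python
# Hypothetical illustration (NOT the real Ansible `pear` module source):
# a package-manager module reports "changed" on every run when its
# lookup uses the requested name verbatim but the installed table is
# keyed on the bare package name.

# Simulated `pecl list` result on the target host: name -> version.
installed = {"mongodb": "1.5.3"}

def query_version(requested_name):
    """Return the installed version for the name exactly as requested."""
    return installed.get(requested_name)

def needs_install(requested_name):
    """True when the module believes the package is missing."""
    return query_version(requested_name) is None

# A channel-qualified request misses the table, so the module reinstalls
# on every run and the task result is "changed".
assert needs_install("pecl/mongodb") is True

# Stripping the channel prefix restores an idempotent check.
bare_name = "pecl/mongodb".split("/")[-1]
assert needs_install(bare_name) is False
```

Under this (assumed) explanation, normalizing the requested name before the installed-state lookup would make the task report `ok` on repeated runs, which matches the expected results in the report.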
14,237 | 8,948,424,666 | IssuesEvent | 2019-01-25 02:19:38 | bisq-network/bisq | https://api.github.com/repos/bisq-network/bisq | closed | Make behaviour on address click configurable in settings | in:gui re:usability was:dropped | Clicking the address should open a btc wallet, but unfortunately that is not very well supported, so it is hard to control which wallet it opens. Better to add an option in the setting to define if it should open a wallet or should just copy the address.
<!---
@huboard:{"order":319.9951171875,"milestone_order":314,"custom_state":""}
-->
| True | Make behaviour on address click configurable in settings - Clicking the address should open a btc wallet, but unfortunately that is not very well supported, so it is hard to control which wallet it opens. Better to add an option in the setting to define if it should open a wallet or should just copy the address.
<!---
@huboard:{"order":319.9951171875,"milestone_order":314,"custom_state":""}
-->
| non_main | make behaviour on address click configurable in settings clicking the address should open a btc wallet but unfortunately that is not very well supported so it is hard to control which wallet it opens better to add an option in the setting to define if it should open a wallet or should just copy the address huboard order milestone order custom state | 0
1,768 | 6,575,035,720 | IssuesEvent | 2017-09-11 14:50:39 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Docs on template module jinja2 vars `template_path` vs `template_fullpath` are confusing | affects_2.2 docs_report waiting_on_maintainer | ##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
Template module
##### ANSIBLE VERSION
Ansible 2.2
##### SUMMARY
The explanation of `template_path` vs `template_fullpath` are confusing (https://github.com/ansible/ansible-modules-core/blob/devel/files/template.py#L32-L34):
> ...`template_path` the absolute path of the template, `template_fullpath` is the absolute path of the template...
It's not clear if both variables are the same, or if one is a relative path and one a full path. Plus the wording reads awkwardly.
| True | Docs on template module jinja2 vars `template_path` vs `template_fullpath` are confusing - ##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
Template module
##### ANSIBLE VERSION
Ansible 2.2
##### SUMMARY
The explanation of `template_path` vs `template_fullpath` are confusing (https://github.com/ansible/ansible-modules-core/blob/devel/files/template.py#L32-L34):
> ...`template_path` the absolute path of the template, `template_fullpath` is the absolute path of the template...
It's not clear if both variables are the same, or if one is a relative path and one a full path. Plus the wording reads awkwardly.
| main | docs on template module vars template path vs template fullpath are confusing issue type documentation report component name template module ansible version ansible summary the explanation of template path vs template fullpath are confusing template path the absolute path of the template template fullpath is the absolute path of the template it s not clear if both variables are the same or if one is a relative path and one a full path plus the wording reads awkwardly | 1 |
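For reference, the distinction the two variables presumably intend is "path as referenced" versus "resolved absolute path". The standalone Python sketch below mirrors the documented variable names purely to illustrate that difference; whether Ansible's `template_path` is in fact the relative path is exactly the ambiguity this report asks the documentation to resolve, so treat the semantics here as an assumption:

```python
import os.path

# Hypothetical reconstruction of the two documented Jinja2 variables.
# Assumption: `template_path` is the path as referenced by the play,
# while `template_fullpath` is the resolved absolute location on the
# controller -- the very distinction the docs leave unclear.
referenced = "templates/nginx.conf.j2"

template_path = referenced                       # path as given in the play
template_fullpath = os.path.abspath(referenced)  # resolved absolute path

assert not os.path.isabs(template_path)
assert os.path.isabs(template_fullpath)
assert template_fullpath.endswith("nginx.conf.j2")
```

If both variables really do hold the same absolute path, the docs should say so explicitly; if they differ as sketched above, one sentence per variable would remove the ambiguity.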
407,920 | 11,939,302,607 | IssuesEvent | 2020-04-02 15:01:12 | UniversityOfHelsinkiCS/mobvita | https://api.github.com/repos/UniversityOfHelsinkiCS/mobvita | closed | on-screen RU keyboard | Desktop Only feature priority1 | available through sidebar (?)
There is some weirdly working code in the branch virtual-keyboard | 1.0 | on-screen RU keyboard - available through sidebar (?)
There is some weirdly working code in the branch virtual-keyboard | non_main | on screen ru keyboard available through sidebar there is some weirdly working code in the branch virtual keyboard | 0 |
91,633 | 10,724,532,694 | IssuesEvent | 2019-10-28 02:08:56 | cefsharp/CefSharp | https://api.github.com/repos/cefsharp/CefSharp | closed | IFrame.LoadRequest - Add xml doc warning | documentation feature-request | `CEF` has added a warning to the code doc in https://bitbucket.org/chromiumembedded/cef/commits/7d243e15c95d7fdb951024340c841e9722e9baa8
We should do the same. | 1.0 | IFrame.LoadRequest - Add xml doc warning - `CEF` has added a warning to the code doc in https://bitbucket.org/chromiumembedded/cef/commits/7d243e15c95d7fdb951024340c841e9722e9baa8
We should do the same. | non_main | iframe loadrequest add xml doc warning cef has added a warning to the code doc in we should do the same | 0 |