| Unnamed: 0 (int64, 0-832k) | id (float64, 2.49B-32.1B) | type (stringclasses, 1 value) | created_at (stringlengths, 19-19) | repo (stringlengths, 4-112) | repo_url (stringlengths, 33-141) | action (stringclasses, 3 values) | title (stringlengths, 1-1.02k) | labels (stringlengths, 4-1.54k) | body (stringlengths, 1-262k) | index (stringclasses, 17 values) | text_combine (stringlengths, 95-262k) | label (stringclasses, 2 values) | text (stringlengths, 96-252k) | binary_label (int64, 0-1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
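The `label` and `binary_label` columns encode the same target: every `test` row below carries `binary_label` 1 and every `non_test` row carries 0. A minimal loading sketch under that assumption (the CSV filename is hypothetical):

```python
import pandas as pd

# Hypothetical filename; the schema above is what pandas should infer.
df = pd.read_csv("issue_events.csv")

# `label` has 2 classes; in the sample rows they map onto
# `binary_label` as test -> 1 and non_test -> 0.
assert (df["binary_label"] == (df["label"] == "test").astype("int64")).all()

# `created_at` is a fixed 19-character timestamp string.
df["created_at"] = pd.to_datetime(df["created_at"], format="%Y-%m-%d %H:%M:%S")
```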
3,706
| 2,685,500,691
|
IssuesEvent
|
2015-03-30 01:48:11
|
danielpclark/PolyBelongsTo
|
https://api.github.com/repos/danielpclark/PolyBelongsTo
|
closed
|
Optimize deep duplication method (unaffecting code?).
|
provide additional tests refactor
|
This doesn't affect the current test during the circular record test:
```ruby
unless singleton_record.include?(item_to_build_on)
```
https://github.com/danielpclark/PolyBelongsTo/blob/master/lib/poly_belongs_to/dup.rb#L25
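The `include?` check above is a visited-set guard: without it, duplicating a record whose associations eventually point back to itself would recurse forever. A minimal sketch of the same idea in Python (the `Record` class and its fields are illustrative, not PolyBelongsTo's API):

```python
class Record:
    def __init__(self, name):
        self.name = name
        self.associations = []

def deep_dup(record, visited=None):
    # Skip records we have already copied -- the same role as
    # `unless singleton_record.include?(item_to_build_on)` above.
    if visited is None:
        visited = set()
    if id(record) in visited:
        return None
    visited.add(id(record))
    copy = Record(record.name)
    for child in record.associations:
        dup = deep_dup(child, visited)
        if dup is not None:
            copy.associations.append(dup)
    return copy

# Two records that reference each other still duplicate without looping.
a, b = Record("a"), Record("b")
a.associations.append(b)
b.associations.append(a)
assert deep_dup(a).associations[0].name == "b"
```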
|
1.0
|
Optimize deep duplication method (unaffecting code?). - This doesn't affect the current test during the circular record test:
```ruby
unless singleton_record.include?(item_to_build_on)
```
https://github.com/danielpclark/PolyBelongsTo/blob/master/lib/poly_belongs_to/dup.rb#L25
|
test
|
optimize deep duplication method unaffecting code this doesn t affect the current test during the circular record test ruby unless singleton record include item to build on
| 1
|
83,302
| 7,868,228,729
|
IssuesEvent
|
2018-06-23 18:49:11
|
brave/browser-laptop
|
https://api.github.com/repos/brave/browser-laptop
|
closed
|
URLbar paste-and-search should be disabled till Tor connection is successfully created
|
OS/Windows QA/test-plan-specified bug feature/tor priority/P3 release-notes/exclude
|
<!--
Have you searched for similar issues? We have received a lot of feedback and bug reports that we have closed as duplicates. Before submitting this issue, please visit our community site for common ones: https://community.brave.com/c/common-issues
-->
### Description
URLbar should be disabled till Tor connection is successfully created
### Test plan / Steps to Reproduce
<!--
Please add a series of steps to reproduce the problem. See https://stackoverflow.com/help/mcve for in depth information on how to create a minimal, complete, and verifiable example.
-->
1. Clean install 0.23.14
2. Launch browser and open a new tor private tab
3. While connection is being established, right click on URL bar and paste and search
4. Tries to load search while TOR connection is being created
5. Connection fails, doesn't load search result and throws about:error page
6. Opening a new tor tab doesn't connect, but the log says `Bootstrap 100% Done`
**Actual result:**
Paste and search from the URL bar context menu while the Tor connection is being established breaks the flow
https://youtu.be/RugK4-4MFVs
Tor Log shows
```
Jun 22 16:36:49.000 [notice] Have tried resolving or connecting to address '[scrubbed]' at 3 different places. Giving up.
```
**Expected result:**
URL bar should just show connection info and be in read only state while Tor connection is being established. If unsuccessful, clicking on `Disable Tor` button should activate URL bar
**Reproduces how often:**
100%
### Brave Version
**about:brave info:**
Brave | 0.23.14
-- | --
V8 | 6.7.288.46
rev | f4da855
Muon | 7.1.1
OS Release | 10.0.17134
Update Channel | Beta
OS Architecture | x64
OS Platform | Microsoft Windows
Node.js | 7.9.0
Brave Sync | v1.4.2
libchromiumcontent | 67.0.3396.87
**Reproducible on current live release:**
N/A
### Additional Information
cc: @kjozwiak @LaurenWags @btlechowski @GeetaSarvadnya
|
1.0
|
URLbar paste-and-search should be disabled till Tor connection is successfully created - <!--
Have you searched for similar issues? We have received a lot of feedback and bug reports that we have closed as duplicates. Before submitting this issue, please visit our community site for common ones: https://community.brave.com/c/common-issues
-->
### Description
URLbar should be disabled till Tor connection is successfully created
### Test plan / Steps to Reproduce
<!--
Please add a series of steps to reproduce the problem. See https://stackoverflow.com/help/mcve for in depth information on how to create a minimal, complete, and verifiable example.
-->
1. Clean install 0.23.14
2. Launch browser and open a new tor private tab
3. While connection is being established, right click on URL bar and paste and search
4. Tries to load search while TOR connection is being created
5. Connection fails, doesn't load search result and throws about:error page
6. Opening a new tor tab doesn't connect, but the log says `Bootstrap 100% Done`
**Actual result:**
Paste and search from the URL bar context menu while the Tor connection is being established breaks the flow
https://youtu.be/RugK4-4MFVs
Tor Log shows
```
Jun 22 16:36:49.000 [notice] Have tried resolving or connecting to address '[scrubbed]' at 3 different places. Giving up.
```
**Expected result:**
URL bar should just show connection info and be in read only state while Tor connection is being established. If unsuccessful, clicking on `Disable Tor` button should activate URL bar
**Reproduces how often:**
100%
### Brave Version
**about:brave info:**
Brave | 0.23.14
-- | --
V8 | 6.7.288.46
rev | f4da855
Muon | 7.1.1
OS Release | 10.0.17134
Update Channel | Beta
OS Architecture | x64
OS Platform | Microsoft Windows
Node.js | 7.9.0
Brave Sync | v1.4.2
libchromiumcontent | 67.0.3396.87
**Reproducible on current live release:**
N/A
### Additional Information
cc: @kjozwiak @LaurenWags @btlechowski @GeetaSarvadnya
|
test
|
urlbar paste and search should be disabled till tor connection is successfully created have you searched for similar issues we have received a lot of feedback and bug reports that we have closed as duplicates before submitting this issue please visit our community site for common ones description urlbar should be disabled till tor connection is successfully created test plan steps to reproduce please add a series of steps to reproduce the problem see for in depth information on how to create a minimal complete and verifiable example clean install launch browser and open a new tor private tab while connection is being established right click on url bar and paste and search tries to load search while tor connection is being created connection fails doesn t load search result and throws about error page open a new tor tab doesn t connect but log says bootstrap done actual result paste and search from context menu in url while establishing tor connection breaks flow tor log shows jun have tried resolving or connecting to address at different places giving up expected result url bar should just show connection info and be in read only state while tor connection is being established if unsuccessful clicking on disable tor button should activate url bar reproduces how often brave version about brave info brave rev muon os release update channel beta os architecture os platform microsoft windows node js brave sync libchromiumcontent reproducible on current live release n a additional information cc kjozwiak laurenwags btlechowski geetasarvadnya
| 1
|
57,163
| 6,539,617,661
|
IssuesEvent
|
2017-09-01 12:11:05
|
opensistemas-hub/osbrain
|
https://api.github.com/repos/opensistemas-hub/osbrain
|
opened
|
Be a bit more lenient in test_timer.py
|
test
|
https://travis-ci.org/opensistemas-hub/osbrain/jobs/270799690
Instead of sleeping for 0.9, sleep for 0.5?
See if there are other timer tests that could have the same problem.
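The flakiness pattern here is generic: a timer fires every period, the test sleeps and then asserts how many firings happened, and sleeping 0.9 of the way to the next tick leaves almost no margin for scheduler jitter, while sleeping to mid-tick does. A rough illustration (hypothetical names, not osbrain's actual test code):

```python
import threading
import time

fires = []
stop = threading.Event()

def tick(period):
    # Fire roughly every `period` seconds until asked to stop.
    while not stop.wait(period):
        fires.append(time.monotonic())

threading.Thread(target=tick, args=(0.1,), daemon=True).start()

# Sleep to mid-tick (5.5 periods) rather than just before a tick
# boundary (e.g. 5.9 periods), and accept a small range instead of
# an exact count, so scheduler jitter cannot flip the result.
time.sleep(0.55)
stop.set()
assert 4 <= len(fires) <= 6
```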
|
1.0
|
Be a bit more lenient in test_timer.py - https://travis-ci.org/opensistemas-hub/osbrain/jobs/270799690
Instead of sleeping for 0.9, sleep for 0.5?
See if there are other timer tests that could have the same problem.
|
test
|
be a bit more lenient in test timer py instead of sleeping for sleep for see if there are other timer tests that could have the same problem
| 1
|
319,466
| 27,374,728,196
|
IssuesEvent
|
2023-02-28 04:22:05
|
prgrms-web-devcourse/Team-JJINSA-HyperLink-BE
|
https://api.github.com/repos/prgrms-web-devcourse/Team-JJINSA-HyperLink-BE
|
closed
|
Code fixes following Content, Creator table changes
|
Test Fix
|
- content, creator table: code fixes for the added category FK column (test code, entity constructors)
|
1.0
|
Code fixes following Content, Creator table changes - - content, creator table: code fixes for the added category FK column (test code, entity constructors)
|
test
|
code fixes following content creator table changes content creator table code fixes for the added category fk column test code entity constructors
| 1
|
239,142
| 19,823,492,047
|
IssuesEvent
|
2022-01-20 01:57:57
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
opened
|
[CI] Security Index upgrade failure during FullClusterRestart
|
:Core/Infra/Core >test-failure
|
The failing test is FullClusterRestartIT testApiKeySuperuser
This happens on my PR CI and is related to my change.
But the underlying reason seems to be an upgrade failure of the security system index. The cluster log shows an NPE https://gradle-enterprise.elastic.co/s/wgya6tyniu7y2/console-log#L2024 for the `AliasMetadata#isHidden` invocation and comparison.
I suspect it has something to do with #79512 and the recent enforcement of 7.last for 8.x upgrades.
**Build scan:**
https://gradle-enterprise.elastic.co/s/wgya6tyniu7y2/tests/:x-pack:qa:full-cluster-restart:v7.8.1%23upgradedClusterTest/org.elasticsearch.xpack.restart.FullClusterRestartIT/testApiKeySuperuser
**Reproduction line:**
`./gradlew ':x-pack:qa:full-cluster-restart:v7.8.1#upgradedClusterTest' -Dtests.class="org.elasticsearch.xpack.restart.FullClusterRestartIT" -Dtests.method="testApiKeySuperuser" -Dtests.seed=704325F309264EE2 -Dtests.bwc=true -Dtests.locale=fi -Dtests.timezone=America/Boise -Druntime.java=17`
**Applicable branches:**
master
**Reproduces locally?:**
Yes
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.xpack.restart.FullClusterRestartIT&tests.test=testApiKeySuperuser
**Failure excerpt:**
```
org.elasticsearch.client.WarningFailureException: method [GET], host [http://127.0.0.1:42205], URI [.security/_search], status line [HTTP/1.1 200 OK]
{"took":2,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":4,"relation":"eq"},"max_score":1.0,"hits":[{"_index":".security-7","_id":"user-api_key_super_creator","_score":1.0,"_source":{"username":"api_key_super_creator","password":"$2a$10$OsBKVjcqgvziPdsewcOcmufqeJNjplujZ8NYFW6sjL2VXdVfTgpMe","roles":["superuser","monitoring_user"],"full_name":null,"email":null,"metadata":null,"enabled":true,"type":"user"}},{"_index":".security-7","_id":"_mHGdH4B2CIXTYmcz0_9","_score":1.0,"_source":{"doc_type":"api_key","creation_time":1642636693479,"expiration_time":null,"api_key_invalidated":false,"api_key_hash":"{PBKDF2}10000$v4R2re60QvaorlE+Cwl0MRq2uIE0rABYIrmBSMAxs4U=$yDhHqMHGzXtVgb4CPnGAdtKRMxP+nzpP0L4zdT45pPw=","role_descriptors":{},"limited_by_role_descriptors":{"superuser":{"cluster":["all"],"indices":[{"names":["*"],"privileges":["all"],"allow_restricted_indices":true}],"applications":[{"application":"*","privileges":["*"],"resources":["*"]}],"run_as":["*"],"metadata":{"_reserved":true},"type":"role"},"monitoring_user":{"cluster":["cluster:monitor/main","cluster:monitor/xpack/info","cluster:monitor/remote/info"],"indices":[{"names":[".monitoring-*"],"privileges":["read","read_cross_cluster"],"allow_restricted_indices":false}],"applications":[{"application":"kibana-*","privileges":["reserved_monitoring"],"resources":["*"]}],"run_as":[],"metadata":{"_reserved":true},"type":"role"}},"name":"super_legacy_key","version":7080199,"creator":{"principal":"api_key_super_creator","metadata":{},"realm":"default_native","realm_type":"native"}}},{"_index":".security-7","_id":"9aTGdH4B8Erq23pH0QLn","_score":1.0,"_source":{
"doc_type": "foo"
}},{"_index":".security-7","_id":"AGHGdH4B2CIXTYmc1FBS","_score":1.0,"_source":{"doc_type":"api_key","creation_time":1642636694588,"expiration_time":null,"api_key_invalidated":false,"api_key_hash":"{PBKDF2}10000$Uj+vKfhGVDJ+0dM+YBDyJIsC2hSFGXCqojoMQ6Ens9E=$/8X0Dup+8cwgm1L5d5zNHtUPoDG1fV1WCPhcPu1p4yg=","role_descriptors":{"r":{"cluster":["all"],"indices":[{"names":["*"],"privileges":["all"],"allow_restricted_indices":false}],"applications":[],"run_as":[],"metadata":{},"type":"role"}},"limited_by_role_descriptors":{"_es_test_root":{"cluster":["ALL"],"indices":[{"names":["*"],"privileges":["ALL"],"allow_restricted_indices":true}],"applications":[{"application":"*","privileges":["*"],"resources":["*"]}],"run_as":["*"],"metadata":{},"type":"role"}},"name":"key-1","version":7080199,"creator":{"principal":"test_user","metadata":{},"realm":"default_file","realm_type":"file"}}}]}}
at __randomizedtesting.SeedInfo.seed([704325F309264EE2:478E48673AE6CF31]:0)
at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:342)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:312)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:287)
at org.elasticsearch.xpack.restart.FullClusterRestartIT.testApiKeySuperuser(FullClusterRestartIT.java:421)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831)
at java.lang.Thread.run(Thread.java:833)
```
|
1.0
|
[CI] Security Index upgrade failure during FullClusterRestart - The failing test is FullClusterRestartIT testApiKeySuperuser
This happens on my PR CI and is related to my change.
But the underlying reason seems to be an upgrade failure of the security system index. The cluster log shows an NPE https://gradle-enterprise.elastic.co/s/wgya6tyniu7y2/console-log#L2024 for the `AliasMetadata#isHidden` invocation and comparison.
I suspect it has something to do with #79512 and the recent enforcement of 7.last for 8.x upgrades.
**Build scan:**
https://gradle-enterprise.elastic.co/s/wgya6tyniu7y2/tests/:x-pack:qa:full-cluster-restart:v7.8.1%23upgradedClusterTest/org.elasticsearch.xpack.restart.FullClusterRestartIT/testApiKeySuperuser
**Reproduction line:**
`./gradlew ':x-pack:qa:full-cluster-restart:v7.8.1#upgradedClusterTest' -Dtests.class="org.elasticsearch.xpack.restart.FullClusterRestartIT" -Dtests.method="testApiKeySuperuser" -Dtests.seed=704325F309264EE2 -Dtests.bwc=true -Dtests.locale=fi -Dtests.timezone=America/Boise -Druntime.java=17`
**Applicable branches:**
master
**Reproduces locally?:**
Yes
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.xpack.restart.FullClusterRestartIT&tests.test=testApiKeySuperuser
**Failure excerpt:**
```
org.elasticsearch.client.WarningFailureException: method [GET], host [http://127.0.0.1:42205], URI [.security/_search], status line [HTTP/1.1 200 OK]
{"took":2,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":4,"relation":"eq"},"max_score":1.0,"hits":[{"_index":".security-7","_id":"user-api_key_super_creator","_score":1.0,"_source":{"username":"api_key_super_creator","password":"$2a$10$OsBKVjcqgvziPdsewcOcmufqeJNjplujZ8NYFW6sjL2VXdVfTgpMe","roles":["superuser","monitoring_user"],"full_name":null,"email":null,"metadata":null,"enabled":true,"type":"user"}},{"_index":".security-7","_id":"_mHGdH4B2CIXTYmcz0_9","_score":1.0,"_source":{"doc_type":"api_key","creation_time":1642636693479,"expiration_time":null,"api_key_invalidated":false,"api_key_hash":"{PBKDF2}10000$v4R2re60QvaorlE+Cwl0MRq2uIE0rABYIrmBSMAxs4U=$yDhHqMHGzXtVgb4CPnGAdtKRMxP+nzpP0L4zdT45pPw=","role_descriptors":{},"limited_by_role_descriptors":{"superuser":{"cluster":["all"],"indices":[{"names":["*"],"privileges":["all"],"allow_restricted_indices":true}],"applications":[{"application":"*","privileges":["*"],"resources":["*"]}],"run_as":["*"],"metadata":{"_reserved":true},"type":"role"},"monitoring_user":{"cluster":["cluster:monitor/main","cluster:monitor/xpack/info","cluster:monitor/remote/info"],"indices":[{"names":[".monitoring-*"],"privileges":["read","read_cross_cluster"],"allow_restricted_indices":false}],"applications":[{"application":"kibana-*","privileges":["reserved_monitoring"],"resources":["*"]}],"run_as":[],"metadata":{"_reserved":true},"type":"role"}},"name":"super_legacy_key","version":7080199,"creator":{"principal":"api_key_super_creator","metadata":{},"realm":"default_native","realm_type":"native"}}},{"_index":".security-7","_id":"9aTGdH4B8Erq23pH0QLn","_score":1.0,"_source":{
"doc_type": "foo"
}},{"_index":".security-7","_id":"AGHGdH4B2CIXTYmc1FBS","_score":1.0,"_source":{"doc_type":"api_key","creation_time":1642636694588,"expiration_time":null,"api_key_invalidated":false,"api_key_hash":"{PBKDF2}10000$Uj+vKfhGVDJ+0dM+YBDyJIsC2hSFGXCqojoMQ6Ens9E=$/8X0Dup+8cwgm1L5d5zNHtUPoDG1fV1WCPhcPu1p4yg=","role_descriptors":{"r":{"cluster":["all"],"indices":[{"names":["*"],"privileges":["all"],"allow_restricted_indices":false}],"applications":[],"run_as":[],"metadata":{},"type":"role"}},"limited_by_role_descriptors":{"_es_test_root":{"cluster":["ALL"],"indices":[{"names":["*"],"privileges":["ALL"],"allow_restricted_indices":true}],"applications":[{"application":"*","privileges":["*"],"resources":["*"]}],"run_as":["*"],"metadata":{},"type":"role"}},"name":"key-1","version":7080199,"creator":{"principal":"test_user","metadata":{},"realm":"default_file","realm_type":"file"}}}]}}
at __randomizedtesting.SeedInfo.seed([704325F309264EE2:478E48673AE6CF31]:0)
at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:342)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:312)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:287)
at org.elasticsearch.xpack.restart.FullClusterRestartIT.testApiKeySuperuser(FullClusterRestartIT.java:421)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831)
at java.lang.Thread.run(Thread.java:833)
```
|
test
|
security index upgrade failure during fullclusterrestart the failing test is fullclusterrestartit testapikeysuperuser this happens on my pr ci and related to my change but the underlying reason seems to be upgrade failure of security system index cluster log shows a npe for aliasmetadata ishidden invocation and comparison i suspect it has something to do with and recent enforce of last for x upgrade build scan reproduction line gradlew x pack qa full cluster restart upgradedclustertest dtests class org elasticsearch xpack restart fullclusterrestartit dtests method testapikeysuperuser dtests seed dtests bwc true dtests locale fi dtests timezone america boise druntime java applicable branches master reproduces locally yes failure history failure excerpt org elasticsearch client warningfailureexception method host uri status line took timed out false shards total successful skipped failed hits total value relation eq max score hits full name null email null metadata null enabled true type user index security id score source doc type api key creation time expiration time null api key invalidated false api key hash role descriptors limited by role descriptors superuser cluster indices privileges allow restricted indices true applications resources run as metadata reserved true type role monitoring user cluster indices privileges allow restricted indices false applications resources run as metadata reserved true type role name super legacy key version creator principal api key super creator metadata realm default native realm type native index security id score source doc type foo index security id score source doc type api key creation time expiration time null api key invalidated false api key hash uj vkfhgvdj role descriptors r cluster indices privileges allow restricted indices false applications run as metadata type role limited by role descriptors es test root cluster indices privileges allow restricted indices true applications resources run as metadata type role name key version creator principal test user metadata realm default file realm type file at randomizedtesting seedinfo seed at org elasticsearch client restclient convertresponse restclient java at org elasticsearch client restclient performrequest restclient java at org elasticsearch client restclient performrequest restclient java at org elasticsearch xpack restart fullclusterrestartit testapikeysuperuser fullclusterrestartit java at jdk internal reflect nativemethodaccessorimpl nativemethodaccessorimpl java at jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures 
java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java
| 1
|
97,904
| 8,673,146,917
|
IssuesEvent
|
2018-11-30 00:59:22
|
rancher/rke
|
https://api.github.com/repos/rancher/rke
|
closed
|
Potential performance bottleneck when syncing node labels and taints on large environments
|
kind/bug status/resolved status/to-test
|
**RKE version:**
0.1.10
**Docker version: (`docker version`,`docker info` preferred)**
```
Client:
Version: 17.09.0-ce
API version: 1.32
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:42:18 2017
OS/Arch: linux/amd64
Server:
Version: 17.09.0-ce
API version: 1.32 (minimum version 1.12)
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:40:56 2017
OS/Arch: linux/amd64
Experimental: false
```
**Operating system and kernel: (`cat /etc/os-release`, `uname -r` preferred)**
Ubuntu 16.04.1
```
$ uname -r
4.13.0-21-generic
```
**Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)**
Bare-metal
**cluster.yml file:**
Available upon request
**Steps to Reproduce:**
- run rke up on a large environment (700+ nodes)
**Results:**
The step `Syncing nodes Labels and Taints` takes over 15 minutes and appears to be applying labels serially (one node at a time). Ran the command `kubectl get nodes | grep worker | wc -l` several times and saw the number of `worker` labels being applied was increasing gradually.
From the logs:
```
time="2018-10-15T02:52:47Z" level=info msg="[sync] Syncing nodes Labels and Taints"
time="2018-10-15T03:11:37Z" level=info msg="[sync] Successfully synced nodes Labels and Taints"
```
As a performance improvement, consider labeling nodes in parallel.
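For scale: the log above shows the serial sync took about 19 minutes for 700+ nodes, i.e. roughly 1.6 seconds of API round-trip per node. A hedged sketch of the parallel alternative in Python (`label_node` is a stand-in for the per-node Kubernetes API call; RKE itself is written in Go):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def label_node(node):
    ...  # stand-in for one labels/taints PATCH against the API server

def sync_labels_and_taints(nodes, workers=20):
    # Run per-node updates concurrently; wall-clock time drops from
    # len(nodes) round-trips to roughly len(nodes) / workers.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(label_node, node): node for node in nodes}
        for future in as_completed(futures):
            future.result()  # re-raise any per-node failure
```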
|
1.0
|
Potential performance bottleneck when syncing node labels and taints on large environments - **RKE version:**
0.1.10
**Docker version: (`docker version`,`docker info` preferred)**
```
Client:
Version: 17.09.0-ce
API version: 1.32
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:42:18 2017
OS/Arch: linux/amd64
Server:
Version: 17.09.0-ce
API version: 1.32 (minimum version 1.12)
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:40:56 2017
OS/Arch: linux/amd64
Experimental: false
```
**Operating system and kernel: (`cat /etc/os-release`, `uname -r` preferred)**
Ubuntu 16.04.1
```
$ uname -r
4.13.0-21-generic
```
**Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)**
Bare-metal
**cluster.yml file:**
Available upon request
**Steps to Reproduce:**
- run rke up on a large environment (700+ nodes)
**Results:**
The step `Syncing nodes Labels and Taints` takes over 15 minutes and appears to be applying labels serially (one node at a time). Ran the command `kubectl get nodes | grep worker | wc -l` several times and saw the number of `worker` labels being applied was increasing gradually.
From the logs:
```
time="2018-10-15T02:52:47Z" level=info msg="[sync] Syncing nodes Labels and Taints"
time="2018-10-15T03:11:37Z" level=info msg="[sync] Successfully synced nodes Labels and Taints"
```
As a performance improvement, consider labeling nodes in parallel.
|
test
|
potential performance bottleneck when syncing node labels and taints on large environments rke version docker version docker version docker info preferred client version ce api version go version git commit built tue sep os arch linux server version ce api version minimum version go version git commit built tue sep os arch linux experimental false operating system and kernel cat etc os release uname r preferred ubuntu uname r generic type provider of hosts virtualbox bare metal aws gce do bare metal cluster yml file available upon request steps to reproduce run rke up on a large environment nodes results the step syncing nodes labels and taints takes over minutes and appears to be applying labels serially one node at a time ran the command kubectl get nodes grep worker wc l several times and saw the number of worker labels being applied was increasing gradually from the logs time level info msg syncing nodes labels and taints time level info msg successfully synced nodes labels and taints as a performance improvement consider labeling nodes in parallel
| 1
|
43,634
| 13,026,129,986
|
IssuesEvent
|
2020-07-27 14:33:12
|
NixOS/nixpkgs
|
https://api.github.com/repos/NixOS/nixpkgs
|
opened
|
Vulnerability roundup 90: monero-0.16.0.1: 1 advisory [5.5]
|
1.severity: security
|
[search](https://search.nix.gsc.io/?q=monero&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=monero+in%3Apath&type=Code)
* [ ] [CVE-2020-6861](https://nvd.nist.gov/vuln/detail/CVE-2020-6861) CVSSv3=5.5 (nixos-unstable)
Scanned versions: nixos-unstable: 28fce082c8c. May contain false positives.
Cc @ehmry
Cc @rnhmjoj
|
True
|
Vulnerability roundup 90: monero-0.16.0.1: 1 advisory [5.5] - [search](https://search.nix.gsc.io/?q=monero&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=monero+in%3Apath&type=Code)
* [ ] [CVE-2020-6861](https://nvd.nist.gov/vuln/detail/CVE-2020-6861) CVSSv3=5.5 (nixos-unstable)
Scanned versions: nixos-unstable: 28fce082c8c. May contain false positives.
Cc @ehmry
Cc @rnhmjoj
|
non_test
|
vulnerability roundup monero advisory nixos unstable scanned versions nixos unstable may contain false positives cc ehmry cc rnhmjoj
| 0
|
52,154
| 3,021,925,068
|
IssuesEvent
|
2015-07-31 17:22:59
|
creatorsschool/Encoden
|
https://api.github.com/repos/creatorsschool/Encoden
|
closed
|
create Home/Dashboard link for user when logged in
|
Priority 3
|
Create a link for the home or dashboard for the user to be able to return to their homepage when they're logged in.
To be implemented after creating the feature of login validation with Rails and after changing the users table to accommodate the teacher and student users.
|
1.0
|
create Home/Dashboard link for user when logged in - Create a link for the home or dashboard for the user to be able to return to their homepage when they're logged in.
To be implemented after creating the feature of login validation with Rails and after changing the users table to accommodate the teacher and student users.
|
non_test
|
create home dashboard link for user when logged in create a link for the home or dashboard for the user to be able to return to their homepage when they re logged in to be implemented after creating the feature of login validation with rails and after changing the users table to accommodate the teacher and student users
| 0
|
288,256
| 31,861,220,568
|
IssuesEvent
|
2023-09-15 11:03:49
|
nidhi7598/linux-v4.19.72_CVE-2022-3564
|
https://api.github.com/repos/nidhi7598/linux-v4.19.72_CVE-2022-3564
|
opened
|
CVE-2020-10711 (Medium) detected in linuxlinux-4.19.294
|
Mend: dependency security vulnerability
|
## CVE-2020-10711 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-v4.19.72_CVE-2022-3564/commit/9ffee08efa44c7887e2babb8f304df0fa1094efb">9ffee08efa44c7887e2babb8f304df0fa1094efb</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netlabel/netlabel_kapi.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netlabel/netlabel_kapi.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A NULL pointer dereference flaw was found in the Linux kernel's SELinux subsystem in versions before 5.7. This flaw occurs while importing the Commercial IP Security Option (CIPSO) protocol's category bitmap into the SELinux extensible bitmap via the 'ebitmap_netlbl_import' routine. While processing the CIPSO restricted bitmap tag in the 'cipso_v4_parsetag_rbm' routine, it sets the security attribute to indicate that the category bitmap is present, even if it has not been allocated. This issue leads to a NULL pointer dereference issue while importing the same category bitmap into SELinux. This flaw allows a remote network user to crash the system kernel, resulting in a denial of service.
<p>Publish Date: 2020-05-22
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-10711>CVE-2020-10711</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-10711">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-10711</a></p>
<p>Release Date: 2020-05-22</p>
<p>Fix Resolution: v5.7-rc6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-10711 (Medium) detected in linuxlinux-4.19.294 - ## CVE-2020-10711 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-v4.19.72_CVE-2022-3564/commit/9ffee08efa44c7887e2babb8f304df0fa1094efb">9ffee08efa44c7887e2babb8f304df0fa1094efb</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netlabel/netlabel_kapi.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netlabel/netlabel_kapi.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A NULL pointer dereference flaw was found in the Linux kernel's SELinux subsystem in versions before 5.7. This flaw occurs while importing the Commercial IP Security Option (CIPSO) protocol's category bitmap into the SELinux extensible bitmap via the 'ebitmap_netlbl_import' routine. While processing the CIPSO restricted bitmap tag in the 'cipso_v4_parsetag_rbm' routine, it sets the security attribute to indicate that the category bitmap is present, even if it has not been allocated. This issue leads to a NULL pointer dereference issue while importing the same category bitmap into SELinux. This flaw allows a remote network user to crash the system kernel, resulting in a denial of service.
<p>Publish Date: 2020-05-22
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-10711>CVE-2020-10711</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-10711">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-10711</a></p>
<p>Release Date: 2020-05-22</p>
<p>Fix Resolution: v5.7-rc6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files net netlabel netlabel kapi c net netlabel netlabel kapi c vulnerability details a null pointer dereference flaw was found in the linux kernel s selinux subsystem in versions before this flaw occurs while importing the commercial ip security option cipso protocol s category bitmap into the selinux extensible bitmap via the ebitmap netlbl import routine while processing the cipso restricted bitmap tag in the cipso parsetag rbm routine it sets the security attribute to indicate that the category bitmap is present even if it has not been allocated this issue leads to a null pointer dereference issue while importing the same category bitmap into selinux this flaw allows a remote network user to crash the system kernel resulting in a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
179,605
| 13,890,929,988
|
IssuesEvent
|
2020-10-19 09:56:37
|
dailykit/dailyos
|
https://api.github.com/repos/dailykit/dailyos
|
closed
|
Module opening Crashing
|
Highest TestQuality bug
|
#### Precondition
1. CRM app should be allowed to be used by test user.
#### Steps to Reproduce:
| Step | Action | Expected | Status |
| -------- | -------- | -------- | -------- |
| 1| Double Click on CRM app| CRM app opens successfully in active state| Pass |
| 2| Click on Coupons Module| A new tab should open with Coupon Listing</p><br><ol><br><li>Should show Coupon Name</li><br><li>Should show it's reward value</li><br></ol>| Fail |
#### Actual Results:
For this reason, the test failed.
|
1.0
|
Module opening Crashing - #### Precondition
1. CRM app should be allowed to be used by test user.
#### Steps to Reproduce:
| Step | Action | Expected | Status |
| -------- | -------- | -------- | -------- |
| 1| Double Click on CRM app| CRM app opens successfully in active state| Pass |
| 2| Click on Coupons Module| A new tab should open with Coupon Listing</p><br><ol><br><li>Should show Coupon Name</li><br><li>Should show it's reward value</li><br></ol>| Fail |
#### Actual Results:
For this reason, the test failed.
|
test
|
module opening crashing precondition crm app should be allowed to be used by test user steps to reproduce step action expected status double click on crm app crm app opens successfully in active state pass click on coupons module a new tab should open with coupon listing should show coupon name should show it s reward value fail actual results for this reason the test failed
| 1
|
61,998
| 6,772,642,827
|
IssuesEvent
|
2017-10-27 00:11:56
|
istio/istio
|
https://api.github.com/repos/istio/istio
|
closed
|
bug in release-0.2 bot setup
|
automated-release test-infra
|
the bot keeps trying to merge, and successfully merges, auth changes into release-0.2 even though there have been no such changes
https://github.com/istio/istio/commits/release-0.2 has 2 that went through
for instance
https://github.com/istio/istio/commit/6b3c74568fe241dfca8f8d479a6b345d519591d8
https://github.com/istio/auth/tree/release-0.2
last change was 12 days ago
cc @ayj who is trying to make a 0.2 release and will need to make sure the SHAs/tags are all from the actual release-0.2 branches unlike what is now in istio/istio release-0.2 for CA
|
1.0
|
bug in release-0.2 bot setup - the bot keeps trying to merge, and successfully merges, auth changes into release-0.2 even though there have been no such changes
https://github.com/istio/istio/commits/release-0.2 has 2 that went through
for instance
https://github.com/istio/istio/commit/6b3c74568fe241dfca8f8d479a6b345d519591d8
https://github.com/istio/auth/tree/release-0.2
last change was 12 days ago
cc @ayj who is trying to make a 0.2 release and will need to make sure the SHAs/tags are all from the actual release-0.2 branches unlike what is now in istio/istio release-0.2 for CA
|
test
|
bug in release bot setup the bot keeps trying to merge and successfully merges auth changes into release even though there have been no such changes has that went through for instance last change was days ago cc ayj who is trying to make a release and will need to make sure the shas tags are all from the actual release branches unlike what is now in istio istio release for ca
| 1
|
534,643
| 15,631,604,035
|
IssuesEvent
|
2021-03-22 05:17:07
|
kubesphere/kubesphere
|
https://api.github.com/repos/kubesphere/kubesphere
|
closed
|
The status of a new pipeline should not be displayed as 'Warning'
|
area/devops kind/bug priority/medium
|
**Describe the Bug**
**Versions Used**
KubeSphere: `dev:latest`
**Preset conditions**
There is a devops project 'dev1'
**How To Reproduce**
Steps to reproduce the behavior:
1. Go to devops project 'dev1'
2. Create a new pipeline based on gitlab
3. View status of the new pipeline
**Expected behavior**
The status of a new pipeline is 'Healthy'
**Actual behavior**
The status of a new pipeline is 'Warning'

/priority medium
/area devops
/cc @kubesphere/sig-devops
/kind bug
/milestone 3.1.0
|
1.0
|
The status of a new pipeline should not be displayed as 'Warning' - **Describe the Bug**
**Versions Used**
KubeSphere: `dev:latest`
**Preset conditions**
There is a devops project 'dev1'
**How To Reproduce**
Steps to reproduce the behavior:
1. Go to devops project 'dev1'
2. Create a new pipeline based on gitlab
3. View status of the new pipeline
**Expected behavior**
The status of a new pipeline is 'Healthy'
**Actual behavior**
The status of a new pipeline is 'Warning'

/priority medium
/area devops
/cc @kubesphere/sig-devops
/kind bug
/milestone 3.1.0
|
non_test
|
the status of a new pipeline should not be displayed as warning describe the bug versions used kubesphere dev latest preset conditions there is a devops project how to reproduce steps to reproduce the behavior go to devops project create a new pipeline based on gitlab view status of the new pipeline expected behavior the status of a new pipeline is healthy actual behavior the status of a new pipeline is warning priority medium area devops cc kubesphere sig devops kind bug milestone
| 0
|
252,138
| 18,990,451,263
|
IssuesEvent
|
2021-11-22 06:22:37
|
ztsv-av/spellbook
|
https://api.github.com/repos/ztsv-av/spellbook
|
closed
|
Unionize Formatting for README's
|
documentation
|
Come back to me after the important stuff is done. We will go through
- character limit per line
- header blank space management
- standard for item lists and other `.md` elements
Files to go through:
- [x] .`/README.md`
- [x] `object_detection/README.md`
- [x] `projects/README.md`
- [x] `birdclef-2021/README.md`
- [x] `covid19/README.md`
- [x] `exoplanet_hunting/README.md`
|
1.0
|
Unionize Formatting for README's - Come back to me after the important stuff is done. We will go through
- character limit per line
- header blank space management
- standard for item lists and other `.md` elements
Files to go through:
- [x] .`/README.md`
- [x] `object_detection/README.md`
- [x] `projects/README.md`
- [x] `birdclef-2021/README.md`
- [x] `covid19/README.md`
- [x] `exoplanet_hunting/README.md`
|
non_test
|
unionize formatting for readme s come back to me after the important stuff is done we will go through character limit per line header blank space management standard for item lists and other md elements files to go through readme md object detection readme md projects readme md birdclef readme md readme md exoplanet hunting readme md
| 0
|
39,836
| 5,252,143,632
|
IssuesEvent
|
2017-02-02 02:46:18
|
semperfiwebdesign/all-in-one-seo-pack
|
https://api.github.com/repos/semperfiwebdesign/all-in-one-seo-pack
|
closed
|
Uncaught exception ‘BadMethodCallException’ generate_htaccess_blocklist
|
Bug Needs Testing Priority - High
|
Reported here - https://wordpress.org/support/topic/uncaught-exception-badmethodcallexception-generate_htaccess_blocklist/
User states that when updating to WordPress v4.7.2 they get a white screen and this error in the debug log:
Method generate_htaccess_blocklist doesn’t exist’ in /path/to/wordpress/wp-content/plugins/all-in-one-seo-pack/admin/aioseop_module_class.php:52
User is running nginx.
|
1.0
|
Uncaught exception ‘BadMethodCallException’ generate_htaccess_blocklist - Reported here - https://wordpress.org/support/topic/uncaught-exception-badmethodcallexception-generate_htaccess_blocklist/
User states that when updating to WordPress v4.7.2 they get a white screen and this error in the debug log:
Method generate_htaccess_blocklist doesn’t exist’ in /path/to/wordpress/wp-content/plugins/all-in-one-seo-pack/admin/aioseop_module_class.php:52
User is running nginx.
|
test
|
uncaught exception badmethodcallexception generate htaccess blocklist reported here user states that when updating to wordpress they get a white screen and this error in the debug log method generate htaccess blocklist doesn t exist in path to wordpress wp content plugins all in one seo pack admin aioseop module class php user is running nginx
| 1
|
245,871
| 20,799,820,173
|
IssuesEvent
|
2022-03-17 12:56:09
|
compare-ci/admin
|
https://api.github.com/repos/compare-ci/admin
|
closed
|
Automated test 1647521697.166569
|
Test
|
This is a tracking issue for the automated tests being run. Test id: `automated-test-1647521697.166569`
|[python-sum](https://github.com/compare-ci/python-sum/pull/2366)|Pull Created|Check Start|Check End|Total|Check|
|-|-|-|-|-|-|
|CircleCI Checks|12:55:07|12:55:08|12:55:12|0:00:05|0:00:04|
|GitHub Actions|12:55:07|12:55:23|12:55:25|0:00:18|0:00:02|
|Azure Pipelines|12:55:07|12:55:24|12:55:36|0:00:29|0:00:12|
|[node-sum](https://github.com/compare-ci/node-sum/pull/2343)|Pull Created|Check Start|Check End|Total|Check|
|-|-|-|-|-|-|
|CircleCI Checks|12:55:15|12:55:16|12:55:29|0:00:14|0:00:13|
|GitHub Actions|12:55:15|12:55:34|12:55:53|0:00:38|0:00:19|
|
1.0
|
Automated test 1647521697.166569 - This is a tracking issue for the automated tests being run. Test id: `automated-test-1647521697.166569`
|[python-sum](https://github.com/compare-ci/python-sum/pull/2366)|Pull Created|Check Start|Check End|Total|Check|
|-|-|-|-|-|-|
|CircleCI Checks|12:55:07|12:55:08|12:55:12|0:00:05|0:00:04|
|GitHub Actions|12:55:07|12:55:23|12:55:25|0:00:18|0:00:02|
|Azure Pipelines|12:55:07|12:55:24|12:55:36|0:00:29|0:00:12|
|[node-sum](https://github.com/compare-ci/node-sum/pull/2343)|Pull Created|Check Start|Check End|Total|Check|
|-|-|-|-|-|-|
|CircleCI Checks|12:55:15|12:55:16|12:55:29|0:00:14|0:00:13|
|GitHub Actions|12:55:15|12:55:34|12:55:53|0:00:38|0:00:19|
|
test
|
automated test this is a tracking issue for the automated tests being run test id automated test created check start check end total check circleci checks github actions azure pipelines created check start check end total check circleci checks github actions
| 1
|
50,701
| 6,107,775,456
|
IssuesEvent
|
2017-06-21 08:56:28
|
Microsoft/vscode
|
https://api.github.com/repos/Microsoft/vscode
|
opened
|
Test: multi root explorer
|
multi-root testplan-item
|
- [ ] win
- [ ] mac
- [ ] linux
Complexity: 4
Refs: https://github.com/Microsoft/vscode/pull/29030
This milestone we have started to work on the multi root experience, currently this is only available on insiders. Verify:
* Quickly check that the single root experience is the same as before
* You can nicely transition from single root to multi root
* View state is preserved in the explorer between restarts (focus, expand state, opened editor)
* You can drag and drop in the explorer (also between different roots)
* All the context menu actions make sense (check the ones for the roots)
* Deleting, renaming, adding file should work as before
* Check the case when you have the same folder opened twice (try to rename / delete something in that folder, both parents should get updated)
* File events: TODO@isidor
* files.exclude: TODO@isidor
* Be creative in trying to break the explorer
@bpasero feel free to edit
|
1.0
|
Test: multi root explorer - - [ ] win
- [ ] mac
- [ ] linux
Complexity: 4
Refs: https://github.com/Microsoft/vscode/pull/29030
This milestone we have started to work on the multi root experience, currently this is only available on insiders. Verify:
* Quickly check that the single root experience is the same as before
* You can nicely transition from single root to multi root
* View state is preserved in the explorer between restarts (focus, expand state, opened editor)
* You can drag and drop in the explorer (also between different roots)
* All the context menu actions make sense (check the ones for the roots)
* Deleting, renaming, adding file should work as before
* Check the case when you have the same folder opened twice (try to rename / delete something in that folder, both parents should get updated)
* File events: TODO@isidor
* files.exclude: TODO@isidor
* Be creative in trying to break the explorer
@bpasero feel free to edit
|
test
|
test multi root explorer win mac linux complexity refs this milestone we have started to work on the multi root experience currently this is only available on insiders verify quickly check that the single root experience is the same as before you can nicely transition from single root to multi root view state is preserved in the explorer between restarts focus expand state opened editor you can drag and drop in the explorer also between different roots all the context menu actions make sense check the ones for the roots deleting renaming adding file should work as before check the case when you have the same folder opened twice try to rename delete something in that folder both parents should get updated file events todo isidor files exclude todo isidor be creative in trying to break the explorer bpasero feel free to edit
| 1
|
214,775
| 16,611,426,869
|
IssuesEvent
|
2021-06-02 12:01:42
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
Smoke Test
|
testplan-item
|
- [x] Windows @bpasero
- [x] macOS @Tyriar
- [x] Linux @JacksonKearl
Complexity: 2
[Create Issue](https://github.com/microsoft/vscode/issues/new?body=Testing+%23125033%0A%0A)
---
**NOTE:** Desktop & Web tests MUST run with `--build` argument
**NOTE:** Desktop tests MUST run with `--stable-build` argument additionally
Documentation: https://github.com/Microsoft/vscode/blob/main/test/smoke/README.md#run.
If the automated tests fail, create an issue for that and run the tests manually: https://github.com/microsoft/vscode/wiki/Smoke-Test
|
1.0
|
Smoke Test - - [x] Windows @bpasero
- [x] macOS @Tyriar
- [x] Linux @JacksonKearl
Complexity: 2
[Create Issue](https://github.com/microsoft/vscode/issues/new?body=Testing+%23125033%0A%0A)
---
**NOTE:** Desktop & Web tests MUST run with `--build` argument
**NOTE:** Desktop tests MUST run with `--stable-build` argument additionally
Documentation: https://github.com/Microsoft/vscode/blob/main/test/smoke/README.md#run.
If the automated tests fail, create and issue for that and run the tests manually: https://github.com/microsoft/vscode/wiki/Smoke-Test
|
test
|
smoke test windows bpasero macos tyriar linux jacksonkearl complexity note desktop web tests must run with build argument note desktop tests must run with stable build argument additionally documentation if the automated tests fail create an issue for that and run the tests manually
| 1
|
365,489
| 25,538,664,094
|
IssuesEvent
|
2022-11-29 13:52:46
|
department-of-veterans-affairs/va.gov-cms
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
|
opened
|
Create a document with some good ELI5 information on the Queue API.
|
Needs refining documentation โญ๏ธ Sitewide CMS
|
## Description
I feel like the Queue API is underutilized, and part of the reason might be a lack of good ELI5-level documentation that makes it clear how simple and easy to use the Queue API really is. It'd be nice to provide some, especially since we find ourselves writing scripts to perform automated changes across tens of thousands of nodes.
## Acceptance Criteria
- [ ] A README exists within the repo documenting how to use the Queue API.
- [ ] We've attempted to improve the Drupal.org documentation on the Queue API.
|
1.0
|
Create a document with some good ELI5 information on the Queue API. - ## Description
I feel like the Queue API is underutilized, and part of the reason might be a lack of good ELI5-level documentation that makes it clear how simple and easy to use the Queue API really is. It'd be nice to provide some, especially since we find ourselves writing scripts to perform automated changes across tens of thousands of nodes.
## Acceptance Criteria
- [ ] A README exists within the repo documenting how to use the Queue API.
- [ ] We've attempted to improve the Drupal.org documentation on the Queue API.
|
non_test
|
create a document with some good information on the queue api description i feel like the queue api is underutilized and part of the reason might be a lack of good level documentation that makes it clear how simple and easy to use the queue api really is it d be nice to provide some especially since we find ourselves writing scripts to perform automated changes across tens of thousands of nodes acceptance criteria a readme exists within the repo documenting how to use the queue api we ve attempted to improve the drupal org documentation on the queue api
| 0
|
36,200
| 17,533,062,065
|
IssuesEvent
|
2021-08-12 01:28:33
|
eclipse/eclipse.jdt.ls
|
https://api.github.com/repos/eclipse/eclipse.jdt.ls
|
closed
|
completion performance: calculating constantValue is expensive
|
performance
|
When code completion suggests constant fields, it will resolve their values directly and display them in the label section. See the screenshot.

It turns out this is an expensive operation. Especially when a type contains many constant fields, resolving them all during code completion significantly reduces performance. See the profiling result: resolving the constant values for the fields of `org.eclipse.jdt.internal.compiler.ast.ASTNode` costs 45% of the language server's CPU time.

And if I set CompletionHandler.completion as the call-tree root, you can see that resolving constant fields costs more than 90% of the completion handler's CPU time.

**Suggestion**: We can remove the constant value from the label part to avoid the expensive calculations, but keep the value in the Javadoc in case the user wants to see it.
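The idea behind that suggestion generalizes: compute cheap labels eagerly and defer the expensive detail until it is actually requested. A minimal Python sketch of that pattern (illustrative only; the real fix lives in the JDT completion code, and all names here are hypothetical):
```python
class CompletionItem:
    """Completion item whose expensive detail is resolved lazily."""

    def __init__(self, name, resolve_constant):
        self.label = name            # cheap: no constant value in the label
        self._resolve = resolve_constant
        self._doc = None

    @property
    def documentation(self):
        # The expensive resolution runs once, and only when docs are shown.
        if self._doc is None:
            self._doc = "Constant value: {}".format(self._resolve())
        return self._doc

item = CompletionItem("MAX_NODES", lambda: 1 << 20)
print(item.label)          # fast path while the popup renders
print(item.documentation)  # pays the cost only on demand
```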
|
True
|
completion performance: calculating constantValue is expensive - When code completion suggests constant fields, it will resolve their values directly and display them in the label section. See the screenshot.

It turns out this is an expensive operation. Especially when a type contains many constant fields, resolving them all during code completion significantly reduces performance. See the profiling result: resolving the constant values for the fields of `org.eclipse.jdt.internal.compiler.ast.ASTNode` costs 45% of the language server's CPU time.

And if I set CompletionHandler.completion as the call-tree root, you can see that resolving constant fields costs more than 90% of the completion handler's CPU time.

**Suggestion**: We can remove the constant value from the label part to avoid the expensive calculations, but keep the value in the Javadoc in case the user wants to see it.
|
non_test
|
completion performance calculating constantvalue is expensive when code completion suggests constant fields it will resolve their values directly and display them in the label section see the screenshot it turns out this is an expensive operation especially when a type contains many constant fields resolving them all during code completion will significantly reduce performance see the profiling result resolving constant value for the fields of org eclipse jdt internal compiler ast astnode will cost cpu time of the language server and if i set the completionhandler completion as the call tree root you can see that resolving constant field will cost more than cpu time of the completion handler suggestion we can remove the constant value from the label part so as to avoid the expensive calculations but keep its value in javadoc in case the user wants to see its value
| 0
|
159,468
| 6,046,631,917
|
IssuesEvent
|
2017-06-12 12:39:35
|
mborzenkov/Read-Later-List
|
https://api.github.com/repos/mborzenkov/Read-Later-List
|
opened
|
Improve ContentProvider with AbstractThreadedSyncAdapter
|
Priority: High Type: Maintenance
|
AbstractThreadedSyncAdapter:
- https://developer.android.com/training/sync-adapters/creating-sync-adapter.html
- https://developer.android.com/training/efficient-downloads/index.html
|
1.0
|
Improve ContentProvider with AbstractThreadedSyncAdapter - AbstractThreadedSyncAdapter:
- https://developer.android.com/training/sync-adapters/creating-sync-adapter.html
- https://developer.android.com/training/efficient-downloads/index.html
|
non_test
|
improve contentprovider with abstractthreadedsyncadapter abstractthreadedsyncadapter
| 0
|
96,846
| 16,167,131,283
|
IssuesEvent
|
2021-05-01 18:27:07
|
Ryan-Oneil/oneil-industries-website
|
https://api.github.com/repos/Ryan-Oneil/oneil-industries-website
|
closed
|
CVE-2021-23368 (Medium) detected in postcss-7.0.21.tgz, postcss-7.0.35.tgz
|
security vulnerability
|
## CVE-2021-23368 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>postcss-7.0.21.tgz</b>, <b>postcss-7.0.35.tgz</b></p></summary>
<p>
<details><summary><b>postcss-7.0.21.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz</a></p>
<p>Path to dependency file: oneil-industries-website/frontend/package.json</p>
<p>Path to vulnerable library: oneil-industries-website/frontend/node_modules/resolve-url-loader/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.4.tgz (Root Library)
- resolve-url-loader-3.1.2.tgz
- :x: **postcss-7.0.21.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.35.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz</a></p>
<p>Path to dependency file: oneil-industries-website/frontend/package.json</p>
<p>Path to vulnerable library: oneil-industries-website/frontend/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.4.tgz (Root Library)
- css-loader-3.4.2.tgz
- :x: **postcss-7.0.35.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/Ryan-Oneil/oneil-industries-website/commit/eefbb36da29b795d82b2f6a32f4ea94c8bc821f8">eefbb36da29b795d82b2f6a32f4ea94c8bc821f8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss from 7.0.0 and before 8.2.10 is vulnerable to Regular Expression Denial of Service (ReDoS) during source map parsing.
<p>Publish Date: 2021-04-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23368>CVE-2021-23368</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23368">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23368</a></p>
<p>Release Date: 2021-04-12</p>
<p>Fix Resolution: postcss -8.2.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23368 (Medium) detected in postcss-7.0.21.tgz, postcss-7.0.35.tgz - ## CVE-2021-23368 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>postcss-7.0.21.tgz</b>, <b>postcss-7.0.35.tgz</b></p></summary>
<p>
<details><summary><b>postcss-7.0.21.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz</a></p>
<p>Path to dependency file: oneil-industries-website/frontend/package.json</p>
<p>Path to vulnerable library: oneil-industries-website/frontend/node_modules/resolve-url-loader/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.4.tgz (Root Library)
- resolve-url-loader-3.1.2.tgz
- :x: **postcss-7.0.21.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.35.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz</a></p>
<p>Path to dependency file: oneil-industries-website/frontend/package.json</p>
<p>Path to vulnerable library: oneil-industries-website/frontend/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.4.tgz (Root Library)
- css-loader-3.4.2.tgz
- :x: **postcss-7.0.35.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/Ryan-Oneil/oneil-industries-website/commit/eefbb36da29b795d82b2f6a32f4ea94c8bc821f8">eefbb36da29b795d82b2f6a32f4ea94c8bc821f8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss from 7.0.0 and before 8.2.10 is vulnerable to Regular Expression Denial of Service (ReDoS) during source map parsing.
<p>Publish Date: 2021-04-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23368>CVE-2021-23368</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23368">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23368</a></p>
<p>Release Date: 2021-04-12</p>
<p>Fix Resolution: postcss -8.2.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in postcss tgz postcss tgz cve medium severity vulnerability vulnerable libraries postcss tgz postcss tgz postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file oneil industries website frontend package json path to vulnerable library oneil industries website frontend node modules resolve url loader node modules postcss package json dependency hierarchy react scripts tgz root library resolve url loader tgz x postcss tgz vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file oneil industries website frontend package json path to vulnerable library oneil industries website frontend node modules postcss package json dependency hierarchy react scripts tgz root library css loader tgz x postcss tgz vulnerable library found in head commit a href vulnerability details the package postcss from and before are vulnerable to regular expression denial of service redos during source map parsing publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution postcss step up your open source security game with whitesource
| 0
|
19,433
| 3,202,786,022
|
IssuesEvent
|
2015-10-02 15:39:24
|
JPaulMora/Pyrit
|
https://api.github.com/repos/JPaulMora/Pyrit
|
closed
|
Implementation question: OSX + Linux + MPI + crunch + attack_passthrough .. how?
|
auto-migrated Priority-Medium Type-Defect
|
```
This is more of an implementation / best practice question than it is a bug.
I have three machines, a decked out 17" macbook, a decent linux box with a gpu
in it, and a high power windows gaming rig. Understanding that currently there
is no support for doing pyrit operations on windows, I'm leaving that out
of the context of my question.
Is there any way (or a best practice) to use attack_passthrough to pipe
passwords to pyrit, running on multiple systems, setup using MPI to be
multi-cored?
I'm currently doing it on a single computer with two cores, and looking at cpu
graphs, I can see that python is only using one CPU at a time. It jumps back
and forth between the two cores, but only one is used at any given point in
time. I'd like to use "all gpus" and "all cpus" in a given group of machines to
crack WPA.
```
Original issue reported on code.google.com by `viss...@gmail.com` on 3 Sep 2011 at 8:48
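For the passthrough half of the question, the usual single-machine shape is to pipe a generator straight into pyrit. A sketch of that plumbing in Python follows; the capture file and ESSID are placeholders, and the pyrit flags (`-r` capture, `-e` ESSID, `-i -` for stdin) should be double-checked against your pyrit build:
```python
import subprocess

# Generate 8-character candidates with crunch and feed them to pyrit's
# attack_passthrough over a pipe, so nothing is written to disk.
crunch = subprocess.Popen(["crunch", "8", "8"], stdout=subprocess.PIPE)
pyrit = subprocess.Popen(
    ["pyrit", "-r", "capture.cap", "-e", "MyESSID", "-i", "-", "attack_passthrough"],
    stdin=crunch.stdout,
)
crunch.stdout.close()  # allow crunch to receive SIGPIPE if pyrit exits early
pyrit.wait()
```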
|
1.0
|
Implementation question: OSX + Linux + MPI + crunch + attack_passthrough .. how? - ```
This is more of an implementation / best practice question than it is a bug.
I have three machines, a decked out 17" macbook, a decent linux box with a gpu
in it, and a high power windows gaming rig. Understanding that currently there
is no support for doing pyrit operations on windows, I'm leaving that out
of the context of my question.
Is there any way (or a best practice) to use attack_passthrough to pipe
passwords to pyrit, running on multiple systems, setup using MPI to be
multi-cored?
I'm currently doing it on a single computer with two cores, and looking at cpu
graphs, I can see that python is only using one CPU at a time. It jumps back
and forth between the two cores, but only one is used at any given point in
time. I'd like to use "all gpus" and "all cpus" in a given group of machines to
crack WPA.
```
Original issue reported on code.google.com by `viss...@gmail.com` on 3 Sep 2011 at 8:48
|
non_test
|
implementation question osx linux mpi crunch attack passthrough how this is more of an implementation best practice question than it is a bug i have three machines a decked out macbook a decent linux box with a gpu in it and a high power windows gaming rig understanding that currently there is no support for doing pyrit operations on windows i m leaving that out of the context of my question is there any way or a best practice to use attack passthrough to pipe passwords to pyrit running on multiple systems setup using mpi to be multi cored i m currently doing it on a single computer with two cores and looking at cpu graphs i can see that python is only using one cpu at a time it jumps back and forth between the two cores but only one is used at any given point in time i d like to use all gpus and all cpus in a given group of machines to crack wpa original issue reported on code google com by viss gmail com on sep at
| 0
|
292,961
| 25,253,820,666
|
IssuesEvent
|
2022-11-15 16:29:05
|
osquery/osquery
|
https://api.github.com/repos/osquery/osquery
|
closed
|
Unblock CI failing due to the python `psutil` package not found in the macOS runner
|
bug macOS test CI/CD
|
```
82: Test command: /Users/runner/work/osquery/osquery/workspace/install/cmake-3.21.4-macos-universal/CMake.app/Contents/bin/cmake "-E" "env" "PYTHONPATH=/Users/runner/work/osquery/osquery/workspace/build/python_path" "/usr/local/Frameworks/Python.framework/Versions/3.10/bin/python3.10" "-u" "test_osqueryd.py" "--verbose" "--build" "/Users/runner/work/osquery/osquery/workspace/build" "--test-configs-dir" "/Users/runner/work/osquery/osquery/workspace/build/test_configs"
82: Test timeout computed to be: 300
82: Traceback (most recent call last):
82: File "/Users/runner/work/osquery/osquery/workspace/build/tools/tests/test_osqueryd.py", line 19, in <module>
82: import test_base
82: File "/Users/runner/work/osquery/osquery/workspace/build/tools/tests/test_base.py", line 13, in <module>
82: import psutil
82: ModuleNotFoundError: No module named 'psutil'
82/85 Test #82: tools_tests_testosqueryd ..............................................***Failed 0.19 sec
```
I see this failure is increasingly happening on recent builds from master and PRs; I'm not sure what's happening, but probably a new build of the runner broke the detection.
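One stop-gap (a hypothetical workaround, not the project's fix; the proper change is reinstating psutil in the runner image) is to let the test harness bootstrap the missing module:
```python
import subprocess
import sys

try:
    import psutil
except ImportError:
    # The macOS runner image stopped shipping psutil; install it on the fly.
    subprocess.check_call([sys.executable, "-m", "pip", "install", "psutil"])
    import psutil

print(psutil.cpu_count())  # sanity check that the import now resolves
```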
|
1.0
|
Unblock CI failing due to the python `psutil` package not found in the macOS runner - ```
82: Test command: /Users/runner/work/osquery/osquery/workspace/install/cmake-3.21.4-macos-universal/CMake.app/Contents/bin/cmake "-E" "env" "PYTHONPATH=/Users/runner/work/osquery/osquery/workspace/build/python_path" "/usr/local/Frameworks/Python.framework/Versions/3.10/bin/python3.10" "-u" "test_osqueryd.py" "--verbose" "--build" "/Users/runner/work/osquery/osquery/workspace/build" "--test-configs-dir" "/Users/runner/work/osquery/osquery/workspace/build/test_configs"
82: Test timeout computed to be: 300
82: Traceback (most recent call last):
82: File "/Users/runner/work/osquery/osquery/workspace/build/tools/tests/test_osqueryd.py", line 19, in <module>
82: import test_base
82: File "/Users/runner/work/osquery/osquery/workspace/build/tools/tests/test_base.py", line 13, in <module>
82: import psutil
82: ModuleNotFoundError: No module named 'psutil'
82/85 Test #82: tools_tests_testosqueryd ..............................................***Failed 0.19 sec
```
I see this failure is increasingly happening on recent builds from master and PRs; I'm not sure what's happening, but probably a new build of the runner broke the detection.
|
test
|
unblock ci failing due to the python psutil package not found in the macos runner test command users runner work osquery osquery workspace install cmake macos universal cmake app contents bin cmake e env pythonpath users runner work osquery osquery workspace build python path usr local frameworks python framework versions bin u test osqueryd py verbose build users runner work osquery osquery workspace build test configs dir users runner work osquery osquery workspace build test configs test timeout computed to be traceback most recent call last file users runner work osquery osquery workspace build tools tests test osqueryd py line in import test base file users runner work osquery osquery workspace build tools tests test base py line in import psutil modulenotfounderror no module named psutil test tools tests testosqueryd failed sec i see this failure is increasingly happening on recent builds from master and pr not sure what s happening but probably a new build of the runner broke the detection
| 1
|
144,406
| 11,614,997,404
|
IssuesEvent
|
2020-02-26 13:32:58
|
avocode/avocode-email-tagsinput
|
https://api.github.com/repos/avocode/avocode-email-tagsinput
|
opened
|
fix e2e test for CollapsibleTagsInput component
|
tests
|
Spec `should update count when tags are added and input is on multiple lines` is not passing. Need to pull docker image and see what is wrong.
|
1.0
|
fix e2e test for CollapsibleTagsInput component - Spec `should update count when tags are added and input is on multiple lines` is not passing. Need to pull docker image and see what is wrong.
|
test
|
fix test for collapsibletagsinput component spec should update count when tags are added and input is on multiple lines is not passing need to pull docker image and see what is wrong
| 1
|
125,845
| 10,361,185,649
|
IssuesEvent
|
2019-09-06 09:25:54
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
storage: TestTxnRecordLifecycleTransitions: TxnMeta.Timestamp.Logical: 434 != 436
|
C-test-failure
|
From failed master build for which the issue poster didn't fire (https://github.com/cockroachdb/cockroach/issues/40414#event-2611186443)
https://teamcity.cockroachdb.com/viewLog.html?buildId=1470353&buildTypeId=Cockroach_UnitTests
|
1.0
|
storage: TestTxnRecordLifecycleTransitions: TxnMeta.Timestamp.Logical: 434 != 436 - From failed master build for which the issue poster didn't fire (https://github.com/cockroachdb/cockroach/issues/40414#event-2611186443)
https://teamcity.cockroachdb.com/viewLog.html?buildId=1470353&buildTypeId=Cockroach_UnitTests
|
test
|
storage testtxnrecordlifecycletransitions txnmeta timestamp logical from failed master build for which the issue poster didn t fire
| 1
|
32,826
| 8,957,880,427
|
IssuesEvent
|
2019-01-27 09:07:46
|
termux/termux-packages
|
https://api.github.com/repos/termux/termux-packages
|
closed
|
Build failed for unzip
|
bug report building packages
|
<!-- Important note: Refusing to provide needed information may result in issue closing. -->
**Problem description**
Build for unzip failed.
**Steps to reproduce**
https://cirrus-ci.com/build/5737301503115264 performed on https://github.com/Wetitpig/termux-packages/tree/cirrus-ci
|
1.0
|
Build failed for unzip - <!-- Important note: Refusing to provide needed information may result in issue closing. -->
**Problem description**
Build for unzip failed.
**Steps to reproduce**
https://cirrus-ci.com/build/5737301503115264 performed on https://github.com/Wetitpig/termux-packages/tree/cirrus-ci
|
non_test
|
build failed for unzip problem description build for unzip failed steps to reproduce performed on
| 0
|
166,019
| 12,888,537,536
|
IssuesEvent
|
2020-07-13 13:11:46
|
CiscoDevNet/webexteamssdk
|
https://api.github.com/repos/CiscoDevNet/webexteamssdk
|
closed
|
Add ability to initialise API object using OAUTH details
|
docs tests
|
The access_tokens.get method allows you to convert an OAuth code into an access token for making API calls on a user's behalf for [WebexTeams Integrations](https://developer.webex.com/authentication.html). However, the ciscospark package currently won't let you call this method unless you already have an API object, which is a problem, since you need a token to make an API object even though you don't need a token to make this call!
Luckily, the constructor doesn't actually check if the token you give it is valid on creation. As a workaround, I currently initialise an API object with a fake token, use it to convert the OAUTH code into the new token and then use that new token to create another API object I'll use elsewhere. To demonstrate:
```python
from ciscosparkapi import CiscoSparkAPI

def makeAPI(client_id, client_secret, OAuthcode, redirect_uri):
    # The constructor does not validate the token, so any placeholder works here.
    fakeAPI = CiscoSparkAPI(access_token='blahblahblah')
    tokenObject = fakeAPI.access_tokens.get(client_id, client_secret, OAuthcode, redirect_uri)
    realAPI = CiscoSparkAPI(access_token=tokenObject.access_token)
    return realAPI
```
Ideally, I should be able to initialise the API by calling `API = CiscoSparkAPI(client_id, client_secret, OAuthcode, redirect_uri)` or something like that directly.
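For completeness, a usage sketch of the workaround above; the credential values and room id are placeholders, and `messages.create` is part of the package's documented surface:
```python
api = makeAPI(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    OAuthcode="code_from_oauth_redirect",
    redirect_uri="https://example.com/callback",
)
# Use the real API object as usual once the token exchange has happened.
api.messages.create(roomId="SOME_ROOM_ID", text="Token exchange worked!")
```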
|
1.0
|
Add ability to initialise API object using OAUTH details - The access_tokens.get method allows you to convert an OAuth code into an access token for making API calls on a user's behalf for [WebexTeams Integrations](https://developer.webex.com/authentication.html). However, the ciscospark package currently won't let you call this method unless you already have an API object, which is a problem, since you need a token to make an API object even though you don't need a token to make this call!
Luckily, the constructor doesn't actually check if the token you give it is valid on creation. As a workaround, I currently initialise an API object with a fake token, use it to convert the OAUTH code into the new token and then use that new token to create another API object I'll use elsewhere. To demonstrate:
```python
from ciscosparkapi import CiscoSparkAPI

def makeAPI(client_id, client_secret, OAuthcode, redirect_uri):
    # The constructor does not validate the token, so any placeholder works here.
    fakeAPI = CiscoSparkAPI(access_token='blahblahblah')
    tokenObject = fakeAPI.access_tokens.get(client_id, client_secret, OAuthcode, redirect_uri)
    realAPI = CiscoSparkAPI(access_token=tokenObject.access_token)
    return realAPI
```
Ideally, I should be able to initialise the API by calling `API = CiscoSparkAPI(client_id, client_secret, OAuthcode, redirect_uri)` or something like that directly.
|
test
|
add ability to initialise api object using oauth details the access token get method allows you to convert an oauth token into an access token for making api commands on a user s behalf for however currently the ciscospark package won t let you call this method unless you have already an api object which is a problem since you need a token to make an api object even though you don t need a token to make this call luckily the constructor doesn t actually check if the token you give it is valid on creation as a workaround i currently initialise an api object with a fake token use it to convert the oauth code into the new token and then use that new token to create another api object i ll use elsewhere to demonstrate python def makeapi client id client secret oauthcode redirect uri fakeapi ciscosparkapi access token blahblahblah tokenobject fakeapi access tokens get client id client secret oauthcode redirect uri realapi ciscosparkapi access token tokenobject access token return realapi ideally i should be able to initialise the api by calling api ciscosparkapi client id client secret oauthcode redirect uri or something like that directly
| 1
|
105,498
| 9,085,284,742
|
IssuesEvent
|
2019-02-18 07:46:39
|
sb/smallbasic-editor
|
https://api.github.com/repos/sb/smallbasic-editor
|
closed
|
Some tests failing with different CultureInfo setting
|
type/testing
|
My OS is set to the German language, which uses "," instead of "." as the decimal separator. This causes some tests to fail on my machine because floating-point numbers are converted to strings according to the current culture settings but compared against strings that use "." as the separator. One example:
> Expected values to be
"
ar = 1=2;
x = 2.5", but
"
ar = 1=2;
x = 2,5" differs near ",5" (index 18).
Maybe InvariantCulture could be set explicitly for executing the tests?
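The same failure mode is easy to reproduce in any culture-aware formatter; here is a small Python analogue (not the project's C# code, and the `de_DE.UTF-8` locale must be installed on the host for this to run):
```python
import locale

locale.setlocale(locale.LC_NUMERIC, "de_DE.UTF-8")
print(locale.format_string("%.1f", 2.5))  # '2,5' -- culture-dependent output
print(str(2.5))                           # '2.5' -- str() ignores the locale
```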
|
1.0
|
Some tests failing with different CultureInfo setting - My OS is set to the German language, which uses "," instead of "." as the decimal separator. This causes some tests to fail on my machine because floating-point numbers are converted to strings according to the current culture settings but compared against strings that use "." as the separator. One example:
> Expected values to be
"
ar = 1=2;
x = 2.5", but
"
ar = 1=2;
x = 2,5" differs near ",5" (index 18).
Maybe InvariantCulture could be set explicitly for executing the tests?
|
test
|
some tests failing with different cultureinfo setting my os is set to german language which uses instead of as decimal separator this causes some tests to fail on my machine because floating point numbers are converted to string according to the current culture settings but compared to strings with as separator one example expected values to be ar x but ar x differs near index maybe invariantculture could be set explicitly for executing the tests
| 1
|
223,952
| 17,648,015,576
|
IssuesEvent
|
2021-08-20 09:09:12
|
kubeedge/kubeedge
|
https://api.github.com/repos/kubeedge/kubeedge
|
opened
|
Temperature-demo deployment failed. (Experience sharing)
|
kind/failing-test
|
<!-- Please only use this template for submitting reports about failing tests in KubeEdge CI jobs -->
**Which jobs are failing**:
**Which test(s) are failing**:
Temperature-demo
**Since when has it been failing**:
**Reason for failure**:
The temperature-mapper Pod could not be scheduled to the Raspberry Pi node, because the nodeSelector label did not match and the scheduler could not find an appropriate edge node. The node label is displayed as 'kubernetes.io/hostname=raspberrypi', but in the deployment.yaml file the tag is 'name: raspberrypi'. After the label is changed, the pod is scheduled to the Raspberry Pi node. However, after the image built by the cloud node is transmitted to the Raspberry Pi node, the Pod reports a status error. On analysis, the cloud node is AMD64, while the Raspberry Pi node is ARM64. After rebuilding the image locally on the Raspberry Pi node, the Pod runs properly and successfully obtains DHT11 data.
I would like to share the problems I hit and my own practices while trying the demo, hoping to provide a reference for others facing the same issues.
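The scheduling half of this reduces to exact key/value matching between the node's labels and the pod's nodeSelector; a small Python sketch of that rule, using the values described above:
```python
def selector_matches(node_labels, node_selector):
    # Kubernetes nodeSelector semantics: every entry must match a node label exactly.
    return all(node_labels.get(key) == value for key, value in node_selector.items())

node_labels = {"kubernetes.io/hostname": "raspberrypi"}
print(selector_matches(node_labels, {"name": "raspberrypi"}))                     # False -> Pod stays Pending
print(selector_matches(node_labels, {"kubernetes.io/hostname": "raspberrypi"}))  # True  -> schedulable
```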
**Anything else we need to know**:
kubeedge version: v1.7.2
kubectl version: v1.19.3
go version: go1.14.4 linux/amd64 (cloudnode)
go version: go1.14.4 linux/arm64 (Raspberry PI)
|
1.0
|
Temperature-demo deployment failed. (Experience sharing) - <!-- Please only use this template for submitting reports about failing tests in KubeEdge CI jobs -->
**Which jobs are failing**:
**Which test(s) are failing**:
Temperature-demo
**Since when has it been failing**:
**Reason for failure**:
The temperature-mapper Pod could not be scheduled to the Raspberry Pi node, because the nodeSelector label did not match and the scheduler could not find an appropriate edge node. The node label is displayed as 'kubernetes.io/hostname=raspberrypi', but in the deployment.yaml file the tag is 'name: raspberrypi'. After the label is changed, the pod is scheduled to the Raspberry Pi node. However, after the image built by the cloud node is transmitted to the Raspberry Pi node, the Pod reports a status error. On analysis, the cloud node is AMD64, while the Raspberry Pi node is ARM64. After rebuilding the image locally on the Raspberry Pi node, the Pod runs properly and successfully obtains DHT11 data.
I would like to share the problems I hit and my own practices while trying the demo, hoping to provide a reference for others facing the same issues.
**Anything else we need to know**:
kubeedge version: v1.7.2
kubectl version: v1.19.3
go version: go1.14.4 linux/amd64 (cloudnode)
go version: go1.14.4 linux/arm64 (Raspberry PI)
|
test
|
temperature demo deployment failed experience sharing which jobs are failing which test s are failing temperature demo since when has it been failing reason for failure the temperature mapper pod could not be scheduled to the raspberry pi node because the label of nodeselector did not match and the scheduler could not find the appropriate edge node the node label is displayed as kubernetes io hostname raspberrypi but in the deployment yaml file the tag is name raspberrypi after the label is changed the pod is scheduled to the raspberry pi node however after the image built by the cloud node is transmitted to the raspberry pi node the pod status error is found after analysis the cloud node is while the raspberry pi node is an attempt is made to rebuild the image locally on the raspberry pi node after the construction is complete the pod runs properly and successfully obtains data i would like to share some problems and my own practices in the process of trying the demo hoping to provide some reference for students with the same problems anything else we need to know kubeedge version kubectl version go version linux cloudnode go version linux raspberry pi
| 1
|
11,645
| 3,213,512,083
|
IssuesEvent
|
2015-10-06 20:14:13
|
ntop/ntopng
|
https://api.github.com/repos/ntop/ntopng
|
closed
|
ntopng not showing full traffic - Do I need PF_RING - How do I install and use with ntopng
|
Testing Needed
|
I have an nTAP installed feeding into an Intel SFP+ LR optic going into an Intel 82599ES 10-Gigabit SFI/SFP+.
I have ntopng installed
[ntop@ntopng ~]# ntopng --version
v.2.0.150930 [Professional Edition]
GIT rev: dev:f2d2b1b8ad9c00cd60decc30ccb2005486ea08dc:20150930
Pro rev: r473
System Id: 3D0153209105A1EF
Built on: CentOS Linux release 7.1.1503 (Core)
With nload started and running on the interface that is tapped into the server, I see the following.
Curr: 339.58 MB/s
Avg: 272.06 MB/s
Max: 602.82 MB/s
Ttl: 1127.43 GByte
© 1998-2015 - ntop.org
Generated by ntopng Professional v.2.0.150930
for user admin and interface p2p1
193.27 Mbps [74,024 pps]
Uptime: 2 h, 48 min, 56 sec
Those numbers don't match up. So I spoke a bit to Luca, and he suggested PF_RING ZC; now I am looking for a PF_RING install or setup guide. Do I need PF_RING ZC? What does that do compared to plain PF_RING?
I really need to be able to watch traffic on my network, which currently pushes 1.95 Gbps. Right now it doesn't look like all the packets are being inspected properly.
|
1.0
|
ntopng not showing full traffic - Do I need PF_RING - How do I install and use with ntopng - I have an nTAP installed feeding into an Intel SFP+ LR optic going into an Intel 82599ES 10-Gigabit SFI/SFP+.
I have ntopng installed
[ntop@ntopng ~]# ntopng --version
v.2.0.150930 [Professional Edition]
GIT rev: dev:f2d2b1b8ad9c00cd60decc30ccb2005486ea08dc:20150930
Pro rev: r473
System Id: 3D0153209105A1EF
Built on: CentOS Linux release 7.1.1503 (Core)
With nload started and running on the interface that is tapped into the server, I see the following.
Curr: 339.58 MB/s
Avg: 272.06 MB/s
Max: 602.82 MB/s
Ttl: 1127.43 GByte
© 1998-2015 - ntop.org
Generated by ntopng Professional v.2.0.150930
for user admin and interface p2p1
193.27 Mbps [74,024 pps]
Uptime: 2 h, 48 min, 56 sec
Those numbers don't match up. So I spoke a bit to Luca, and he suggested PF_RING ZC; now I am looking for a PF_RING install or setup guide. Do I need PF_RING ZC? What does that do compared to plain PF_RING?
I really need to be able to watch traffic on my network, which currently pushes 1.95 Gbps. Right now it doesn't look like all the packets are being inspected properly.
|
test
|
ntopng not showing full traffic do i need pf ring how do i install and use with ntopng i have a ntap installed feeding into an intel sfp lr optic going into an intel gigabit sfi sfp i have ntopng installed ntopng version v git rev dev pro rev system id built on centos linux release core with nload started and running on the interface that is tapped into the server i see the following curr mb s avg mb s max mb s ttl gbyte ntop org generated by ntopng professional v for user admin and interface mbps uptime h min sec those numbers don t match up so i spoke a bit to luca and he suggested pf ring zc now i am looking for a pf ring install guide or setup guide do i need pf ring zc what does that do compared to pf ring i really need to be able to watch traffic on my network which pushes currently gbps right now it doesn t look like all the packets are being inspected properly
| 1
|
89,224
| 11,205,648,391
|
IssuesEvent
|
2020-01-05 15:39:37
|
nextcloud/android
|
https://api.github.com/repos/nextcloud/android
|
opened
|
Text editing has different share dialog
|
design enhancement
|
The share dialog when editing text documents looks quite different from the sidebar one. Why is that? (Also, the primary button is black even though the theming is not changed?)

|
1.0
|
Text editing has different share dialog - The share dialog when editing text documents looks quite different from the sidebar one. Why is that? (Also, the primary button is black even though the theming is not changed?)

|
non_test
|
text editing has different share dialog the share dialog when editing text documents looks quite different from the sidebar one why is that also the primary button is black even though the theming is not changed
| 0
|
243,610
| 26,285,216,257
|
IssuesEvent
|
2023-01-07 19:05:27
|
rsoreq/beats
|
https://api.github.com/repos/rsoreq/beats
|
closed
|
CVE-2022-27664 (High) detected in github.com/golang/net-v0.0.0-20200904194848-62affa334b73 - autoclosed
|
security vulnerability
|
## CVE-2022-27664 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/golang/net-v0.0.0-20200904194848-62affa334b73</b></p></summary>
<p>[mirror] Go supplementary network libraries</p>
<p>Library home page: <a href="https://proxy.golang.org/github.com/golang/net/@v/v0.0.0-20200904194848-62affa334b73.zip">https://proxy.golang.org/github.com/golang/net/@v/v0.0.0-20200904194848-62affa334b73.zip</a></p>
<p>
Dependency Hierarchy:
- github.com/docker/go-plugins-helpers/sdk-22072378427bffd471a5681c979b88f9f4f10715 (Root Library)
- github.com/docker/go-connections/sockets-v0.4.0
- github.com/golang/net-v0.0.0-20200904194848-62affa334b73
- :x: **github.com/golang/net-v0.0.0-20200904194848-62affa334b73** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rsoreq/beats/commit/a2fa76330818078401465c112c08c753b82a0aec">a2fa76330818078401465c112c08c753b82a0aec</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In net/http in Go before 1.18.6 and 1.19.x before 1.19.1, attackers can cause a denial of service because an HTTP/2 connection can hang during closing if shutdown were preempted by a fatal error.
<p>Publish Date: 2022-09-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-27664>CVE-2022-27664</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
|
True
|
CVE-2022-27664 (High) detected in github.com/golang/net-v0.0.0-20200904194848-62affa334b73 - autoclosed - ## CVE-2022-27664 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/golang/net-v0.0.0-20200904194848-62affa334b73</b></p></summary>
<p>[mirror] Go supplementary network libraries</p>
<p>Library home page: <a href="https://proxy.golang.org/github.com/golang/net/@v/v0.0.0-20200904194848-62affa334b73.zip">https://proxy.golang.org/github.com/golang/net/@v/v0.0.0-20200904194848-62affa334b73.zip</a></p>
<p>
Dependency Hierarchy:
- github.com/docker/go-plugins-helpers/sdk-22072378427bffd471a5681c979b88f9f4f10715 (Root Library)
- github.com/docker/go-connections/sockets-v0.4.0
- github.com/golang/net-v0.0.0-20200904194848-62affa334b73
- :x: **github.com/golang/net-v0.0.0-20200904194848-62affa334b73** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rsoreq/beats/commit/a2fa76330818078401465c112c08c753b82a0aec">a2fa76330818078401465c112c08c753b82a0aec</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In net/http in Go before 1.18.6 and 1.19.x before 1.19.1, attackers can cause a denial of service because an HTTP/2 connection can hang during closing if shutdown were preempted by a fatal error.
<p>Publish Date: 2022-09-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-27664>CVE-2022-27664</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
|
non_test
|
cve high detected in github com golang net autoclosed cve high severity vulnerability vulnerable library github com golang net go supplementary network libraries library home page a href dependency hierarchy github com docker go plugins helpers sdk root library github com docker go connections sockets github com golang net x github com golang net vulnerable library found in head commit a href found in base branch master vulnerability details in net http in go before and x before attackers can cause a denial of service because an http connection can hang during closing if shutdown were preempted by a fatal error publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href
| 0
|
194,669
| 6,897,625,278
|
IssuesEvent
|
2017-11-24 04:10:34
|
qhacks/hacker-dashboard
|
https://api.github.com/repos/qhacks/hacker-dashboard
|
opened
|
Solution around splitting client and server dependencies
|
priority: nice-to-have (low)
|
**Problem**
We should look at using `lerna` to split the client and server dependencies into different `package.json` files.
**Requirements**
- [ ] Isolate client and server dependencies within the project
|
1.0
|
Solution around splitting client and server dependencies - **Problem**
We should look at using `lerna` to split the client and server dependencies into different `package.json` files.
**Requirements**
- [ ] Isolate client and server dependencies within the project
|
non_test
|
solution around splitting client and server dependencies problem we should look at using lerna to split the client and server dependencies into different package json files requirements isolate client and server dependencies within the project
| 0
|
250,445
| 21,299,810,444
|
IssuesEvent
|
2022-04-15 00:32:50
|
UglyToad/PdfPig
|
https://api.github.com/repos/UglyToad/PdfPig
|
closed
|
NotSupportedException in PdfDocumentBuilder.Build()
|
enhancement testing
|
I found two PDF files that cause the following exception in `PdfDocumentBuilder.Build()` after they are copied via `PdfDocumentBuilder.AddPage(source, pageNum)` to a new PDF:
```
System.NotSupportedException
HResult=0x80131515
Message=Object numbers must form a contiguous range
Source=UglyToad.PdfPig
StackTrace:
at UglyToad.PdfPig.Writer.TokenWriter.WriteCrossReferenceTable(IReadOnlyDictionary`2 objectOffsets, IndirectReference catalogToken, Stream outputStream, Nullable`1 documentInformationReference)
at UglyToad.PdfPig.Writer.PdfStreamWriter.CompletePdf(IndirectReferenceToken catalogReference, IndirectReferenceToken documentInformationReference)
at UglyToad.PdfPig.Writer.PdfDocumentBuilder.CompleteDocument()
at UglyToad.PdfPig.Writer.PdfDocumentBuilder.Build()
```
The files do **not** show any error when they are opened with Adobe Reader or other PDF readers. So I'm not sure if they are corrupted or if this is a bug in PdfPig.
Either way, can PdfPig be changed to handle this case?
The two PDF files should not be shared publicly; I'll send them to you via email.
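The exception message pins down the invariant the writer enforces: sorted object numbers must form one unbroken run. A tiny Python sketch of that check (illustrative only, not PdfPig's code):
```python
def is_contiguous(object_numbers):
    nums = sorted(object_numbers)
    if not nums:
        return True
    return nums == list(range(nums[0], nums[0] + len(nums)))

print(is_contiguous([1, 2, 3, 4]))  # True  -- accepted by the writer
print(is_contiguous([1, 2, 4, 5]))  # False -- triggers the "contiguous range" error
```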
|
1.0
|
NotSupportedException in PdfDocumentBuilder.Build() - I found two PDF files that cause the following exception in `PdfDocumentBuilder.Build()` after they are copied via `PdfDocumentBuilder.AddPage(source, pageNum)` to a new PDF:
```
System.NotSupportedException
HResult=0x80131515
Message=Object numbers must form a contiguous range
Source=UglyToad.PdfPig
StackTrace:
at UglyToad.PdfPig.Writer.TokenWriter.WriteCrossReferenceTable(IReadOnlyDictionary`2 objectOffsets, IndirectReference catalogToken, Stream outputStream, Nullable`1 documentInformationReference)
at UglyToad.PdfPig.Writer.PdfStreamWriter.CompletePdf(IndirectReferenceToken catalogReference, IndirectReferenceToken documentInformationReference)
at UglyToad.PdfPig.Writer.PdfDocumentBuilder.CompleteDocument()
at UglyToad.PdfPig.Writer.PdfDocumentBuilder.Build()
```
The files do **not** show any error when they are opened with Adobe Reader or other PDF readers. So I'm not sure if they are corrupted or if this is a bug in PdfPig.
Either way, can PdfPig be changed to handle this case?
The two PDF files should not be shared publicly; I'll send them to you via email.
|
test
|
notsupportedexception in pdfdocumentbuilder build i found two pdf files that cause the following exception in pdfdocumentbuilder build after they are copied via pdfdocumentbuilder addpage source pagenum to a new pdf system notsupportedexception hresult message object numbers must form a contiguous range source uglytoad pdfpig stacktrace at uglytoad pdfpig writer tokenwriter writecrossreferencetable ireadonlydictionary objectoffsets indirectreference catalogtoken stream outputstream nullable documentinformationreference at uglytoad pdfpig writer pdfstreamwriter completepdf indirectreferencetoken catalogreference indirectreferencetoken documentinformationreference at uglytoad pdfpig writer pdfdocumentbuilder completedocument at uglytoad pdfpig writer pdfdocumentbuilder build the files do not show any error when they are opened with adobe reader or other pdf readers so i m not sure if they are corrupted or if this is a bug in pdfpig either way can pdfpig be changed to handle this case the two pdf files should not be shared publicly i ll send them to you via email
| 1
|
28,460
| 8,148,612,029
|
IssuesEvent
|
2018-08-22 06:44:25
|
SouthAfricaDigitalScience/gromacs-deploy
|
https://api.github.com/repos/SouthAfricaDigitalScience/gromacs-deploy
|
closed
|
gromacs : incorrect install path
|
build failures in progress
|
Build 22 passed build and check-build, but failed on deploy with
```
Install the project...
-- Install configuration: "Release"
CMake Error at cmake_install.cmake:36 (file):
file cannot create directory: /usr/local/gromacs/share/gromacs. Maybe need
administrative privileges.
```
the prefix was set:
`-DCMAKE_INSTALL_PREFIX=${SOFT_DIR}/${VERSION}-gcc-${GCC_VERSION}-mpi-${OPENMPI_VERSION}`
which was similarly set in `check-build.sh`. It seems this was not seen by CMake.
|
1.0
|
gromacs : incorrect install path - Build 22 passed build and check-build, but failed on deploy with
```
Install the project...
-- Install configuration: "Release"
CMake Error at cmake_install.cmake:36 (file):
file cannot create directory: /usr/local/gromacs/share/gromacs. Maybe need
administrative privileges.
```
the prefix was set:
`-DCMAKE_INSTALL_PREFIX=${SOFT_DIR}/${VERSION}-gcc-${GCC_VERSION}-mpi-${OPENMPI_VERSION}`
which was similarly set in `check-build.sh`. It seems this was not seen by CMake.
|
non_test
|
gromacs incorrect install path build passed build and check build but failed on deploy with install the project install configuration release cmake error at cmake install cmake file file cannot create directory usr local gromacs share gromacs maybe need administrative privileges the prefix was set dcmake install prefix soft dir version gcc gcc version mpi openmpi version which was similarly set in check build sh it seems this was not seen by cmake
| 0
|
107,122
| 9,202,890,660
|
IssuesEvent
|
2019-03-08 00:02:19
|
pbtoast/polytope
|
https://api.github.com/repos/pbtoast/polytope
|
opened
|
A continuous integration (CI) system would be nice.
|
testing
|
We could use Travis CI (https://travis-ci.org) to set up automated builds/tests when we submit pull requests. This would give us a lot more confidence that things are working.
|
1.0
|
A continuous integration (CI) system would be nice. - We could use Travis CI (https://travis-ci.org) to set up automated builds/tests when we submit pull requests. This would give us a lot more confidence that things are working.
|
test
|
a continuous integration ci system would be nice we could use travis ci to set up automated builds tests when we submit pull requests this would give us a lot more confidence that things are working
| 1
|
144,812
| 19,307,402,513
|
IssuesEvent
|
2021-12-13 13:02:56
|
jgeraigery/Java-Demo
|
https://api.github.com/repos/jgeraigery/Java-Demo
|
opened
|
CVE-2016-2510 (High) detected in bsh-core-2.0b4.jar
|
security vulnerability
|
## CVE-2016-2510 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bsh-core-2.0b4.jar</b></p></summary>
<p>BeanShell core</p>
<p>Path to dependency file: Java-Demo/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/beanshell/bsh-core/2.0b4/bsh-core-2.0b4.jar</p>
<p>
Dependency Hierarchy:
- esapi-2.1.0.1.jar (Root Library)
- :x: **bsh-core-2.0b4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/Java-Demo/commit/c6fb981b15217a4d0cfc36ccf725182fdf783ef1">c6fb981b15217a4d0cfc36ccf725182fdf783ef1</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
BeanShell (bsh) before 2.0b6, when included on the classpath by an application that uses Java serialization or XStream, allows remote attackers to execute arbitrary code via crafted serialized data, related to XThis.Handler.
<p>Publish Date: 2016-04-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-2510>CVE-2016-2510</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-2510">https://nvd.nist.gov/vuln/detail/CVE-2016-2510</a></p>
<p>Release Date: 2016-04-07</p>
<p>Fix Resolution: 2.0b6</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.beanshell","packageName":"bsh-core","packageVersion":"2.0b4","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.owasp.esapi:esapi:2.1.0.1;org.beanshell:bsh-core:2.0b4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.0b6","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2016-2510","vulnerabilityDetails":"BeanShell (bsh) before 2.0b6, when included on the classpath by an application that uses Java serialization or XStream, allows remote attackers to execute arbitrary code via crafted serialized data, related to XThis.Handler.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-2510","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
|
non_test
|
| 0
|
38,352
| 15,647,304,195
|
IssuesEvent
|
2021-03-23 02:57:57
|
Azure/azure-sdk-for-net
|
https://api.github.com/repos/Azure/azure-sdk-for-net
|
closed
|
[QUERY] Message processor stops processing messages.
|
Client Service Bus customer-reported needs-team-attention question
|
**Query/Question**
I am hosting a message receiver and a session message receiver in an ASP.NET Core hosted service. I am noticing the receivers going into long (very long) periods of inactivity where no messages are being processed. Restarting the consumers or adding new consumers has no effect. This issue affects multiple queues, sometimes at the same time. Sometimes one queue will start processing again and deliver messages to another queue that still won't process anything.
I am also periodically seeing this error:
Azure.Messaging.ServiceBus.ServiceBusException
The lock supplied is invalid. Either the lock expired, or the message has already been removed from the queue, or was received by a different receiver instance. (MessageLockLost)
This happens when dead-lettering a message. There are also a few instances of TaskCanceledException when trying to dead-letter a message.
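If the handler can outlive the message lock, one common mitigation is to extend the processor's automatic lock renewal; a minimal sketch, assuming the same `Azure.Messaging.ServiceBus` 7.x options type used below (the 10-minute figure is illustrative):
``` csharp
using System;
using Azure.Messaging.ServiceBus;

var options = new ServiceBusProcessorOptions
{
    AutoCompleteMessages = false,
    MaxConcurrentCalls = 1,
    // Renew the message lock automatically for up to 10 minutes while the
    // handler runs; once the renewal window lapses, CompleteMessageAsync and
    // DeadLetterMessageAsync fail with MessageLockLost.
    MaxAutoLockRenewalDuration = TimeSpan.FromMinutes(10)
};
```
Renewal only helps handlers that legitimately run long; it does not explain receivers that stop pulling messages entirely.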
Below is the hosted service that runs the processor.
``` csharp
public class MessageReceiver : IHostedService
{
private readonly IContainer _container;
private readonly IDatabase _database;
private readonly ServiceBusProcessor _receiver;
private readonly ServiceBusProcessor _deadletterReceiver;
private readonly ILogger<MessageReceiver> _logger;
private const string MessageMimeType = "application/bson";
/// <summary>
/// Creates a message receiver hosted service for the desired queue and/or subscription.
/// </summary>
/// <param name="container">The dependency container. When a new message is received a new DI scope is created and the message and all dependencies are handled within that scope.</param>
/// <param name="serviceBusClient">The service bus client used to create a message processor.</param>
/// <param name="queueName">The queue to listen for new messages on.</param>
/// <param name="subscription">The optional subscription to target.</param>
public MessageReceiver(IContainer container, ServiceBusClient serviceBusClient, IDatabase database, string queueName, string subscription = null, int maxConcurrency = 1)
{
_container = container;
_database = database;
_logger = container.GetService<ILogger<MessageReceiver>>();
var options = new ServiceBusProcessorOptions
{
AutoCompleteMessages = false,
MaxConcurrentCalls = maxConcurrency,
//PrefetchCount = 10
};
if (subscription == null)
{
_receiver = serviceBusClient.CreateProcessor(queueName, options);
_deadletterReceiver = serviceBusClient.CreateProcessor($"{queueName}/$deadletterqueue");
}
else
{
_receiver = serviceBusClient.CreateProcessor(queueName, subscription, options);
_deadletterReceiver = serviceBusClient.CreateProcessor($"{queueName}/Subscriptions/{subscription}/$deadletterqueue");
}
}
private async Task DeadletterReceiverOnProcessMessageAsync(ProcessMessageEventArgs arg)
{
using var nestedContainer = _container.GetNestedContainer();
var logger = nestedContainer.GetInstance<ILogger<MessageSessionReceiver>>();
var correlationId = arg.Message.CorrelationId;
var jobId = arg.Message.ApplicationProperties["JobId"].ToString();
var fileId = arg.Message.ApplicationProperties["FileId"].ToString();
using (logger.BeginScope(new Dictionary<string, object>
{
{"CorrelationId", correlationId},
{"JobId", jobId},
{"FileId", fileId},
}))
{
try
{
logger.LogDebug("Processing failed message.");
arg.Message.ApplicationProperties.TryGetValue("PageNo", out var pageNo);
arg.Message.ApplicationProperties.TryGetValue("LogicalPageNo", out var logicalPageNo);
var context = nestedContainer.GetInstance<IContext>();
var failFile = arg.Message.ApplicationProperties.TryGetValue("DocumentType", out var documentType)
&& (string) documentType == "MultiPageDocument";
if (failFile || (pageNo == null && logicalPageNo == null))
{
await context.WriteAsync(jobId, new FailFileCommand() {FileId = fileId});
//file level processing has failed (probably the get pdf info/convert to pdf stage)
//so formally abort the file
await _database.StringSetAsync($"$Abort_{fileId}", "1", TimeSpan.FromDays(1));
}
else
{
await context.WriteAsync(jobId, new ChangePageStateCommand
{
FileId = fileId,
PageNumbers = new[] { int.Parse(logicalPageNo?.ToString() ?? pageNo?.ToString()) },
State = BatchFileProcessingEntity.OperationState.Failed
});
await _database.StringSetAsync($"$Abort_{fileId}_{logicalPageNo ?? pageNo}", "1",
TimeSpan.FromDays(1));
}
await arg.CompleteMessageAsync(arg.Message, arg.CancellationToken);
}
catch (Exception ex)
{
logger.LogError(ex, "Error occurred processing message failure.");
}
finally
{
//messages remaining in the DLQ should be investigated
}
}
}
public async Task StartAsync(CancellationToken cancellationToken)
{
_receiver.ProcessMessageAsync += ReceiverOnProcessMessageAsync;
_receiver.ProcessErrorAsync += ReceiverOnProcessErrorAsync;
_deadletterReceiver.ProcessMessageAsync += DeadletterReceiverOnProcessMessageAsync;
_deadletterReceiver.ProcessErrorAsync += ReceiverOnProcessErrorAsync;
await Task.WhenAll(_deadletterReceiver.StartProcessingAsync(cancellationToken), _receiver.StartProcessingAsync(cancellationToken));
_logger.LogInformation("Receiver listening to {Queue}", _receiver.EntityPath);
}
public async Task StopAsync(CancellationToken cancellationToken)
{
    // stop and close both the main and the dead-letter processor
    await _deadletterReceiver.StopProcessingAsync(cancellationToken);
    await _deadletterReceiver.CloseAsync(cancellationToken);
    await _receiver.StopProcessingAsync(cancellationToken);
    await _receiver.CloseAsync(cancellationToken);
}
private Task ReceiverOnProcessErrorAsync(ProcessErrorEventArgs arg)
{
_logger.LogError(arg.Exception, "SessionMessageProcessing Failed");
return Task.CompletedTask;
}
private async Task ReceiverOnProcessMessageAsync(ProcessMessageEventArgs arg)
{
try
{
var message = arg.Message;
var cancellationToken = arg.CancellationToken;
if (arg.Message.ApplicationProperties.TryGetValue("FileId", out var fileId))
{
var checkResult = false;
await Policy.Handle<RedisConnectionException>()
.WaitAndRetryAsync(10, (count) => TimeSpan.FromMilliseconds(count * 250))
.ExecuteAsync(async () =>
{
if (await _database.KeyExistsAsync($"$Abort_{fileId}"))
{
await arg.CompleteMessageAsync(message, cancellationToken);
checkResult = true;
return;
}
if (arg.Message.ApplicationProperties.TryGetValue("LogicalPageNo", out var logicalPageNo))
{
if (await _database.KeyExistsAsync($"$Abort_{fileId}_{logicalPageNo}"))
{
await arg.CompleteMessageAsync(message, cancellationToken);
checkResult = true;
return;
}
}
if (message.ApplicationProperties.ContainsKey("AbortFileId"))
{
await _database.StringSetAsync(
$"$Abort_{message.ApplicationProperties["AbortFileId"]}",
"1", TimeSpan.FromDays(1));
await arg.CompleteMessageAsync(message, cancellationToken);
checkResult = true;
return;
}
});
if (checkResult)
{
return;
}
}
arg.Message.ApplicationProperties.TryGetValue("OrganizationId", out var organisationId);
arg.Message.ApplicationProperties.TryGetValue("ProjectId", out var projectId);
var (payloadTypeName, userId, correlationId) = message.GetCommonMessageProperties();
var otherProperties = message.GetOtherMessageProperties();
var logValues = new Dictionary<string, object>
{
{"CorrelationId", correlationId}
};
foreach (var pair in otherProperties)
{
logValues[pair.Key] = pair.Value;
}
using (var nestedContainer = _container.GetNestedContainer())
{
nestedContainer.Inject(typeof(ICorrelationId), new CorrelationId(correlationId), true);
var logger = nestedContainer.GetInstance<ILogger<MessageReceiver>>();
using (logger.BeginScope(logValues))
{
try
{
//this validation is done here so all logging is scoped properly.
if (payloadTypeName == null || userId == null)
{
logger.LogDebug("A message was received with no specified payload type or no security token.");
await arg.DeadLetterMessageAsync(arg.Message, "Message is not in a format MessageReceiver can process.",
cancellationToken: cancellationToken);
return;
}
var claimList = new List<Claim>
{
new Claim(ClaimTypes.NameIdentifier, userId)
};
if (organisationId != null)
{
claimList.Add(new Claim("OrganizationId", organisationId.ToString()));
}
if (projectId != null)
{
claimList.Add(new Claim("ProjectId", projectId.ToString()));
}
var user = new ClaimsPrincipal(new ClaimsIdentity(claimList));
using (logger.BeginScope(new Dictionary<string, object>
{
{"UserId", userId}
}))
{
var messageBytes = message.Body.ToArray();
var request = RequestTypeAggregator.Deserialize(messageBytes, payloadTypeName, message.ContentType);
if (request == null)
{
logger.LogDebug("Failed to deserialize request payload. Type {payloadTypeName}", payloadTypeName);
await arg.DeadLetterMessageAsync(message, "Request payload was invalid.",
cancellationToken: cancellationToken);
return;
}
nestedContainer.Inject(typeof(IPrincipal), user, true);
nestedContainer.Inject(user, true);
var mediator = nestedContainer.GetInstance<IMediator>();
try
{
await mediator.Send(request, cancellationToken);
await arg.CompleteMessageAsync(message, cancellationToken);
}
catch
{
// any upstream errors in the mediator are logged by the logging behaviour and are not our concern here
await arg.DeadLetterMessageAsync(message, "Request execution failed with error.",
cancellationToken: cancellationToken);
}
}
}
catch (Exception ex)
{
logger.LogError(ex, "Exception with message handling.");
await arg.DeadLetterMessageAsync(message, "Unexpected error processing request.", cancellationToken: cancellationToken);
}
}
}
}
catch (Exception ex)
{
//this logger has no scope
_logger.LogError(ex, "Exception with message handling.");
await arg.DeadLetterMessageAsync(arg.Message, "Unexpected error processing request.", cancellationToken: arg.CancellationToken);
}
}
}
```
**Environment:**
- Azure.Messaging.ServiceBus 7.0.1
- Running in an Azure k8s pod from the Linux ASP.NET Core Docker image
|
1.0
|
|
non_test
|
| 0
|
163,885
| 12,749,244,407
|
IssuesEvent
|
2020-06-26 22:11:16
|
googleforgames/agones
|
https://api.github.com/repos/googleforgames/agones
|
closed
|
Better cleanup of namespace on e2e test failure
|
area/tests kind/bug
|
**What happened**:
The e2e cluster ends up in a non-recoverable state when namespaces filled with game servers aren't cleaned up on some e2e test failures.
I'm manually cleaning the cluster around once a day, at least.
**What you expected to happen**:
GameServers (and Fleets, etc.) should be deleted from the test cluster both when a test starts and when it ends.
I would recommend adding a label to test namespaces, and then searching for them by label and deleting them at the beginning of the test as well as at the end, as in the sketch below.
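A minimal sketch of that label-based cleanup (the `e2e-test` label name is illustrative):
```shell
# tag every namespace the e2e suite creates
kubectl create namespace "$TEST_NS"
kubectl label namespace "$TEST_NS" e2e-test=true

# sweep all tagged namespaces at suite start and again at suite end;
# deleting a namespace also garbage-collects the GameServers, Fleets,
# and other resources inside it
kubectl delete namespaces -l e2e-test=true --ignore-not-found
```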
**How to reproduce it (as minimally and precisely as possible)**:
Wait for an e2e test to fail, then review the cluster.
**Anything else we need to know?**:
No.
**Environment**:
- Agones version: Development
- Kubernetes version (use `kubectl version`): 1.15
- Cloud provider or hardware configuration: GKE
- Install method (yaml/helm): Helm
- Troubleshooting guide log(s):
```shell
markmandel@cloudshell:~ (agones-images)$ kubectl get gs --all-namespaces
NAMESPACE NAME STATE ADDRESS PORT NODE AGE
1593180083 simple-fleet-4lhb7-6gn6l-br7s6 Ready 35.203.183.113 7783 gke-e2e-test-cluster-default-d46634d8-01t8 172m
1593180083 simple-fleet-4lhb7-6gn6l-ctq75 Ready 35.230.49.39 7023 gke-e2e-test-cluster-default-d46634d8-5qvv 172m
1593180083 simple-fleet-4lhb7-6gn6l-gdbvw Ready 35.203.183.113 7549 gke-e2e-test-cluster-default-d46634d8-01t8 171m
1593180083 simple-fleet-4lhb7-6gn6l-jsdwz Ready 35.230.49.39 7387 gke-e2e-test-cluster-default-d46634d8-5qvv 172m
1593180083 simple-fleet-4lhb7-6gn6l-k4rrv Ready 35.203.183.113 7189 gke-e2e-test-cluster-default-d46634d8-01t8 172m
1593180083 simple-fleet-4lhb7-6gn6l-mwzjk Ready 35.203.183.113 7003 gke-e2e-test-cluster-default-d46634d8-01t8 171m
1593180083 simple-fleet-4lhb7-6gn6l-rf6ql Ready 35.203.183.113 7057 gke-e2e-test-cluster-default-d46634d8-01t8 171m
1593180083 simple-fleet-4lhb7-6gn6l-tsnsh Ready 35.203.183.113 7989 gke-e2e-test-cluster-default-d46634d8-01t8 171m
1593180083 simple-fleet-gw8ml-fhwkt-4wqwx Scheduled 35.247.81.255 7581 gke-e2e-test-cluster-default-d46634d8-dwmn 11s
1593180083 simple-fleet-gw8ml-fhwkt-q7nrk Scheduled 35.247.81.255 7759 gke-e2e-test-cluster-default-d46634d8-dwmn 10s
1593180083 simple-fleet-gw8ml-fhwkt-x8c8p Scheduled 35.247.81.255 7050 gke-e2e-test-cluster-default-d46634d8-dwmn 11s
1593180083 simple-fleet-tch44-hxmdl-jrd8g Ready 35.247.81.255 7471 gke-e2e-test-cluster-default-d46634d8-dwmn 175m
1593180083 simple-fleet-tch44-hxmdl-qsrxn Allocated 35.247.81.255 7106 gke-e2e-test-cluster-default-d46634d8-dwmn 175m
1593180083 simple-fleet-tch44-hxmdl-vk7wd Ready 35.247.81.255 7055 gke-e2e-test-cluster-default-d46634d8-dwmn 175m
1593180083 simple-fleet-xjg6h-9gprx-45t5d Ready 34.82.230.193 0 gke-e2e-test-cluster-default-d46634d8-fz4x 173m
1593180083 simple-fleet-xjg6h-9gprx-87jsn Ready 34.82.230.193 0 gke-e2e-test-cluster-default-d46634d8-fz4x 173m
1593180083 simple-fleet-xjg6h-9gprx-rc7ws Ready 35.247.81.255 0 gke-e2e-test-cluster-default-d46634d8-dwmn 173m
1593180083 udp-server427f4 Ready 34.82.244.246 5555 gke-e2e-test-cluster-default-d46634d8-tcg4 172m
1593180083 udp-server4fzrg Unhealthy 172m
1593180083 udp-server96k4h Unhealthy 35.247.81.255 7188 gke-e2e-test-cluster-default-d46634d8-dwmn 174m
1593180083 udp-serverghnkn Unhealthy 35.247.81.255 7641 gke-e2e-test-cluster-default-d46634d8-dwmn 174m
1593180083 udp-servergpq9z Ready 34.82.230.193 5555 gke-e2e-test-cluster-default-d46634d8-fz4x 174m
1593180083 udp-servergptsp Ready 35.247.4.172 5555 gke-e2e-test-cluster-default-d46634d8-tdsl 172m
1593180083 udp-serverhtfx5 Ready 35.230.49.39 5555 gke-e2e-test-cluster-default-d46634d8-5qvv 172m
1593180083 udp-serverk5z8l Ready 34.83.149.154 5555 gke-e2e-test-cluster-default-d46634d8-g2f5 172m
1593180083 udp-serverp9p88 Ready 104.198.11.54 5555 gke-e2e-test-cluster-default-d46634d8-s24f 173m
1593180083 udp-serverqsdt7 Ready 35.247.81.255 7064 gke-e2e-test-cluster-default-d46634d8-dwmn 174m
1593180083 udp-serverswsjq Ready 35.247.81.255 7847 gke-e2e-test-cluster-default-d46634d8-dwmn 174m
1593180083 udp-servert7kgr Ready 35.247.81.255 7109 gke-e2e-test-cluster-default-d46634d8-dwmn 174m
1593180083 udp-servervcfsl Ready 35.203.183.113 5555 gke-e2e-test-cluster-default-d46634d8-01t8 173m
1593180083 udp-serverzxdnr Ready 35.247.81.255 5555 gke-e2e-test-cluster-default-d46634d8-dwmn 173m
1593181156 simple-fleet-b8xjk-hrl74-md94b Scheduled 35.230.49.39 7955 gke-e2e-test-cluster-default-d46634d8-5qvv 11s
1593181156 simple-fleet-b8xjk-hrl74-pxbk8 Scheduled 35.230.49.39 7575 gke-e2e-test-cluster-default-d46634d8-5qvv 9s
1593181156 simple-fleet-b8xjk-hrl74-xpncc Scheduled 35.230.49.39 7588 gke-e2e-test-cluster-default-d46634d8-5qvv 9s
1593181156 simple-fleet-fs582-phqfh-25hlv Allocated 35.230.49.39 7019 gke-e2e-test-cluster-default-d46634d8-5qvv 157m
1593181156 simple-fleet-fs582-phqfh-6l2sd Ready 35.230.49.39 7200 gke-e2e-test-cluster-default-d46634d8-5qvv 157m
1593181156 simple-fleet-fs582-phqfh-m48pf Ready 35.230.49.39 7901 gke-e2e-test-cluster-default-d46634d8-5qvv 157m
1593181156 simple-fleet-kh9cq-blb8z-6nrpc Ready 35.247.4.172 0 gke-e2e-test-cluster-default-d46634d8-tdsl 155m
1593181156 simple-fleet-kh9cq-blb8z-pxmkt Ready 35.247.4.172 0 gke-e2e-test-cluster-default-d46634d8-tdsl 155m
1593181156 simple-fleet-kh9cq-blb8z-zwx4b Ready 35.230.49.39 0 gke-e2e-test-cluster-default-d46634d8-5qvv 155m
1593181156 udp-server2j65m Unhealthy 156m
1593181156 udp-server6hxr6 Unhealthy 35.230.49.39 7135 gke-e2e-test-cluster-default-d46634d8-5qvv 156m
1593181156 udp-server9svnl Unhealthy 151m
1593181156 udp-serverc9v87 Unhealthy 104.198.11.54 7309 gke-e2e-test-cluster-default-d46634d8-s24f 156m
1593181156 udp-servergxgfp Ready 35.230.49.39 7915 gke-e2e-test-cluster-default-d46634d8-5qvv 156m
1593181156 udp-serverh24jr Ready 104.198.11.54 7254 gke-e2e-test-cluster-default-d46634d8-s24f 156m
1593181156 udp-serverx7vtt Ready 104.198.11.54 7375 gke-e2e-test-cluster-default-d46634d8-s24f 156m
```
- Others:
|
1.0
|
|
test
|
| 1
|
253,232
| 21,664,515,892
|
IssuesEvent
|
2022-05-07 01:37:03
|
ossf/scorecard-action
|
https://api.github.com/repos/ossf/scorecard-action
|
closed
|
Failed to run e2e test-organization-ls/scorecard-action-private-repo-tests
|
e2e automated-tests
|
Repo: https://github.com/test-organization-ls/scorecard-action-private-repo-tests/tree/main
Run: https://github.com/test-organization-ls/scorecard-action-private-repo-tests/actions/runs/2279452474
Workflow name: scorecard-priavte-repo
Workflow file: https://github.com/test-organization-ls/scorecard-action-private-repo-tests/tree/main/.github/workflows/scorecard-main.yml
Trigger: schedule
Branch: main
Date: Fri May 6 03:01:43 UTC 2022
|
1.0
|
|
test
|
| 1
|
294,810
| 25,407,376,345
|
IssuesEvent
|
2022-11-22 16:13:07
|
eclipse-openj9/openj9
|
https://api.github.com/repos/eclipse-openj9/openj9
|
closed
|
jdk_security4_0_FAILED sun/security/krb5/auto/ReplayCacheTestProc.java Exception at ReplayCacheTestProc.main0
|
test failure
|
Failure link
------------
From [an internal build](https://hyc-runtimes-jenkins.swg-devops.com/job/Test_openjdk11_j9_extended.openjdk_s390x_linux/103/)(`rhel7s390x-3-8`):
```
java version "11.0.18" 2023-01-17
IBM Semeru Runtime Certified Edition 11.0.18+3 (build 11.0.18+3)
Eclipse OpenJ9 VM 11.0.18+3 (build master-5e4baa709, JRE 11 Linux s390x-64-Bit Compressed References 20221118_420 (JIT enabled, AOT enabled)
OpenJ9 - 5e4baa709
OMR - fe4c3b9b5
JCL - 8be395e14c based on jdk-11.0.18+3)
```
[Rerun in Grinder](https://hyc-runtimes-jenkins.swg-devops.com/job/Grinder/parambuild/?SDK_RESOURCE=customized&TARGET=jdk_security4_0&TEST_FLAG=&UPSTREAM_TEST_JOB_NAME=&DOCKER_REQUIRED=false&ACTIVE_NODE_TIMEOUT=0&VENDOR_TEST_DIRS=&EXTRA_DOCKER_ARGS=&TKG_OWNER_BRANCH=adoptium%3Amaster&OPENJ9_SYSTEMTEST_OWNER_BRANCH=eclipse%3Amaster&PLATFORM=s390x_linux&GENERATE_JOBS=true&KEEP_REPORTDIR=true&PERSONAL_BUILD=false&ADOPTOPENJDK_REPO=https%3A%2F%2Fgithub.com%2Fadoptium%2Faqa-tests.git&DOCKER_REGISTRY_URL_CREDENTIAL_ID=&LABEL=&EXTRA_OPTIONS=&CUSTOMIZED_SDK_URL=+https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2Fbuild-scripts%2Fjobs%2Fjdk11u%2Fjdk11u-linux-s390x-openj9-IBM%2F420%2Fibm-semeru-certified-jre_s390x_linux_JDK11U_2022-11-19-02-01.tar.gz+https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2Fbuild-scripts%2Fjobs%2Fjdk11u%2Fjdk11u-linux-s390x-openj9-IBM%2F420%2Fibm-semeru-certified-testimage_s390x_linux_JDK11U_2022-11-19-02-01.tar.gz+https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2Fbuild-scripts%2Fjobs%2Fjdk11u%2Fjdk11u-linux-s390x-openj9-IBM%2F420%2Fibm-semeru-certified-debugimage_s390x_linux_JDK11U_2022-11-19-02-01.tar.gz+https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2Fbuild-scripts%2Fjobs%2Fjdk11u%2Fjdk11u-linux-s390x-openj9-IBM%2F420%2Fibm-semeru-certified-jdk_s390x_linux_JDK11U_2022-11-19-02-01.tar.gz&BUILD_IDENTIFIER=&ADOPTOPENJDK_BRANCH=master&LIGHT_WEIGHT_CHECKOUT=false&USE_JRE=false&ARTIFACTORY_SERVER=na.artifactory.swg-devops&KEEP_WORKSPACE=false&USER_CREDENTIALS_ID=83181e25-eea4-4f55-8b3e-e79615733226&JDK_VERSION=11&DOCKER_REGISTRY_URL=&ITERATIONS=1&VENDOR_TEST_REPOS=&JDK_REPO=git%40github.com%3Aibmruntimes%2Fopenj9-openjdk-jdk11&RELEASE_TAG=&OPENJ9_BRANCH=master&OPENJ9_SHA=&JCK_GIT_REPO=&VENDOR_TEST_BRANCHES=&OPENJ9_REPO=https%3A%2F%2Fgithub.com%2Feclipse-openj9%2Fopenj9.git&UPSTREAM_JOB_NAME=&CLOUD_PROVIDER=&CUSTOM_TARGET=&VENDOR_TEST_SHAS=&JDK_BRANCH=openj9&LABEL_ADDITION=&ARTIFACTORY_REPO=&ARTIFACTORY_ROOT_DIR=&UPSTREAM_TEST_JOB_NUMBER=&DOCKERIMAGE_TAG=&JDK_IMPL=openj9&TEST_TIME=&SSH_AGENT_CREDENTIAL=83181e25-eea4-4f55-8b3e-e79615733226&AUTO_DETECT=true&SLACK_CHANNEL=&DYNAMIC_COMPILE=false&ADOPTOPENJDK_SYSTEMTEST_OWNER_BRANCH=adoptium%3Amaster&CUSTOMIZED_SDK_URL_CREDENTIAL_ID=4e18ffe7-b1b1-4272-9979-99769b68bcc2&ARCHIVE_TEST_RESULTS=false&NUM_MACHINES=&OPENJDK_SHA=&TRSS_URL=http%3A%2F%2Ftrss1.fyre.ibm.com&USE_TESTENV_PROPERTIES=false&BUILD_LIST=openjdk&UPSTREAM_JOB_NUMBER=&STF_OWNER_BRANCH=adoptium%3Amaster&TIME_LIMIT=20&JVM_OPTIONS=&PARALLEL=None) - Change TARGET to run only the failed test targets.
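For a local rerun of just the failing test, a sketch assuming the aqa-tests TKG `_jdk_custom` flow (paths are illustrative):
```shell
# from a fresh aqa-tests checkout, with the SDK under test on disk
export TEST_JDK_HOME=/path/to/jdk-11.0.18+3
./get.sh
cd TKG
export BUILD_LIST=openjdk
export JDK_CUSTOM_TARGET=sun/security/krb5/auto/ReplayCacheTestProc.java
make compile
make _jdk_custom
```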
Optional info
-------------
Failure output (captured from console output)
---------------------------------------------
```
[2022-11-19T05:40:53.341Z] variation: Mode150
[2022-11-19T05:40:53.341Z] JVM_OPTIONS: -XX:+UseCompressedOops
[2022-11-19T05:44:17.148Z] TEST: sun/security/krb5/auto/ReplayCacheTestProc.java
[2022-11-19T05:44:17.154Z] -----------------------------------------------
[2022-11-19T05:44:17.154Z] >>>>> UDP packet received
[2022-11-19T05:44:17.154Z] RABBIT.HOLE> client4@RABBIT.HOLE sends AS-REQ for krbtgt/RABBIT.HOLE@RABBIT.HOLE, KDCOptions:
[2022-11-19T05:44:17.154Z] KrbException: Additional pre-authentication required (25)
[2022-11-19T05:44:17.154Z] at KDC.processAsReq(KDC.java:1296)
[2022-11-19T05:44:17.154Z] at KDC.processMessage(KDC.java:774)
[2022-11-19T05:44:17.154Z] at KDC$1.run(KDC.java:1526)
[2022-11-19T05:44:17.154Z] Error 25 Additional pre-authentication required
[2022-11-19T05:44:17.154Z] >>>>> UDP request honored
[2022-11-19T05:44:17.182Z] STDERR:
[2022-11-19T05:44:17.182Z] Nsanity started
[2022-11-19T05:44:17.182Z] Na started
[2022-11-19T05:44:17.182Z] Nb started
[2022-11-19T05:44:17.182Z] java.lang.Exception
[2022-11-19T05:44:17.182Z] at ReplayCacheTestProc.main0(ReplayCacheTestProc.java:279)
[2022-11-19T05:44:17.182Z] at ReplayCacheTestProc.main(ReplayCacheTestProc.java:326)
[2022-11-19T05:44:17.182Z] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[2022-11-19T05:44:17.182Z] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[2022-11-19T05:44:17.182Z] at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[2022-11-19T05:44:17.182Z] at java.base/java.lang.reflect.Method.invoke(Method.java:566)
[2022-11-19T05:44:17.182Z] at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127)
[2022-11-19T05:44:17.182Z] at java.base/java.lang.Thread.run(Thread.java:839)
[2022-11-19T05:44:17.182Z] java.lang.Exception
[2022-11-19T05:44:17.182Z] at ReplayCacheTestProc.main0(ReplayCacheTestProc.java:279)
[2022-11-19T05:44:17.182Z] at ReplayCacheTestProc.main(ReplayCacheTestProc.java:326)
[2022-11-19T05:44:17.182Z] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[2022-11-19T05:44:17.182Z] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[2022-11-19T05:44:17.182Z] at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[2022-11-19T05:44:17.182Z] at java.base/java.lang.reflect.Method.invoke(Method.java:566)
[2022-11-19T05:44:17.182Z] at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127)
[2022-11-19T05:44:17.182Z] at java.base/java.lang.Thread.run(Thread.java:839)
[2022-11-19T05:44:17.182Z]
[2022-11-19T05:44:17.182Z] JavaTest Message: Test threw exception: java.lang.Exception
[2022-11-19T05:44:17.182Z] TEST RESULT: Failed. Execution failed: `main' threw exception: java.lang.Exception
[2022-11-19T05:44:17.182Z] --------------------------------------------------
[2022-11-19T05:45:18.134Z] Test results: passed: 137; failed: 1
[2022-11-19T05:45:23.885Z] Report written to /home/jenkins/workspace/Test_openjdk11_j9_extended.openjdk_s390x_linux/aqa-tests/TKG/output_1668827831241/jdk_security4_0/report/html/report.html
[2022-11-19T05:45:23.885Z] Results written to /home/jenkins/workspace/Test_openjdk11_j9_extended.openjdk_s390x_linux/aqa-tests/TKG/output_1668827831241/jdk_security4_0/work
[2022-11-19T05:45:23.885Z] Error: Some tests failed or other problems occurred.
[2022-11-19T05:45:23.885Z]
[2022-11-19T05:45:23.885Z] jdk_security4_0_FAILED
```
[50x internal grinder](https://hyc-runtimes-jenkins.swg-devops.com/job/Grinder/29682/)
|
1.0
|
jdk_security4_0_FAILED sun/security/krb5/auto/ReplayCacheTestProc.java Exception at ReplayCacheTestProc.main0 - Failure link
------------
From [an internal build](https://hyc-runtimes-jenkins.swg-devops.com/job/Test_openjdk11_j9_extended.openjdk_s390x_linux/103/)(`rhel7s390x-3-8`):
```
java version "11.0.18" 2023-01-17
IBM Semeru Runtime Certified Edition 11.0.18+3 (build 11.0.18+3)
Eclipse OpenJ9 VM 11.0.18+3 (build master-5e4baa709, JRE 11 Linux s390x-64-Bit Compressed References 20221118_420 (JIT enabled, AOT enabled)
OpenJ9 - 5e4baa709
OMR - fe4c3b9b5
JCL - 8be395e14c based on jdk-11.0.18+3)
```
[Rerun in Grinder](https://hyc-runtimes-jenkins.swg-devops.com/job/Grinder/parambuild/?SDK_RESOURCE=customized&TARGET=jdk_security4_0&TEST_FLAG=&UPSTREAM_TEST_JOB_NAME=&DOCKER_REQUIRED=false&ACTIVE_NODE_TIMEOUT=0&VENDOR_TEST_DIRS=&EXTRA_DOCKER_ARGS=&TKG_OWNER_BRANCH=adoptium%3Amaster&OPENJ9_SYSTEMTEST_OWNER_BRANCH=eclipse%3Amaster&PLATFORM=s390x_linux&GENERATE_JOBS=true&KEEP_REPORTDIR=true&PERSONAL_BUILD=false&ADOPTOPENJDK_REPO=https%3A%2F%2Fgithub.com%2Fadoptium%2Faqa-tests.git&DOCKER_REGISTRY_URL_CREDENTIAL_ID=&LABEL=&EXTRA_OPTIONS=&CUSTOMIZED_SDK_URL=+https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2Fbuild-scripts%2Fjobs%2Fjdk11u%2Fjdk11u-linux-s390x-openj9-IBM%2F420%2Fibm-semeru-certified-jre_s390x_linux_JDK11U_2022-11-19-02-01.tar.gz+https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2Fbuild-scripts%2Fjobs%2Fjdk11u%2Fjdk11u-linux-s390x-openj9-IBM%2F420%2Fibm-semeru-certified-testimage_s390x_linux_JDK11U_2022-11-19-02-01.tar.gz+https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2Fbuild-scripts%2Fjobs%2Fjdk11u%2Fjdk11u-linux-s390x-openj9-IBM%2F420%2Fibm-semeru-certified-debugimage_s390x_linux_JDK11U_2022-11-19-02-01.tar.gz+https%3A%2F%2Fna.artifactory.swg-devops.com%2Fartifactory%2Fsys-rt-generic-local%2Fhyc-runtimes-jenkins.swg-devops.com%2Fbuild-scripts%2Fjobs%2Fjdk11u%2Fjdk11u-linux-s390x-openj9-IBM%2F420%2Fibm-semeru-certified-jdk_s390x_linux_JDK11U_2022-11-19-02-01.tar.gz&BUILD_IDENTIFIER=&ADOPTOPENJDK_BRANCH=master&LIGHT_WEIGHT_CHECKOUT=false&USE_JRE=false&ARTIFACTORY_SERVER=na.artifactory.swg-devops&KEEP_WORKSPACE=false&USER_CREDENTIALS_ID=83181e25-eea4-4f55-8b3e-e79615733226&JDK_VERSION=11&DOCKER_REGISTRY_URL=&ITERATIONS=1&VENDOR_TEST_REPOS=&JDK_REPO=git%40github.com%3Aibmruntimes%2Fopenj9-openjdk-jdk11&RELEASE_TAG=&OPENJ9_BRANCH=master&OPENJ9_SHA=&JCK_GIT_REPO=&VENDOR_TEST_BRANCHES=&OPENJ9_REPO=https%3A%2F%2Fgithub.com%2Feclipse-openj9%2Fopenj9.git&UPSTREAM_JOB_NAME=&CLOUD_PROVIDER=&CUSTOM_TARGET=&VENDOR_TEST_SHAS=&JDK_BRANCH=openj9&LABEL_ADDITION=&ARTIFACTORY_REPO=&ARTIFACTORY_ROOT_DIR=&UPSTREAM_TEST_JOB_NUMBER=&DOCKERIMAGE_TAG=&JDK_IMPL=openj9&TEST_TIME=&SSH_AGENT_CREDENTIAL=83181e25-eea4-4f55-8b3e-e79615733226&AUTO_DETECT=true&SLACK_CHANNEL=&DYNAMIC_COMPILE=false&ADOPTOPENJDK_SYSTEMTEST_OWNER_BRANCH=adoptium%3Amaster&CUSTOMIZED_SDK_URL_CREDENTIAL_ID=4e18ffe7-b1b1-4272-9979-99769b68bcc2&ARCHIVE_TEST_RESULTS=false&NUM_MACHINES=&OPENJDK_SHA=&TRSS_URL=http%3A%2F%2Ftrss1.fyre.ibm.com&USE_TESTENV_PROPERTIES=false&BUILD_LIST=openjdk&UPSTREAM_JOB_NUMBER=&STF_OWNER_BRANCH=adoptium%3Amaster&TIME_LIMIT=20&JVM_OPTIONS=&PARALLEL=None) - Change TARGET to run only the failed test targets.
Optional info
-------------
Failure output (captured from console output)
---------------------------------------------
```
[2022-11-19T05:40:53.341Z] variation: Mode150
[2022-11-19T05:40:53.341Z] JVM_OPTIONS: -XX:+UseCompressedOops
[2022-11-19T05:44:17.148Z] TEST: sun/security/krb5/auto/ReplayCacheTestProc.java
[2022-11-19T05:44:17.154Z] -----------------------------------------------
[2022-11-19T05:44:17.154Z] >>>>> UDP packet received
[2022-11-19T05:44:17.154Z] RABBIT.HOLE> client4@RABBIT.HOLE sends AS-REQ for krbtgt/RABBIT.HOLE@RABBIT.HOLE, KDCOptions:
[2022-11-19T05:44:17.154Z] KrbException: Additional pre-authentication required (25)
[2022-11-19T05:44:17.154Z] at KDC.processAsReq(KDC.java:1296)
[2022-11-19T05:44:17.154Z] at KDC.processMessage(KDC.java:774)
[2022-11-19T05:44:17.154Z] at KDC$1.run(KDC.java:1526)
[2022-11-19T05:44:17.154Z] Error 25 Additional pre-authentication required
[2022-11-19T05:44:17.154Z] >>>>> UDP request honored
[2022-11-19T05:44:17.182Z] STDERR:
[2022-11-19T05:44:17.182Z] Nsanity started
[2022-11-19T05:44:17.182Z] Na started
[2022-11-19T05:44:17.182Z] Nb started
[2022-11-19T05:44:17.182Z] java.lang.Exception
[2022-11-19T05:44:17.182Z] at ReplayCacheTestProc.main0(ReplayCacheTestProc.java:279)
[2022-11-19T05:44:17.182Z] at ReplayCacheTestProc.main(ReplayCacheTestProc.java:326)
[2022-11-19T05:44:17.182Z] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[2022-11-19T05:44:17.182Z] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[2022-11-19T05:44:17.182Z] at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[2022-11-19T05:44:17.182Z] at java.base/java.lang.reflect.Method.invoke(Method.java:566)
[2022-11-19T05:44:17.182Z] at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127)
[2022-11-19T05:44:17.182Z] at java.base/java.lang.Thread.run(Thread.java:839)
[2022-11-19T05:44:17.182Z] java.lang.Exception
[2022-11-19T05:44:17.182Z] at ReplayCacheTestProc.main0(ReplayCacheTestProc.java:279)
[2022-11-19T05:44:17.182Z] at ReplayCacheTestProc.main(ReplayCacheTestProc.java:326)
[2022-11-19T05:44:17.182Z] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[2022-11-19T05:44:17.182Z] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[2022-11-19T05:44:17.182Z] at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[2022-11-19T05:44:17.182Z] at java.base/java.lang.reflect.Method.invoke(Method.java:566)
[2022-11-19T05:44:17.182Z] at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127)
[2022-11-19T05:44:17.182Z] at java.base/java.lang.Thread.run(Thread.java:839)
[2022-11-19T05:44:17.182Z]
[2022-11-19T05:44:17.182Z] JavaTest Message: Test threw exception: java.lang.Exception
[2022-11-19T05:44:17.182Z] TEST RESULT: Failed. Execution failed: `main' threw exception: java.lang.Exception
[2022-11-19T05:44:17.182Z] --------------------------------------------------
[2022-11-19T05:45:18.134Z] Test results: passed: 137; failed: 1
[2022-11-19T05:45:23.885Z] Report written to /home/jenkins/workspace/Test_openjdk11_j9_extended.openjdk_s390x_linux/aqa-tests/TKG/output_1668827831241/jdk_security4_0/report/html/report.html
[2022-11-19T05:45:23.885Z] Results written to /home/jenkins/workspace/Test_openjdk11_j9_extended.openjdk_s390x_linux/aqa-tests/TKG/output_1668827831241/jdk_security4_0/work
[2022-11-19T05:45:23.885Z] Error: Some tests failed or other problems occurred.
[2022-11-19T05:45:23.885Z]
[2022-11-19T05:45:23.885Z] jdk_security4_0_FAILED
```
[50x internal grinder](https://hyc-runtimes-jenkins.swg-devops.com/job/Grinder/29682/)
|
test
|
jdk failed sun security auto replaycachetestproc java exception at replaycachetestproc failure link from java version ibm semeru runtime certified edition build eclipse vm build master jre linux bit compressed references jit enabled aot enabled omr jcl based on jdk change target to run only the failed test targets optional info failure output captured from console output variation jvm options xx usecompressedoops test sun security auto replaycachetestproc java udp packet received rabbit hole rabbit hole sends as req for krbtgt rabbit hole rabbit hole kdcoptions krbexception additional pre authentication required at kdc processasreq kdc java at kdc processmessage kdc java at kdc run kdc java error additional pre authentication required udp request honored stderr nsanity started na started nb started java lang exception at replaycachetestproc replaycachetestproc java at replaycachetestproc main replaycachetestproc java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at com sun javatest regtest agent mainwrapper mainthread run mainwrapper java at java base java lang thread run thread java java lang exception at replaycachetestproc replaycachetestproc java at replaycachetestproc main replaycachetestproc java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at com sun javatest regtest agent mainwrapper mainthread run mainwrapper java at java base java lang thread run thread java javatest message test threw exception java lang exception test result failed execution failed main threw exception java lang exception test results passed failed report written to home jenkins workspace test extended openjdk linux aqa tests tkg output jdk report html report html results written to home jenkins workspace test extended openjdk linux aqa tests tkg output jdk work error some tests failed or other problems occurred jdk failed
| 1
|
320,982
| 27,496,678,177
|
IssuesEvent
|
2023-03-05 08:01:03
|
road86/bahis-serve
|
https://api.github.com/repos/road86/bahis-serve
|
closed
|
Test add-to-project workflow works with new ROBOT_TOKEN secret
|
test
|
### Test Description
I have replaced the default `secrets.GITHUB_TOKEN` with a new fine-grained personal access token from @road86-robot. This should now mean issues and PRs get added to the project board automatically.
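For reference, a minimal sketch of what such a workflow typically looks like (only the `ROBOT_TOKEN` secret name comes from this issue; the action version, project URL, and triggers are illustrative assumptions):
```yaml
# Hypothetical add-to-project workflow -- everything except the
# ROBOT_TOKEN secret name is illustrative, not taken from the repo.
name: Add issues and PRs to project
on:
  issues:
    types: [opened]
  pull_request:
    types: [opened]

jobs:
  add-to-project:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/add-to-project@v0.5.0
        with:
          project-url: https://github.com/orgs/<org>/projects/<number>
          github-token: ${{ secrets.ROBOT_TOKEN }}
```
The default `GITHUB_TOKEN` generally cannot write to organization-level projects, which is why a fine-grained PAT is needed here.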
### To Manually Test
See what happens with this issue.
### Related Bugs
_No response_
### Additional Context
_No response_
|
1.0
|
Test add-to-project workflow works with new ROBOT_TOKEN secret - ### Test Description
I have replaced the default `secrets.GITHUB_TOKEN` with a new fine-grained personal access token from @road86-robot. This should now mean issues and PRs get added to the project board automatically.
### To Manually Test
See what happens with this issue.
### Related Bugs
_No response_
### Additional Context
_No response_
|
test
|
test add to project workflow works with new robot token secret test description i have replaced the default secrets github token with a new fine grained personal access token from robot this should now mean issues and prs get added to the project board automatically to manually test see what happens with this issue related bugs no response additional context no response
| 1
|
136,493
| 11,049,294,896
|
IssuesEvent
|
2019-12-09 23:16:30
|
rook/rook
|
https://api.github.com/repos/rook/rook
|
closed
|
NFS integration tests failing in master
|
bug test
|
<!-- **Are you in the right place?**
1. For issues or feature requests, please create an issue in this repository.
2. For general technical and non-technical questions, we are happy to help you on our [Rook.io Slack](https://slack.rook.io/).
3. Did you already search the existing open issues for anything similar? -->
**Is this a bug report or feature request?**
* Bug Report
**Deviation from expected behavior:**
The NFS integration tests are failing consistently in master since *.
```
--- FAIL: TestNfsSuite (863.80s)
--- FAIL: TestNfsSuite/TestNfsServerInstallation (814.52s)
read_write.go:45:
Error Trace: read_write.go:45
nfs_test.go:114
Error: Should be true
Test: TestNfsSuite/TestNfsServerInstallation
Messages: Make sure there are two read-write-test pods present in Running state
require.go:794:
Error Trace: nfs_test.go:145
nfs_test.go:116
Error: Received unexpected error:
unable to write data to pod -- : Failed to run: kubectl [exec read-write-test-5755fb789d-9z9q2 -- cat /mnt/data] : Failed to complete 'kubectl': exit status 1.
Test: TestNfsSuite/TestNfsServerInstallation
```
**Expected behavior:**
Integration tests should pass
**How to reproduce it (minimal and precise):**
<!-- Please let us know any circumstances for reproduction of your bug. -->
Look at the CI logs starting with master build [#1577](https://jenkins.rook.io/blue/organizations/jenkins/rook%2Frook/detail/master/1577/pipeline/56)
This error is related:
```
2019-12-05 03:51:18.195066 I | exec: Running command: kubectl get pod -n default -l app=read-write-test -o jsonpath={.items[*].metadata.name}
2019-12-05 03:51:18.264858 I | exec: Running command: kubectl exec read-write-test-5755fb789d-9z9q2 -- cat /mnt/data
2019-12-05 03:51:18.364925 E | utils: Failed to execute: kubectl [exec read-write-test-5755fb789d-9z9q2 -- cat /mnt/data] : Failed to complete 'kubectl': exit status 1. . error: unable to upgrade connection: container not found ("alpine")
2019-12-05 03:51:18.364956 I | integrationTest: nfs volume read exited, err: unable to write data to pod -- : Failed to run: kubectl [exec read-write-test-5755fb789d-9z9q2 -- cat /mnt/data] : Failed to complete 'kubectl': exit status 1. . result:
2019-12-05 03:51:18.364963 W | integrationTest: nfs volume read failed, will try again
```
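A possible workaround sketch (not from the report): make the test wait for pod readiness before exec'ing, instead of racing the container start.
```bash
# Sketch: ensure the read-write-test pods are actually Ready before
# exec'ing into them; the name lookup mirrors the command in the log above.
kubectl wait --for=condition=Ready pod -l app=read-write-test -n default --timeout=120s
POD=$(kubectl get pod -n default -l app=read-write-test -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- cat /mnt/data
```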
**File(s) to submit**:
* Cluster CR (custom resource), typically called `cluster.yaml`, if necessary
* Operator's logs, if necessary
* Crashing pod(s) logs, if necessary
To get logs, use `kubectl -n <namespace> logs <pod name>`
When pasting logs, always surround them with backticks or use the `insert code` button from the Github UI.
Read [Github documentation if you need help](https://help.github.com/en/articles/creating-and-highlighting-code-blocks).
**Environment**:
* OS (e.g. from /etc/os-release):
* Kernel (e.g. `uname -a`):
* Cloud provider or hardware configuration:
* Rook version (use `rook version` inside of a Rook Pod):
* Storage backend version (e.g. for ceph do `ceph -v`):
* Kubernetes version (use `kubectl version`):
* Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
* Storage backend status (e.g. for Ceph use `ceph health` in the [Rook Ceph toolbox](https://rook.io/docs/rook/master/ceph-toolbox.html)):
|
1.0
|
NFS integration tests failing in master - <!-- **Are you in the right place?**
1. For issues or feature requests, please create an issue in this repository.
2. For general technical and non-technical questions, we are happy to help you on our [Rook.io Slack](https://slack.rook.io/).
3. Did you already search the existing open issues for anything similar? -->
**Is this a bug report or feature request?**
* Bug Report
**Deviation from expected behavior:**
The NFS integration tests are failing consistently in master since *.
```
--- FAIL: TestNfsSuite (863.80s)
--- FAIL: TestNfsSuite/TestNfsServerInstallation (814.52s)
read_write.go:45:
Error Trace: read_write.go:45
nfs_test.go:114
Error: Should be true
Test: TestNfsSuite/TestNfsServerInstallation
Messages: Make sure there are two read-write-test pods present in Running state
require.go:794:
Error Trace: nfs_test.go:145
nfs_test.go:116
Error: Received unexpected error:
unable to write data to pod -- : Failed to run: kubectl [exec read-write-test-5755fb789d-9z9q2 -- cat /mnt/data] : Failed to complete 'kubectl': exit status 1.
Test: TestNfsSuite/TestNfsServerInstallation
```
**Expected behavior:**
Integration tests should pass
**How to reproduce it (minimal and precise):**
<!-- Please let us know any circumstances for reproduction of your bug. -->
Look at the CI logs starting with master build [#1577](https://jenkins.rook.io/blue/organizations/jenkins/rook%2Frook/detail/master/1577/pipeline/56)
This error is related:
```
2019-12-05 03:51:18.195066 I | exec: Running command: kubectl get pod -n default -l app=read-write-test -o jsonpath={.items[*].metadata.name}
2019-12-05 03:51:18.264858 I | exec: Running command: kubectl exec read-write-test-5755fb789d-9z9q2 -- cat /mnt/data
2019-12-05 03:51:18.364925 E | utils: Failed to execute: kubectl [exec read-write-test-5755fb789d-9z9q2 -- cat /mnt/data] : Failed to complete 'kubectl': exit status 1. . error: unable to upgrade connection: container not found ("alpine")
2019-12-05 03:51:18.364956 I | integrationTest: nfs volume read exited, err: unable to write data to pod -- : Failed to run: kubectl [exec read-write-test-5755fb789d-9z9q2 -- cat /mnt/data] : Failed to complete 'kubectl': exit status 1. . result:
2019-12-05 03:51:18.364963 W | integrationTest: nfs volume read failed, will try again
```
**File(s) to submit**:
* Cluster CR (custom resource), typically called `cluster.yaml`, if necessary
* Operator's logs, if necessary
* Crashing pod(s) logs, if necessary
To get logs, use `kubectl -n <namespace> logs <pod name>`
When pasting logs, always surround them with backticks or use the `insert code` button from the Github UI.
Read [Github documentation if you need help](https://help.github.com/en/articles/creating-and-highlighting-code-blocks).
**Environment**:
* OS (e.g. from /etc/os-release):
* Kernel (e.g. `uname -a`):
* Cloud provider or hardware configuration:
* Rook version (use `rook version` inside of a Rook Pod):
* Storage backend version (e.g. for ceph do `ceph -v`):
* Kubernetes version (use `kubectl version`):
* Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
* Storage backend status (e.g. for Ceph use `ceph health` in the [Rook Ceph toolbox](https://rook.io/docs/rook/master/ceph-toolbox.html)):
|
test
|
nfs integration tests failing in master are you in the right place for issues or feature requests please create an issue in this repository for general technical and non technical questions we are happy to help you on our did you already search the existing open issues for anything similar is this a bug report or feature request bug report deviation from expected behavior the nfs integration tests are failing consistently in master since fail testnfssuite fail testnfssuite testnfsserverinstallation read write go error trace read write go nfs test go error should be true test testnfssuite testnfsserverinstallation messages make sure there are two read write test pods present in running state require go error trace nfs test go nfs test go error received unexpected error unable to write data to pod failed to run kubectl failed to complete kubectl exit status test testnfssuite testnfsserverinstallation expected behavior integration tests should pass how to reproduce it minimal and precise look at the ci logs starting with master build this error is related i exec running command kubectl get pod n default l app read write test o jsonpath items metadata name i exec running command kubectl exec read write test cat mnt data e utils failed to execute kubectl failed to complete kubectl exit status error unable to upgrade connection container not found alpine i integrationtest nfs volume read exited err unable to write data to pod failed to run kubectl failed to complete kubectl exit status result w integrationtest nfs volume read failed will try again file s to submit cluster cr custom resource typically called cluster yaml if necessary operator s logs if necessary crashing pod s logs if necessary to get logs use kubectl n logs when pasting logs always surround them with backticks or use the insert code button from the github ui read environment os e g from etc os release kernel e g uname a cloud provider or hardware configuration rook version use rook version inside of a rook pod storage backend version e g for ceph do ceph v kubernetes version use kubectl version kubernetes cluster type e g tectonic gke openshift storage backend status e g for ceph use ceph health in the
| 1
|
13,865
| 3,366,223,179
|
IssuesEvent
|
2015-11-21 05:32:39
|
arecker/bennedetto
|
https://api.github.com/repos/arecker/bennedetto
|
closed
|
Test coverage for rates model and amount_per_day
|
testing up for grabs
|
This sounds like a very straightforward unit test. The `Rate` model should create its own `amount_per_day` value. Write a test that asserts this value is correct each time.
|
1.0
|
Test coverage for rates model and amount_per_day - This sounds like a very straightforward unit test. The `Rate` model should create its own `amount_per_day` value. Write a test that asserts this value is correct each time.
|
test
|
test coverage for rates model and amount per day this sounds like a very straight forward unit tests the rate model should create its own amount per day value write a tests that asserts this value is correct each time
| 1
|
86,515
| 17,017,912,252
|
IssuesEvent
|
2021-07-02 14:30:14
|
cython/cython
|
https://api.github.com/repos/cython/cython
|
closed
|
[BUG] Infinite loop/crash on __add__
|
Code Generation defect
|
<!--
**PLEASE READ THIS FIRST:**
- Do not use the bug and feature tracker for support requests. Use the `cython-users` mailing list instead.
- Did you search for similar issues already? Please do, it helps to save us precious time that we otherwise could not invest into development.
- Did you try the latest master branch or pre-release? It might already have what you want to report. Also see the [Changelog](https://github.com/cython/cython/blob/master/CHANGES.rst) regarding recent changes.
-->
**Describe the bug**
It's possible to get an infinite loop for an `__add__` operator (and presumably other operators) that returns `NotImplemented` when used with a derived class.
**To Reproduce**
```cython
cdef class Base:
def __add__(self, other):
return NotImplemented
class Derived(Base):
pass
```
and to test:
```
>>> import hmmm
>>> x = hmmm.Base()
>>> y = hmmm.Derived()
>>> y+x # works OK as expected
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'Derived' and 'hmmm.Base'
>>> x+y
```
For `x+y` I get an infinite loop but [Pandas reports segmentation fault](https://github.com/pandas-dev/pandas/issues/34213#issuecomment-842694795) - probably just down to whether the recursion happens to be tail-call optimized.
**Expected behavior**
Should just be `NotImplementedError`
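For reference, a pure-Python sketch of the expected semantics (plain CPython, not the Cython-generated path; note that in CPython the failure actually surfaces as a `TypeError`):
```python
# Plain CPython reference behavior (illustrative, not the Cython code path).
class Base:
    def __add__(self, other):
        return NotImplemented

class Derived(Base):
    pass

try:
    # Base.__add__ declines and no __radd__ exists anywhere,
    # so CPython raises TypeError instead of retrying the slot forever.
    Base() + Derived()
except TypeError as exc:
    print(exc)  # unsupported operand type(s) for +: 'Base' and 'Derived'
```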
**Environment (please complete the following information):**
- OS: Linux
- Python version 3.8.9
- Cython version 0e80efb82480b777057770ce2006a6b46ec46028 (close to current master)
**Additional context**
I think this is to do with the changes to support `__radd__` etc.. I think it's somewhere in:
https://github.com/cython/cython/blob/034fc26c1c25b5b69581464b73136038bb201ce4/Cython/Utility/ExtensionTypes.c#L383
However, I haven't traced the error through in great detail so not 100% sure exactly what's gone wrong.
|
1.0
|
[BUG] Infinite loop/crash on __add__ - <!--
**PLEASE READ THIS FIRST:**
- Do not use the bug and feature tracker for support requests. Use the `cython-users` mailing list instead.
- Did you search for similar issues already? Please do, it helps to save us precious time that we otherwise could not invest into development.
- Did you try the latest master branch or pre-release? It might already have what you want to report. Also see the [Changelog](https://github.com/cython/cython/blob/master/CHANGES.rst) regarding recent changes.
-->
**Describe the bug**
It's possible to get an infinite loop for an `__add__` operator (and presumably other operators) that returns `NotImplemented` when used with a derived class.
**To Reproduce**
```cython
cdef class Base:
def __add__(self, other):
return NotImplemented
class Derived(Base):
pass
```
and to test:
```
>>> import hmmm
>>> x = hmmm.Base()
>>> y = hmmm.Derived()
>>> y+x # works OK as expected
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'Derived' and 'hmmm.Base'
>>> x+y
```
For `x+y` I get an infinite loop but [Pandas reports segmentation fault](https://github.com/pandas-dev/pandas/issues/34213#issuecomment-842694795) - probably just down to whether the recursion happens to be tail-call optimized.
**Expected behavior**
Should just be `NotImplementedError`
**Environment (please complete the following information):**
- OS: Linux
- Python version 3.8.9
- Cython version 0e80efb82480b777057770ce2006a6b46ec46028 (close to current master)
**Additional context**
I think this is to do with the changes to support `__radd__` etc.. I think it's somewhere in:
https://github.com/cython/cython/blob/034fc26c1c25b5b69581464b73136038bb201ce4/Cython/Utility/ExtensionTypes.c#L383
However, I haven't traced the error through in great detail so not 100% sure exactly what's gone wrong.
|
non_test
|
infinite loop crash on add please read this first do not use the bug and feature tracker for support requests use the cython users mailing list instead did you search for similar issues already please do it helps to save us precious time that we otherwise could not invest into development did you try the latest master branch or pre release it might already have what you want to report also see the regarding recent changes describe the bug it s possible to get an infinite loop for an add and presumably other operators operator that returns notimplemented when used with a derived class to reproduce cython cdef class base def add self other return notimplemented class derived base pass and to test import hmmm x hmmm base y hmmm derived y x works ok as expected traceback most recent call last file line in typeerror unsupported operand type s for derived and hmmm base x y for x y i get an infinite loop but probably just to do with tail call recursion optimization or not expected behavior should just be notimplementederror environment please complete the following information os linux python version cython version close to current master additional context i think this is to do with the changes to support radd etc i think it s somewhere in however i haven t traced the error through in great detail so not sure exactly what s gone wrong
| 0
|
113,665
| 14,449,760,643
|
IssuesEvent
|
2020-12-08 08:37:29
|
sButtons/sbuttons
|
https://api.github.com/repos/sButtons/sbuttons
|
closed
|
Enhance sidebar behavior when reaching the footer
|
Hacktoberfest Priority: High design enhancement good first issue help wanted up-for-grabs website
|
The sidebar is currently hidden behind the footer; we need better behavior here to make the design look better.
|
1.0
|
Enhance sidebar behavior when reaching the footer - The sidebar is currently hidden behind the footer; we need better behavior here to make the design look better.
|
non_test
|
enhance sidebar behavior when reaching the footer the sidebar currently is hidden behind the footer we need a better behavior for this to make the design look better
| 0
|
349,992
| 31,845,550,961
|
IssuesEvent
|
2023-09-14 19:38:12
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
opened
|
Failing test: Jest Integration Tests.src/plugins/content_management/server/event_stream/es/integration_tests - EsEventStreamClient .filter() can filter results for multiple subjects
|
failed-test
|
A test failed on a tracked branch
```
Error: ES exited with code 1
at createCliError (/var/lib/buildkite-agent/builds/kb-n2-4-spot-fea803905140333a/elastic/kibana-on-merge/kibana/packages/kbn-es/src/errors.ts:14:24)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-fea803905140333a/elastic/kibana-on-merge/kibana/packages/kbn-es/src/cluster.ts:502:29
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-fea803905140333a/elastic/kibana-on-merge/kibana/packages/kbn-es/src/cluster.ts:205:7
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-fea803905140333a/elastic/kibana-on-merge/kibana/packages/kbn-tooling-log/src/tooling_log.ts:84:18
at Cluster.start (/var/lib/buildkite-agent/builds/kb-n2-4-spot-fea803905140333a/elastic/kibana-on-merge/kibana/packages/kbn-es/src/cluster.ts:202:5)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-fea803905140333a/elastic/kibana-on-merge/kibana/packages/kbn-test/src/es/test_es_cluster.ts:281:18
at TestCluster.start (/var/lib/buildkite-agent/builds/kb-n2-4-spot-fea803905140333a/elastic/kibana-on-merge/kibana/packages/kbn-test/src/es/test_es_cluster.ts:298:9)
at startES (/var/lib/buildkite-agent/builds/kb-n2-4-spot-fea803905140333a/elastic/kibana-on-merge/kibana/packages/core/test-helpers/core-test-helpers-kbn-server/src/create_root.ts:268:7)
at Object.<anonymous> (/var/lib/buildkite-agent/builds/kb-n2-4-spot-fea803905140333a/elastic/kibana-on-merge/kibana/src/plugins/content_management/server/event_stream/es/integration_tests/es_event_stream_client.test.ts:35:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/35493#018a9513-80fc-4dcd-b5c9-f28916457eed)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Integration Tests.src/plugins/content_management/server/event_stream/es/integration_tests","test.name":"EsEventStreamClient .filter() can filter results for multiple subjects","test.failCount":1}} -->
|
1.0
|
Failing test: Jest Integration Tests.src/plugins/content_management/server/event_stream/es/integration_tests - EsEventStreamClient .filter() can filter results for multiple subjects - A test failed on a tracked branch
```
Error: ES exited with code 1
at createCliError (/var/lib/buildkite-agent/builds/kb-n2-4-spot-fea803905140333a/elastic/kibana-on-merge/kibana/packages/kbn-es/src/errors.ts:14:24)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-fea803905140333a/elastic/kibana-on-merge/kibana/packages/kbn-es/src/cluster.ts:502:29
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-fea803905140333a/elastic/kibana-on-merge/kibana/packages/kbn-es/src/cluster.ts:205:7
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-fea803905140333a/elastic/kibana-on-merge/kibana/packages/kbn-tooling-log/src/tooling_log.ts:84:18
at Cluster.start (/var/lib/buildkite-agent/builds/kb-n2-4-spot-fea803905140333a/elastic/kibana-on-merge/kibana/packages/kbn-es/src/cluster.ts:202:5)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-fea803905140333a/elastic/kibana-on-merge/kibana/packages/kbn-test/src/es/test_es_cluster.ts:281:18
at TestCluster.start (/var/lib/buildkite-agent/builds/kb-n2-4-spot-fea803905140333a/elastic/kibana-on-merge/kibana/packages/kbn-test/src/es/test_es_cluster.ts:298:9)
at startES (/var/lib/buildkite-agent/builds/kb-n2-4-spot-fea803905140333a/elastic/kibana-on-merge/kibana/packages/core/test-helpers/core-test-helpers-kbn-server/src/create_root.ts:268:7)
at Object.<anonymous> (/var/lib/buildkite-agent/builds/kb-n2-4-spot-fea803905140333a/elastic/kibana-on-merge/kibana/src/plugins/content_management/server/event_stream/es/integration_tests/es_event_stream_client.test.ts:35:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/35493#018a9513-80fc-4dcd-b5c9-f28916457eed)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Integration Tests.src/plugins/content_management/server/event_stream/es/integration_tests","test.name":"EsEventStreamClient .filter() can filter results for multiple subjects","test.failCount":1}} -->
|
test
|
failing test jest integration tests src plugins content management server event stream es integration tests eseventstreamclient filter can filter results for multiple subjects a test failed on a tracked branch error es exited with code at createclierror var lib buildkite agent builds kb spot elastic kibana on merge kibana packages kbn es src errors ts at var lib buildkite agent builds kb spot elastic kibana on merge kibana packages kbn es src cluster ts at processticksandrejections node internal process task queues at var lib buildkite agent builds kb spot elastic kibana on merge kibana packages kbn es src cluster ts at var lib buildkite agent builds kb spot elastic kibana on merge kibana packages kbn tooling log src tooling log ts at cluster start var lib buildkite agent builds kb spot elastic kibana on merge kibana packages kbn es src cluster ts at var lib buildkite agent builds kb spot elastic kibana on merge kibana packages kbn test src es test es cluster ts at testcluster start var lib buildkite agent builds kb spot elastic kibana on merge kibana packages kbn test src es test es cluster ts at startes var lib buildkite agent builds kb spot elastic kibana on merge kibana packages core test helpers core test helpers kbn server src create root ts at object var lib buildkite agent builds kb spot elastic kibana on merge kibana src plugins content management server event stream es integration tests es event stream client test ts first failure
| 1
|
86,621
| 15,755,699,983
|
IssuesEvent
|
2021-03-31 02:14:29
|
lokesh5654/gittest
|
https://api.github.com/repos/lokesh5654/gittest
|
opened
|
CVE-2020-36048 (High) detected in engine.io-3.2.1.tgz
|
security vulnerability
|
## CVE-2020-36048 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>engine.io-3.2.1.tgz</b></p></summary>
<p>The realtime engine behind Socket.IO. Provides the foundation of a bidirectional connection between client and server</p>
<p>Library home page: <a href="https://registry.npmjs.org/engine.io/-/engine.io-3.2.1.tgz">https://registry.npmjs.org/engine.io/-/engine.io-3.2.1.tgz</a></p>
<p>Path to dependency file: /gittest/package.json</p>
<p>Path to vulnerable library: gittest/node_modules/engine.io/package.json</p>
<p>
Dependency Hierarchy:
- testing-karma-1.1.2.tgz (Root Library)
- karma-4.2.0.tgz
- socket.io-2.1.1.tgz
- :x: **engine.io-3.2.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Engine.IO before 4.0.0 allows attackers to cause a denial of service (resource consumption) via a POST request to the long polling transport.
<p>Publish Date: 2021-01-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36048>CVE-2020-36048</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-36048">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-36048</a></p>
<p>Release Date: 2021-01-08</p>
<p>Fix Resolution: engine.io - 4.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
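If the root dependency chain cannot be upgraded right away, one stopgap (an assumption on my part, not part of the WhiteSource advice) is to pin the transitive `engine.io` with npm's `overrides` field (npm >= 8.3; yarn users would use `resolutions`):
```json
{
  "overrides": {
    "engine.io": "^4.0.0"
  }
}
```
Note that socket.io 2.x was built against engine.io 3.x, so this override may break karma's socket transport at runtime; upgrading the root dev dependency (karma) is the cleaner fix.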
|
True
|
CVE-2020-36048 (High) detected in engine.io-3.2.1.tgz - ## CVE-2020-36048 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>engine.io-3.2.1.tgz</b></p></summary>
<p>The realtime engine behind Socket.IO. Provides the foundation of a bidirectional connection between client and server</p>
<p>Library home page: <a href="https://registry.npmjs.org/engine.io/-/engine.io-3.2.1.tgz">https://registry.npmjs.org/engine.io/-/engine.io-3.2.1.tgz</a></p>
<p>Path to dependency file: /gittest/package.json</p>
<p>Path to vulnerable library: gittest/node_modules/engine.io/package.json</p>
<p>
Dependency Hierarchy:
- testing-karma-1.1.2.tgz (Root Library)
- karma-4.2.0.tgz
- socket.io-2.1.1.tgz
- :x: **engine.io-3.2.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Engine.IO before 4.0.0 allows attackers to cause a denial of service (resource consumption) via a POST request to the long polling transport.
<p>Publish Date: 2021-01-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36048>CVE-2020-36048</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-36048">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-36048</a></p>
<p>Release Date: 2021-01-08</p>
<p>Fix Resolution: engine.io - 4.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in engine io tgz cve high severity vulnerability vulnerable library engine io tgz the realtime engine behind socket io provides the foundation of a bidirectional connection between client and server library home page a href path to dependency file gittest package json path to vulnerable library gittest node modules engine io package json dependency hierarchy testing karma tgz root library karma tgz socket io tgz x engine io tgz vulnerable library vulnerability details engine io before allows attackers to cause a denial of service resource consumption via a post request to the long polling transport publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution engine io step up your open source security game with whitesource
| 0
|
167,372
| 13,023,631,167
|
IssuesEvent
|
2020-07-27 10:20:07
|
linkedpipes/dcat-ap-forms
|
https://api.github.com/repos/linkedpipes/dcat-ap-forms
|
closed
|
In LKOD mode, add publisher IRI
|
enhancement test
|
In the mode switcher, we also need to provide the publisher IRI when in LKOD mode. This is then output in `poskytovatel`.
|
1.0
|
In LKOD mode, add publisher IRI - In the mode switcher, we also need to provide the publisher IRI when in LKOD mode. This is then output in `poskytovatel`.
|
test
|
in lkod mode add publisher iri in the mode switcher we also need to provide publisher iri when in lkod mode this is then outputted in poskytovatel
| 1
|
234,361
| 19,145,009,325
|
IssuesEvent
|
2021-12-02 06:21:20
|
Azure/azure-sdk-for-js
|
https://api.github.com/repos/Azure/azure-sdk-for-js
|
closed
|
Azure Web PubSub Readme Issue
|
Client Docs test-manual-pass WebPubSub
|
1.
Section [link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/web-pubsub/web-pubsub#2-create-and-authenticate-a-webpubsubserviceclient):

Reason:
The code fence uses an incorrect format
Suggestion:
Update the `batch` format to `bash` format
2.
Section [link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/web-pubsub/web-pubsub#2-create-and-authenticate-a-webpubsubserviceclient):

Reason:
Identity package is not imported
Suggestion:
Add code as following:
```js
const { DefaultAzureCredential } = require("@azure/identity");
```
3.
Section [link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/web-pubsub/web-pubsub#access-the-raw-http-response-for-an-operation):

Reason:
Type is not define
Suggestion:
Add `core client` package as following:
```js
import { FullOperationResponse } from "@azure/core-client";
```
@ramya-rao-a, @nickzhums and @bterlson for notification.
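Putting suggestions 1-3 together, a minimal sketch of the corrected client setup (the endpoint and hub name are placeholders, not from the README):
```js
// Combines the suggested fixes: import the identity package and construct
// the service client with AAD credentials. Endpoint/hub are placeholders.
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
const { DefaultAzureCredential } = require("@azure/identity");

const serviceClient = new WebPubSubServiceClient(
  "https://<resource-name>.webpubsub.azure.com",
  new DefaultAzureCredential(),
  "<hubName>"
);
```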
|
1.0
|
Azure Web PubSub Readme Issue - 1.
Section [link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/web-pubsub/web-pubsub#2-create-and-authenticate-a-webpubsubserviceclient):

Reason:
The code fence uses an incorrect format
Suggestion:
Update the `batch` format to `bash` format
2.
Section [link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/web-pubsub/web-pubsub#2-create-and-authenticate-a-webpubsubserviceclient):

Reason:
Identity package is not imported
Suggestion:
Add code as following:
```js
const { DefaultAzureCredential } = require("@azure/identity");
```
3.
Section [link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/web-pubsub/web-pubsub#access-the-raw-http-response-for-an-operation):

Reason:
The type is not defined
Suggestion:
Add `core client` package as following:
```js
import { FullOperationResponse } from "@azure/core-client";
```
@ramya-rao-a, @nickzhums and @bterlson for notification.
|
test
|
azure web pubsub readme issue section reason use incorrect format suggestion update batch format to bash format section reason identity package is not imported suggestion add code as following js const defaultazurecredential require azure identity section reason type is not define suggestion add core client package as following js import fulloperationresponse from azure core client ramya rao a nickzhums and bterlson for notification
| 1
|
77,905
| 3,507,493,848
|
IssuesEvent
|
2016-01-08 13:39:36
|
pombase/curation
|
https://api.github.com/repos/pombase/curation
|
closed
|
make sure all detoxification terms are "cellular"
|
annotation_priority check after update waiting for feedback
|
when this is addressed
https://github.com/geneontology/go-ontology/issues/12027
This will involve
i) migrating detoxification direct annotations to "cellular detoxification"
ii) detoxification of cadmium ion to cellular....
iii) detoxification of copper ion to cellular
iv) detoxification of arsenic-containing substance -> cellular
|
1.0
|
make sure all detoxification terms are "cellular" - when this is addressed
https://github.com/geneontology/go-ontology/issues/12027
This will involve
i) migrating detoxification direct annotations to "cellular detoxification"
ii) detoxification of cadmium ion to cellular....
iii) detoxification of copper ion to cellular
iv) detoxification of arsenic-containing substance -> cellular
|
non_test
|
make sure all detoxification terms are cellular when this is addressed this will involve i migrating detoxification direct annotations to cellular detoxification ii detoxification of cadmium ion to cellular iii detoxification of copper ion to cellular iv detoxification of arsenic containing substance cellular
| 0
|
207,631
| 23,469,979,167
|
IssuesEvent
|
2022-08-16 20:42:34
|
kcp-dev/kcp
|
https://api.github.com/repos/kcp-dev/kcp
|
closed
|
insecure: claimed.internal.apis.kcp.dev/<hash> is not APIExport dependent
|
area/security area/apiexports
|
Attack vector:
1. APIExport A claims resource R and workspace W bound against A has accepted the claim.
2. APIExport B claims resource R and workspace W bound against B has **NOT** accepted the claim.
3. The APIExport VW filters by `claimed.internal.apis.kcp.dev/<hash[:8]>: hash`, which is equal for A and B. Hence, the owner of B gets access despite the user never having accepted B's claim.
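To make the collision concrete, a small Go sketch (the exact hash input is an assumption inferred from the description; only the label-key prefix comes from this issue):
```go
// Hypothetical illustration: if the label hash is derived solely from the
// claimed resource, two different APIExports claiming the same resource
// produce identical selectors -- so B's view matches objects accepted for A.
package main

import (
	"crypto/sha256"
	"fmt"
)

func claimLabel(group, resource string) string {
	// No APIExport identity is mixed into the hash (the assumed flaw).
	h := sha256.Sum256([]byte(group + "/" + resource))
	return fmt.Sprintf("claimed.internal.apis.kcp.dev/%x", h[:4]) // hash[:8] hex chars
}

func main() {
	// Identical for APIExport A and APIExport B claiming the same resource.
	fmt.Println(claimLabel("wildwest.dev", "cowboys"))
}
```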
|
True
|
insecure: claimed.internal.apis.kcp.dev/<hash> is not APIExport dependent - Attack vector:
1. APIExport A claims resource R and workspace W bound against A has accepted the claim.
2. APIExport B claims resource R and workspace W bound against B has **NOT** accepted the claim.
3. The APIExport VW filters by `claimed.internal.apis.kcp.dev/<hash[:8]>: hash`, which is equal for A and B. Hence, the owner of B gets access despite the user never having accepted B's claim.
|
non_test
|
insecure claimed internal apis kcp dev is not apiexport dependent attack vector apiexport a claims resource r and workspace w bound against a has accepted the claim apiexport b claims resource r and workspace w bound against b has not accepted the claim the apiexport vw filters by claimed internal apis kcp dev hash which is equal for a and b hence owner of b gets access despite the user not accepting its claim
| 0
|
44,185
| 23,516,755,032
|
IssuesEvent
|
2022-08-18 22:25:42
|
pulumi/pulumi
|
https://api.github.com/repos/pulumi/pulumi
|
opened
|
Reduce imported TypeScript definition count
|
kind/enhancement impact/performance needs-triage
|
Consider modifying TypeScript provider code generation to optimize for the number of definitions that the TypeScript compiler needs to process during compilation of simple Pulumi programs that use resource providers.
## Issue details
For a motivating example, consider that ~13s is spent compiling TypeScript on a simple program that references an S3 Bucket. This is compared to ~2s spent on a program that does not reference any resources. Note how the compiler needs to process 1101447 lines of definitions vs 10 lines of TypeScript.
```
$ pulumi new aws-typescript
$ tsc --extendedDiagnostics ~/tmp/my-perf-ts-aws-test/test2
Files: 2354
Lines of Library: 26582
Lines of Definitions: 1101447
Lines of TypeScript: 10
Lines of JavaScript: 0
Lines of JSON: 0
Lines of Other: 0
Nodes of Library: 117113
Nodes of Definitions: 2834671
Nodes of TypeScript: 37
Nodes of JavaScript: 0
Nodes of JSON: 0
Nodes of Other: 0
Identifiers: 1070341
Symbols: 649499
Types: 283897
Instantiations: 285896
Memory used: 1126911K
Assignability cache size: 19810
Identity cache size: 4
Subtype cache size: 0
Strict subtype cache size: 0
I/O Read time: 0.36s
Parse time: 4.52s
ResolveModule time: 0.28s
ResolveTypeReference time: 0.00s
Program time: 5.39s
Bind time: 1.73s
Check time: 6.20s
transformTime time: 0.01s
Source Map time: 0.00s
commentTime time: 0.00s
I/O Write time: 0.00s
printTime time: 0.02s
Emit time: 0.02s
Total time: 13.35s
```
Current source code:
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";
// Create an AWS resource (S3 Bucket)
const bucket = new aws.s3.Bucket("my-bucket");
// Export the name of the bucket
export const bucketName = bucket.id;
```
Optimizing the imports in this program gives:
```typescript
import { Bucket } from "@pulumi/aws/s3";
// Create an AWS resource (S3 Bucket)
const bucket = new Bucket("my-bucket");
// Export the name of the bucket
export const bucketName = bucket.id;
```
However, TypeScript compilation remains just as slow. Digging deeper, it appears that `bucket.d.ts` makes these references:
```
import { input as inputs, output as outputs, enums } from "../types";
import { PolicyDocument } from "../iam";
```
There is a way to debug loading of the .d.ts files: `tsc --traceResolution`. It appears that the entire AWS set of definitions is imported. Judicious optimizations here can help ensure that a smaller set is imported; for example, only the S3 definitions.
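One mitigation sketch (an assumption about a possible codegen strategy, not a description of what Pulumi currently emits): make barrel modules lazy, keeping the type via a type-only `import(...)` annotation while deferring the runtime `require` until first property access.
```typescript
// index.ts -- sketch of a lazily loaded barrel module (illustrative pattern;
// assumes CommonJS emit with @types/node so `exports`/`require` resolve).
export const Bucket: typeof import("./bucket").Bucket = undefined as any;

function lazyLoad(target: any, names: string[], load: () => any): void {
    for (const name of names) {
        Object.defineProperty(target, name, {
            get: () => load()[name],     // module is require()'d on first access
            enumerable: true,
            configurable: true,
        });
    }
}

lazyLoad(exports, ["Bucket"], () => require("./bucket"));
```
This mainly cuts runtime startup cost; trimming what `tsc` itself reads additionally requires breaking the `types/input.d.ts` cycles visible in the graph below, which is what eager re-exports currently pull in: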
```dot
digraph G {
"/Users/anton/tmp/my-perf-ts-aws-test/index.ts" -> "@pulumi/aws/s3/bucket.d.ts";
"@pulumi/aws/s3/bucket.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucket.d.ts" -> "@pulumi/aws/iam/index.d.ts";
"@pulumi/aws/types/index.d.ts" -> "@pulumi/aws/types/enums/index.d.ts";
"@pulumi/aws/types/index.d.ts" -> "@pulumi/aws/types/input.d.ts";
"@pulumi/aws/types/index.d.ts" -> "@pulumi/aws/types/output.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/alb/index.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/applicationloadbalancing/index.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/autoscaling/index.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/ec2/index.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/iam/index.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/lambda/index.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/rds/index.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/route53/index.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/s3/index.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/ssm/index.d.ts";
"@pulumi/aws/types/input.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/types/input.d.ts" -> "@pulumi/aws/s3/index.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/accessPoint.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/accountPublicAccessBlock.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/analyticsConfiguration.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucket.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketAccelerateConfigurationV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketAclV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketCorsConfigurationV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketIntelligentTieringConfiguration.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketLifecycleConfigurationV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketLoggingV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketMetric.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketNotification.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketObject.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketObjectLockConfigurationV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketObjectv2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketOwnershipControls.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketPolicy.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketPublicAccessBlock.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketReplicationConfig.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketRequestPaymentConfigurationV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketServerSideEncryptionConfigurationV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketVersioningV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketWebsiteConfigurationV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/cannedAcl.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/getAccountPublicAccessBlock.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/getBucket.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/getBucketObject.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/getBucketObjects.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/getBucketPolicy.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/getCanonicalUserId.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/getObject.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/getObjects.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/inventory.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/objectCopy.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/routingRules.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/s3Mixins.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/types/enums/s3/index.d.ts";
"@pulumi/aws/s3/accessPoint.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/analyticsConfiguration.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketAclV2.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketCorsConfigurationV2.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketIntelligentTieringConfiguration.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketLifecycleConfigurationV2.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketLoggingV2.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketMetric.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketNotification.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketObject.d.ts" -> "@pulumi/aws/s3/index.d.ts";
"@pulumi/aws/s3/bucketObjectLockConfigurationV2.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketObjectv2.d.ts" -> "@pulumi/aws/s3/index.d.ts";
"@pulumi/aws/s3/bucketOwnershipControls.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketPolicy.d.ts" -> "@pulumi/aws/iam/index.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/accessKey.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/accountAlias.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/accountPasswordPolicy.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/documents.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getAccountAlias.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getGroup.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getInstanceProfile.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getInstanceProfiles.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getOpenidConnectProvider.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getPolicy.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getPolicyDocument.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getRole.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getRoles.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getSamlProvider.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getServerCertificate.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getSessionContext.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getUser.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getUserSshKey.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getUsers.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/group.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/groupMembership.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/groupPolicy.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/groupPolicyAttachment.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/instanceProfile.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/managedPolicies.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/openIdConnectProvider.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/policy.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/policyAttachment.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/principals.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/role.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/rolePolicy.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/rolePolicyAttachment.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/samlProvider.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/serverCertificate.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/serviceLinkedRole.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/serviceSpecificCredential.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/signingCertificate.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/sshKey.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/user.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/userGroupMembership.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/userLoginProfile.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/userPolicy.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/userPolicyAttachment.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/virtualMfaDevice.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/types/enums/iam/index.d.ts";
"@pulumi/aws/iam/getGroup.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/iam/getPolicyDocument.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/iam/groupPolicy.d.ts" -> "@pulumi/aws/iam/index.d.ts";
"@pulumi/aws/iam/groupPolicyAttachment.d.ts" -> "@pulumi/aws/index.d.ts";
"@pulumi/aws/iam/groupPolicyAttachment.d.ts" -> "@pulumi/aws/iam/index.d.ts";
"@pulumi/aws/index.d.ts" -> "@pulumi/aws/arn.d.ts";
"@pulumi/aws/index.d.ts" -> "@pulumi/aws/awsMixins.d.ts";
"@pulumi/aws/index.d.ts" -> "@pulumi/aws/getAmi.d.ts";
"@pulumi/aws/index.d.ts" -> "@pulumi/aws/getAmiIds.d.ts";
.....
```
### Affected area/feature
Node codegen.
|
True
|
Reduce imported TypeScript definition count - Consider modifying TypeScript provider code generation to optimize for the number of definitions that the TypeScript compiler needs to process during compilation of simple Pulumi programs that use resource providers.
## Issue details
For a motivating example, consider that ~13s is spent compiling TypeScript on a simple program that references an S3 Bucket, compared to ~2s spent on a program that does not reference any resources. Note how the compiler needs to process 1101447 lines of definitions vs 10 lines of TypeScript.
```
$ pulumi new aws-typescript
$ tsc --extendedDiagnostics ~/tmp/my-perf-ts-aws-test/test2
Files: 2354
Lines of Library: 26582
Lines of Definitions: 1101447
Lines of TypeScript: 10
Lines of JavaScript: 0
Lines of JSON: 0
Lines of Other: 0
Nodes of Library: 117113
Nodes of Definitions: 2834671
Nodes of TypeScript: 37
Nodes of JavaScript: 0
Nodes of JSON: 0
Nodes of Other: 0
Identifiers: 1070341
Symbols: 649499
Types: 283897
Instantiations: 285896
Memory used: 1126911K
Assignability cache size: 19810
Identity cache size: 4
Subtype cache size: 0
Strict subtype cache size: 0
I/O Read time: 0.36s
Parse time: 4.52s
ResolveModule time: 0.28s
ResolveTypeReference time: 0.00s
Program time: 5.39s
Bind time: 1.73s
Check time: 6.20s
transformTime time: 0.01s
Source Map time: 0.00s
commentTime time: 0.00s
I/O Write time: 0.00s
printTime time: 0.02s
Emit time: 0.02s
Total time: 13.35s
```
Current source code:
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";
// Create an AWS resource (S3 Bucket)
const bucket = new aws.s3.Bucket("my-bucket");
// Export the name of the bucket
export const bucketName = bucket.id;
```
Optimizing the imports in this program gives:
```typescript
import { Bucket } from "@pulumi/aws/s3";
// Create an AWS resource (S3 Bucket)
const bucket = new Bucket("my-bucket");
// Export the name of the bucket
export const bucketName = bucket.id;
```
However, TypeScript compilation remains just as slow. Digging deeper, it appears that `bucket.d.ts` makes these references:
```
import { input as inputs, output as outputs, enums } from "../types";
import { PolicyDocument } from "../iam";
```
There is a way to debug the loading of the .d.ts files: `tsc --traceResolution`. It appears that the entire AWS set of definitions is imported. Judicious optimizations here can help ensure that a smaller set is imported; for example, only the S3 definitions.
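For illustration, a minimal sketch of how that trace could be inspected from the shell before distilling it into a graph like the one below (the file names are placeholders, not from the issue):
```bash
# Emit TypeScript's module-resolution trace and count how often modules
# are resolved; index.ts and resolution.log are placeholder names.
tsc --traceResolution index.ts > resolution.log
grep "Resolving module" resolution.log | sort | uniq -c | sort -rn | head
```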
```dot
digraph G {
"/Users/anton/tmp/my-perf-ts-aws-test/index.ts" -> "@pulumi/aws/s3/bucket.d.ts";
"@pulumi/aws/s3/bucket.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucket.d.ts" -> "@pulumi/aws/iam/index.d.ts";
"@pulumi/aws/types/index.d.ts" -> "@pulumi/aws/types/enums/index.d.ts";
"@pulumi/aws/types/index.d.ts" -> "@pulumi/aws/types/input.d.ts";
"@pulumi/aws/types/index.d.ts" -> "@pulumi/aws/types/output.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/alb/index.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/applicationloadbalancing/index.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/autoscaling/index.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/ec2/index.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/iam/index.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/lambda/index.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/rds/index.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/route53/index.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/s3/index.d.ts";
"@pulumi/aws/types/enums/index.d.ts" -> "@pulumi/aws/types/enums/ssm/index.d.ts";
"@pulumi/aws/types/input.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/types/input.d.ts" -> "@pulumi/aws/s3/index.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/accessPoint.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/accountPublicAccessBlock.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/analyticsConfiguration.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucket.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketAccelerateConfigurationV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketAclV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketCorsConfigurationV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketIntelligentTieringConfiguration.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketLifecycleConfigurationV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketLoggingV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketMetric.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketNotification.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketObject.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketObjectLockConfigurationV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketObjectv2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketOwnershipControls.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketPolicy.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketPublicAccessBlock.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketReplicationConfig.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketRequestPaymentConfigurationV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketServerSideEncryptionConfigurationV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketVersioningV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/bucketWebsiteConfigurationV2.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/cannedAcl.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/getAccountPublicAccessBlock.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/getBucket.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/getBucketObject.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/getBucketObjects.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/getBucketPolicy.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/getCanonicalUserId.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/getObject.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/getObjects.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/inventory.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/objectCopy.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/routingRules.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/s3/s3Mixins.d.ts";
"@pulumi/aws/s3/index.d.ts" -> "@pulumi/aws/types/enums/s3/index.d.ts";
"@pulumi/aws/s3/accessPoint.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/analyticsConfiguration.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketAclV2.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketCorsConfigurationV2.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketIntelligentTieringConfiguration.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketLifecycleConfigurationV2.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketLoggingV2.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketMetric.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketNotification.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketObject.d.ts" -> "@pulumi/aws/s3/index.d.ts";
"@pulumi/aws/s3/bucketObjectLockConfigurationV2.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketObjectv2.d.ts" -> "@pulumi/aws/s3/index.d.ts";
"@pulumi/aws/s3/bucketOwnershipControls.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/s3/bucketPolicy.d.ts" -> "@pulumi/aws/iam/index.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/accessKey.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/accountAlias.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/accountPasswordPolicy.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/documents.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getAccountAlias.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getGroup.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getInstanceProfile.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getInstanceProfiles.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getOpenidConnectProvider.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getPolicy.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getPolicyDocument.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getRole.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getRoles.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getSamlProvider.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getServerCertificate.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getSessionContext.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getUser.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getUserSshKey.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/getUsers.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/group.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/groupMembership.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/groupPolicy.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/groupPolicyAttachment.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/instanceProfile.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/managedPolicies.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/openIdConnectProvider.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/policy.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/policyAttachment.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/principals.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/role.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/rolePolicy.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/rolePolicyAttachment.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/samlProvider.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/serverCertificate.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/serviceLinkedRole.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/serviceSpecificCredential.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/signingCertificate.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/sshKey.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/user.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/userGroupMembership.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/userLoginProfile.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/userPolicy.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/userPolicyAttachment.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/iam/virtualMfaDevice.d.ts";
"@pulumi/aws/iam/index.d.ts" -> "@pulumi/aws/types/enums/iam/index.d.ts";
"@pulumi/aws/iam/getGroup.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/iam/getPolicyDocument.d.ts" -> "@pulumi/aws/types/index.d.ts";
"@pulumi/aws/iam/groupPolicy.d.ts" -> "@pulumi/aws/iam/index.d.ts";
"@pulumi/aws/iam/groupPolicyAttachment.d.ts" -> "@pulumi/aws/index.d.ts";
"@pulumi/aws/iam/groupPolicyAttachment.d.ts" -> "@pulumi/aws/iam/index.d.ts";
"@pulumi/aws/index.d.ts" -> "@pulumi/aws/arn.d.ts";
"@pulumi/aws/index.d.ts" -> "@pulumi/aws/awsMixins.d.ts";
"@pulumi/aws/index.d.ts" -> "@pulumi/aws/getAmi.d.ts";
"@pulumi/aws/index.d.ts" -> "@pulumi/aws/getAmiIds.d.ts";
.....
```
### Affected area/feature
Node codegen.
|
non_test
|
reduce imported typescript definition count consider modifying typescript provider code generation to optimize for the number of definitions that typescript compiler needs to process during compilation of simple pulumi programs that use resource providers issue details for a motivating example consider that is spent compiling typescript on a simple program that references an bucket this is compared to spent on a program that does not refernce any resources note how the compiler needs to process lines of definitions vs lines of typescript pulumi new aws typescript tsc extendeddiagnostics tmp my perf ts aws test files lines of library lines of definitions lines of typescript lines of javascript lines of json lines of other nodes of library nodes of definitions nodes of typescript nodes of javascript nodes of json nodes of other identifiers symbols types instantiations memory used assignability cache size identity cache size subtype cache size strict subtype cache size i o read time parse time resolvemodule time resolvetypereference time program time bind time check time transformtime time source map time commenttime time i o write time printtime time emit time total time current source code typescript import as pulumi from pulumi pulumi import as aws from pulumi aws import as awsx from pulumi awsx create an aws resource bucket const bucket new aws bucket my bucket export the name of the bucket export const bucketname bucket id optimizing the imports in this program gives typescript import bucket from pulumi aws create an aws resource bucket const bucket new bucket my bucket export the name of the bucket export const bucketname bucket id however typescript compilation remains just as slow digging deeper it appears that bucket d ts makes these references import input as inputs output as outputs enums from types import policydocument from iam there is a way to debug loading of the d ts files tsc traceresolution it appears that the entire aws set of definitions is imported judicious optimizations here can help ensure that a smaller set is imported for example only definitions dot digraph g users anton tmp my perf ts aws test index ts pulumi aws bucket d ts pulumi aws bucket d ts pulumi aws types index d ts pulumi aws bucket d ts pulumi aws iam index d ts pulumi aws types index d ts pulumi aws types enums index d ts pulumi aws types index d ts pulumi aws types input d ts pulumi aws types index d ts pulumi aws types output d ts pulumi aws types enums index d ts pulumi aws types enums alb index d ts pulumi aws types enums index d ts pulumi aws types enums applicationloadbalancing index d ts pulumi aws types enums index d ts pulumi aws types enums autoscaling index d ts pulumi aws types enums index d ts pulumi aws types enums index d ts pulumi aws types enums index d ts pulumi aws types enums iam index d ts pulumi aws types enums index d ts pulumi aws types enums lambda index d ts pulumi aws types enums index d ts pulumi aws types enums rds index d ts pulumi aws types enums index d ts pulumi aws types enums index d ts pulumi aws types enums index d ts pulumi aws types enums index d ts pulumi aws types enums index d ts pulumi aws types enums ssm index d ts pulumi aws types input d ts pulumi aws types index d ts pulumi aws types input d ts pulumi aws index d ts pulumi aws index d ts pulumi aws accesspoint d ts pulumi aws index d ts pulumi aws accountpublicaccessblock d ts pulumi aws index d ts pulumi aws analyticsconfiguration d ts pulumi aws index d ts pulumi aws bucket d ts pulumi aws index d ts 
pulumi aws d ts pulumi aws index d ts pulumi aws d ts pulumi aws index d ts pulumi aws d ts pulumi aws index d ts pulumi aws bucketintelligenttieringconfiguration d ts pulumi aws index d ts pulumi aws d ts pulumi aws index d ts pulumi aws d ts pulumi aws index d ts pulumi aws bucketmetric d ts pulumi aws index d ts pulumi aws bucketnotification d ts pulumi aws index d ts pulumi aws bucketobject d ts pulumi aws index d ts pulumi aws d ts pulumi aws index d ts pulumi aws d ts pulumi aws index d ts pulumi aws bucketownershipcontrols d ts pulumi aws index d ts pulumi aws bucketpolicy d ts pulumi aws index d ts pulumi aws bucketpublicaccessblock d ts pulumi aws index d ts pulumi aws bucketreplicationconfig d ts pulumi aws index d ts pulumi aws d ts pulumi aws index d ts pulumi aws d ts pulumi aws index d ts pulumi aws d ts pulumi aws index d ts pulumi aws d ts pulumi aws index d ts pulumi aws d ts pulumi aws index d ts pulumi aws cannedacl d ts pulumi aws index d ts pulumi aws getaccountpublicaccessblock d ts pulumi aws index d ts pulumi aws getbucket d ts pulumi aws index d ts pulumi aws getbucketobject d ts pulumi aws index d ts pulumi aws getbucketobjects d ts pulumi aws index d ts pulumi aws getbucketpolicy d ts pulumi aws index d ts pulumi aws getcanonicaluserid d ts pulumi aws index d ts pulumi aws getobject d ts pulumi aws index d ts pulumi aws getobjects d ts pulumi aws index d ts pulumi aws inventory d ts pulumi aws index d ts pulumi aws objectcopy d ts pulumi aws index d ts pulumi aws routingrules d ts pulumi aws index d ts pulumi aws d ts pulumi aws index d ts pulumi aws types enums index d ts pulumi aws accesspoint d ts pulumi aws types index d ts pulumi aws analyticsconfiguration d ts pulumi aws types index d ts pulumi aws d ts pulumi aws types index d ts pulumi aws d ts pulumi aws types index d ts pulumi aws bucketintelligenttieringconfiguration d ts pulumi aws types index d ts pulumi aws d ts pulumi aws types index d ts pulumi aws d ts pulumi aws types index d ts pulumi aws bucketmetric d ts pulumi aws types index d ts pulumi aws bucketnotification d ts pulumi aws types index d ts pulumi aws bucketobject d ts pulumi aws index d ts pulumi aws d ts pulumi aws types index d ts pulumi aws d ts pulumi aws index d ts pulumi aws bucketownershipcontrols d ts pulumi aws types index d ts pulumi aws bucketpolicy d ts pulumi aws iam index d ts pulumi aws iam index d ts pulumi aws iam accesskey d ts pulumi aws iam index d ts pulumi aws iam accountalias d ts pulumi aws iam index d ts pulumi aws iam accountpasswordpolicy d ts pulumi aws iam index d ts pulumi aws iam documents d ts pulumi aws iam index d ts pulumi aws iam getaccountalias d ts pulumi aws iam index d ts pulumi aws iam getgroup d ts pulumi aws iam index d ts pulumi aws iam getinstanceprofile d ts pulumi aws iam index d ts pulumi aws iam getinstanceprofiles d ts pulumi aws iam index d ts pulumi aws iam getopenidconnectprovider d ts pulumi aws iam index d ts pulumi aws iam getpolicy d ts pulumi aws iam index d ts pulumi aws iam getpolicydocument d ts pulumi aws iam index d ts pulumi aws iam getrole d ts pulumi aws iam index d ts pulumi aws iam getroles d ts pulumi aws iam index d ts pulumi aws iam getsamlprovider d ts pulumi aws iam index d ts pulumi aws iam getservercertificate d ts pulumi aws iam index d ts pulumi aws iam getsessioncontext d ts pulumi aws iam index d ts pulumi aws iam getuser d ts pulumi aws iam index d ts pulumi aws iam getusersshkey d ts pulumi aws iam index d ts pulumi aws iam getusers d ts pulumi aws iam index 
d ts pulumi aws iam group d ts pulumi aws iam index d ts pulumi aws iam groupmembership d ts pulumi aws iam index d ts pulumi aws iam grouppolicy d ts pulumi aws iam index d ts pulumi aws iam grouppolicyattachment d ts pulumi aws iam index d ts pulumi aws iam instanceprofile d ts pulumi aws iam index d ts pulumi aws iam managedpolicies d ts pulumi aws iam index d ts pulumi aws iam openidconnectprovider d ts pulumi aws iam index d ts pulumi aws iam policy d ts pulumi aws iam index d ts pulumi aws iam policyattachment d ts pulumi aws iam index d ts pulumi aws iam principals d ts pulumi aws iam index d ts pulumi aws iam role d ts pulumi aws iam index d ts pulumi aws iam rolepolicy d ts pulumi aws iam index d ts pulumi aws iam rolepolicyattachment d ts pulumi aws iam index d ts pulumi aws iam samlprovider d ts pulumi aws iam index d ts pulumi aws iam servercertificate d ts pulumi aws iam index d ts pulumi aws iam servicelinkedrole d ts pulumi aws iam index d ts pulumi aws iam servicespecificcredential d ts pulumi aws iam index d ts pulumi aws iam signingcertificate d ts pulumi aws iam index d ts pulumi aws iam sshkey d ts pulumi aws iam index d ts pulumi aws iam user d ts pulumi aws iam index d ts pulumi aws iam usergroupmembership d ts pulumi aws iam index d ts pulumi aws iam userloginprofile d ts pulumi aws iam index d ts pulumi aws iam userpolicy d ts pulumi aws iam index d ts pulumi aws iam userpolicyattachment d ts pulumi aws iam index d ts pulumi aws iam virtualmfadevice d ts pulumi aws iam index d ts pulumi aws types enums iam index d ts pulumi aws iam getgroup d ts pulumi aws types index d ts pulumi aws iam getpolicydocument d ts pulumi aws types index d ts pulumi aws iam grouppolicy d ts pulumi aws iam index d ts pulumi aws iam grouppolicyattachment d ts pulumi aws index d ts pulumi aws iam grouppolicyattachment d ts pulumi aws iam index d ts pulumi aws index d ts pulumi aws arn d ts pulumi aws index d ts pulumi aws awsmixins d ts pulumi aws index d ts pulumi aws getami d ts pulumi aws index d ts pulumi aws getamiids d ts affected area feature node codegen
| 0
|
180,691
| 13,943,100,458
|
IssuesEvent
|
2020-10-22 22:18:13
|
kaetemi/ryzomclassic
|
https://api.github.com/repos/kaetemi/ryzomclassic
|
closed
|
Disable all r2 DRM options (UI)
|
blocking bug r2 ready for test
|
MD5 security check disabled in #147
Disable check for OtherCharAccess
|
1.0
|
Disable all r2 DRM options (UI) - MD5 security check disabled in #147
Disable check for OtherCharAccess
|
test
|
disable all drm options ui security check disabled in disable check for othercharaccess
| 1
|
115,850
| 9,815,056,426
|
IssuesEvent
|
2019-06-13 11:44:04
|
WordPress/gutenberg
|
https://api.github.com/repos/WordPress/gutenberg
|
closed
|
how to use a reusable block in a template_
|
Needs Testing [Type] Help Request
|
hi
i have a post template that looks like this:
`<?php
function my_add_template_to_posts() {
$post_type_object = get_post_type_object( 'post' );
$post_type_object->template = array(
array( 'core/paragraph', array(
'placeholder' => __('Start writing your post'),
) ),
);
//$post_type_object->template_lock = 'insert';
}
add_action( 'init', 'my_add_template_to_posts' );`
what should i do in order to use a reusable block here? like social share? or whatever other reusable block i have?
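A hedged sketch of one possible approach (the ID `123` is a placeholder): reusable blocks are stored as `wp_block` posts, and a template can reference one through the `core/block` block type with a `ref` attribute pointing at that post's ID.
```php
<?php
// Hypothetical sketch: reference an existing reusable block
// (a wp_block post with ID 123) from the post template.
function my_add_template_to_posts() {
    $post_type_object = get_post_type_object( 'post' );
    $post_type_object->template = array(
        array( 'core/block', array( 'ref' => 123 ) ), // the reusable block
        array( 'core/paragraph', array(
            'placeholder' => __( 'Start writing your post' ),
        ) ),
    );
}
add_action( 'init', 'my_add_template_to_posts' );
```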
|
1.0
|
how to use a reusable block in a template_ - hi
i have a post template that looks like this:
`<?php
function my_add_template_to_posts() {
$post_type_object = get_post_type_object( 'post' );
$post_type_object->template = array(
array( 'core/paragraph', array(
'placeholder' => __('Start writing your post'),
) ),
);
//$post_type_object->template_lock = 'insert';
}
add_action( 'init', 'my_add_template_to_posts' );`
what should i do in order to use a reusable block here? like social share? or whatever other reusable block i have?
|
test
|
how to use a reusable block in a template hi i have a post template that looks like this php function my add template to posts post type object get post type object post post type object template array array core paragraph array placeholder start writing your post post type object template lock insert add action init my add template to posts what should i do in order to use a reusable block here like social share or whatever other reusable block i have
| 1
|
171,539
| 13,236,959,379
|
IssuesEvent
|
2020-08-18 20:44:49
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
closed
|
[CI] ESTestCaseTests.testRandomDateFormatterPattern fails reproducibly
|
>test-failure
|
**Build scan**:
https://gradle-enterprise.elastic.co/s/pui6fsuhb34yu
**Repro line**:
./gradlew ':test:framework:test' --tests "org.elasticsearch.test.test.ESTestCaseTests.testRandomDateFormatterPattern" \
-Dtests.seed=A30749E4133D87A5 \
-Dtests.security.manager=true \
-Dtests.locale=ja-JP \
-Dtests.timezone=Asia/Dacca \
-Druntime.java=14
**Reproduces locally?**:
Yes
**Applicable branches**:
master
**Failure history**:
Another failure on 7.x earlier today: https://gradle-enterprise.elastic.co/s/7sxkmswcglt6a
**Failure excerpt**:
java.lang.AssertionError:
Expected: <0L>
but: was <-259200000L>
at __randomizedtesting.SeedInfo.seed([A30749E4133D87A5:AE6E17DEFD8D5E5E]:0)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at org.elasticsearch.test.test.ESTestCaseTests.testRandomDateFormatterPattern(ESTestCaseTests.java:207)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Looking at the commit history, my guess is that https://github.com/elastic/elasticsearch/commit/ca36bca5bd3d51c69a726281e757b5ba7643d445 might have something to do with this. @nik9000 can you check, and otherwise assign someone else or another team?
|
1.0
|
[CI] ESTestCaseTests.testRandomDateFormatterPattern fails reproducibly - **Build scan**:
https://gradle-enterprise.elastic.co/s/pui6fsuhb34yu
**Repro line**:
./gradlew ':test:framework:test' --tests "org.elasticsearch.test.test.ESTestCaseTests.testRandomDateFormatterPattern" \
-Dtests.seed=A30749E4133D87A5 \
-Dtests.security.manager=true \
-Dtests.locale=ja-JP \
-Dtests.timezone=Asia/Dacca \
-Druntime.java=14
**Reproduces locally?**:
Yes
**Applicable branches**:
master
**Failure history**:
Another failure on 7.x earlier today: https://gradle-enterprise.elastic.co/s/7sxkmswcglt6a
**Failure excerpt**:
java.lang.AssertionError:
Expected: <0L>
but: was <-259200000L>
at __randomizedtesting.SeedInfo.seed([A30749E4133D87A5:AE6E17DEFD8D5E5E]:0)
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at org.elasticsearch.test.test.ESTestCaseTests.testRandomDateFormatterPattern(ESTestCaseTests.java:207)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Looking at the commit history, my guess is that https://github.com/elastic/elasticsearch/commit/ca36bca5bd3d51c69a726281e757b5ba7643d445 might have something to do with this. @nik9000 can you check, and otherwise assign someone else or another team?
|
test
|
estestcasetests testrandomdateformatterpattern fails reproducibly build scan repro line gradlew test framework test tests org elasticsearch test test estestcasetests testrandomdateformatterpattern dtests seed dtests security manager true dtests locale ja jp dtests timezone asia dacca druntime java reproduces locally yes applicable branches master failure history another failure on x earlier today failure excerpt java lang assertionerror expected but was at randomizedtesting seedinfo seed at org hamcrest matcherassert assertthat matcherassert java at org junit assert assertthat assert java at org junit assert assertthat assert java at org elasticsearch test test estestcasetests testrandomdateformatterpattern estestcasetests java at java base jdk internal reflect nativemethodaccessorimpl native method looking at the commit history my guess is might have to do something with this can you check and otherwise assign someone else or another team
| 1
|
509,657
| 14,741,166,907
|
IssuesEvent
|
2021-01-07 10:12:07
|
wso2/integration-studio
|
https://api.github.com/repos/wso2/integration-studio
|
closed
|
Environment variable settings are lost after restarting the Integration Studio
|
Priority/High
|
**Description:**
The environment variables can be added to Integration Studio under the default server configuration (Micro Integrator Server 1.2.0) as explained in the blog post [1]. But after restarting the Integration Studio, the previously added environment variables are lost and have to be added again.
The cause of the above behavior seems to be that the 'Micro Integrator Server 1.2.0' entry under 'Generic Server' in the Integration Studio 'Run Configurations' is removed during a restart of the Integration Studio.
Is it possible to correct the above behavior in Integration Studio?
[1] https://medium.com/think-integration/how-to-inject-environment-variables-to-wso2-integration-studio-runtime-83b8f0cb882e
|
1.0
|
Environment variable settings are lost after restarting the Integration Studio - **Description:**
The environment variables can be added to Integration Studio under the default server configuration (Micro Integrator Server 1.2.0) as explained in the blog post [1]. But after restarting the Integration Studio, the previously added environment variables are lost and have to be added again.
The cause of the above behavior seems to be that the 'Micro Integrator Server 1.2.0' entry under 'Generic Server' in the Integration Studio 'Run Configurations' is removed during a restart of the Integration Studio.
Is it possible to correct the above behavior in Integration Studio?
[1] https://medium.com/think-integration/how-to-inject-environment-variables-to-wso2-integration-studio-runtime-83b8f0cb882e
|
non_test
|
environment variable settings are lost after restarting the integration studio description the environment variables can be added to integration studio under the default server configuration micro integrator server as explained in the blog post but after restarting the integration studio the previously added environment variables are lost and it requires adding environment variables again the cause for the above behavior seems to be due to removing the micro integrator server under generic server in the integration studio run configurations during a restart of the integration studio is there a possibility to correct the above behavior in integration studio
| 0
|
1,805
| 2,573,981,081
|
IssuesEvent
|
2015-02-11 14:21:04
|
KoffeinFlummi/AGM
|
https://api.github.com/repos/KoffeinFlummi/AGM
|
closed
|
Big performance drop with nothing happening just from AGM
|
enhancement needs testing
|
What we did
- Added the AGM mod from play with six with no modules added or removed.
- Launch the game with just that mod
- Open the editor and place a nato unit in the main airport and a local town as a custom mission
- Run the game with and without the mod running.
What we expected
That performance would be identical between the two tests (AGM enabled and not).
What actually happened
Performance in vanilla was 83 frames per second, and 60 with AGM. That is a sizeable reduction in performance just from starting the game with this mod.
Should hopefully be easy to verify, likely harder to track down and find the cause.
|
1.0
|
Big performance drop with nothing happening just from AGM - What we did
- Added the AGM mod from play with six with no modules added or removed.
- Launch the game with just that mod
- Open the editor and place a nato unit in the main airport and a local town as a custom mission
- Run the game with and without the mod running.
What we expected
That performance would be identical between the two tests (AGM enabled and not).
What actually happened
Performance in vanilla was 83 frames per second, and 60 with AGM. That is a sizeable reduction in performance just from starting the game with this mod.
Should hopefully be easy to verify, likely harder to track down and find the cause.
|
test
|
big performance drop with nothing happening just from agm what we did added the agm mod from play with six with no modules added or removed launch the game with just that mod open the editor and place a nato unit in the main airport and a local town as a custom mission run the game with and without the mod running what we expected that performance would be identical between the two tests agm enabled and not what actually happened performance in vanilla was frames per second and with agm that is a sizeable reduction in performance just from starting the game with this mod should hopefully be easy to verify likely harder to track down and find the cause
| 1
|
319,727
| 27,397,634,142
|
IssuesEvent
|
2023-02-28 21:04:51
|
pysal/momepy
|
https://api.github.com/repos/pysal/momepy
|
closed
|
remove `libpysal` pin in dev?
|
testing/CI
|
https://github.com/pysal/momepy/blob/a3c79cd8e54b73bcda54c197afa5c2570f3bbbaf/ci/envs/311-dev.yaml#L9
Should we remove the `libpysal>=4.6.0` pin in `311-dev.yaml`, since we are also installing it from `pip`?
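For illustration, a hedged sketch of what the relevant part of `311-dev.yaml` could look like with the conda pin dropped (the surrounding package list is an assumption, not taken from the file):
```yaml
# hypothetical excerpt: libpysal is no longer pinned via conda;
# the development version comes from pip instead
dependencies:
  - python=3.11
  - geopandas
  - pip
  - pip:
      - git+https://github.com/pysal/libpysal.git
```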
|
1.0
|
remove `libpysal` pin in dev? - https://github.com/pysal/momepy/blob/a3c79cd8e54b73bcda54c197afa5c2570f3bbbaf/ci/envs/311-dev.yaml#L9
Should we remove the `libpysal>=4.6.0` pin in `311-dev.yaml`, since we are also installing it from `pip`?
|
test
|
remove libpysal pin in dev should we remove the libpysal pin in dev yaml since we are also installing it from pip
| 1
|
68,171
| 14,912,693,634
|
IssuesEvent
|
2021-01-22 13:04:35
|
SSanjeevi/fastpages
|
https://api.github.com/repos/SSanjeevi/fastpages
|
opened
|
CVE-2020-26247 (Medium) detected in nokogiri-1.10.10.gem
|
security vulnerability
|
## CVE-2020-26247 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>nokogiri-1.10.10.gem</b></p></summary>
<p>Nokogiri (鋸) is an HTML, XML, SAX, and Reader parser. Among
Nokogiri's many features is the ability to search documents via XPath
or CSS3 selectors.</p>
<p>Library home page: <a href="https://rubygems.org/gems/nokogiri-1.10.10.gem">https://rubygems.org/gems/nokogiri-1.10.10.gem</a></p>
<p>
Dependency Hierarchy:
- jemoji-0.12.0.gem (Root Library)
- html-pipeline-2.14.0.gem
- :x: **nokogiri-1.10.10.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/SSanjeevi/fastpages/commit/d662cb4a65c0b5cf384407ac114030614c2d8a7f">d662cb4a65c0b5cf384407ac114030614c2d8a7f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Nokogiri is a Rubygem providing HTML, XML, SAX, and Reader parsers with XPath and CSS selector support. In Nokogiri before version 1.11.0.rc4 there is an XXE vulnerability. XML Schemas parsed by Nokogiri::XML::Schema are trusted by default, allowing external resources to be accessed over the network, potentially enabling XXE or SSRF attacks. This behavior is counter to the security policy followed by Nokogiri maintainers, which is to treat all input as untrusted by default whenever possible. This is fixed in Nokogiri version 1.11.0.rc4.
<p>Publish Date: 2020-12-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-26247>CVE-2020-26247</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sparklemotion/nokogiri/releases/tag/v1.11.0.rc4">https://github.com/sparklemotion/nokogiri/releases/tag/v1.11.0.rc4</a></p>
<p>Release Date: 2020-12-30</p>
<p>Fix Resolution: 1.11.0.rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-26247 (Medium) detected in nokogiri-1.10.10.gem - ## CVE-2020-26247 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>nokogiri-1.10.10.gem</b></p></summary>
<p>Nokogiri (鋸) is an HTML, XML, SAX, and Reader parser. Among
Nokogiri's many features is the ability to search documents via XPath
or CSS3 selectors.</p>
<p>Library home page: <a href="https://rubygems.org/gems/nokogiri-1.10.10.gem">https://rubygems.org/gems/nokogiri-1.10.10.gem</a></p>
<p>
Dependency Hierarchy:
- jemoji-0.12.0.gem (Root Library)
- html-pipeline-2.14.0.gem
- :x: **nokogiri-1.10.10.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/SSanjeevi/fastpages/commit/d662cb4a65c0b5cf384407ac114030614c2d8a7f">d662cb4a65c0b5cf384407ac114030614c2d8a7f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Nokogiri is a Rubygem providing HTML, XML, SAX, and Reader parsers with XPath and CSS selector support. In Nokogiri before version 1.11.0.rc4 there is an XXE vulnerability. XML Schemas parsed by Nokogiri::XML::Schema are trusted by default, allowing external resources to be accessed over the network, potentially enabling XXE or SSRF attacks. This behavior is counter to the security policy followed by Nokogiri maintainers, which is to treat all input as untrusted by default whenever possible. This is fixed in Nokogiri version 1.11.0.rc4.
<p>Publish Date: 2020-12-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-26247>CVE-2020-26247</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sparklemotion/nokogiri/releases/tag/v1.11.0.rc4">https://github.com/sparklemotion/nokogiri/releases/tag/v1.11.0.rc4</a></p>
<p>Release Date: 2020-12-30</p>
<p>Fix Resolution: 1.11.0.rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in nokogiri gem cve medium severity vulnerability vulnerable library nokogiri gem nokogiri 鋸 is an html xml sax and reader parser among nokogiri s many features is the ability to search documents via xpath or selectors library home page a href dependency hierarchy jemoji gem root library html pipeline gem x nokogiri gem vulnerable library found in head commit a href found in base branch master vulnerability details nokogiri is a rubygem providing html xml sax and reader parsers with xpath and css selector support in nokogiri before version there is an xxe vulnerability xml schemas parsed by nokogiri xml schema are trusted by default allowing external resources to be accessed over the network potentially enabling xxe or ssrf attacks this behavior is counter to the security policy followed by nokogiri maintainers which is to treat all input as untrusted by default whenever possible this is fixed in nokogiri version publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
89,330
| 8,200,924,020
|
IssuesEvent
|
2018-09-01 11:15:05
|
soosyze/framework
|
https://api.github.com/repos/soosyze/framework
|
closed
|
Interface Queryflatfile with Travis and Coveralls.
|
evolution unit test
|
The Travis tool makes it possible to test the code via a third-party service; it lets you know which PHP versions Queryflatfile works on.
https://travis-ci.org/
The Coveralls tool likewise tests the code via a third-party service; it lets you see the test coverage of Queryflatfile.
https://coveralls.io
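For illustration, a minimal sketch of such a setup (the PHP versions, paths, and exact tool invocations are assumptions, not taken from the project):
```yaml
# hypothetical minimal .travis.yml for Queryflatfile
language: php
php:
  - '7.0'
  - '7.1'
  - '7.2'
script:
  # produce a Clover coverage report for Coveralls to consume
  - vendor/bin/phpunit --coverage-clover build/logs/clover.xml
after_success:
  - travis_retry php vendor/bin/php-coveralls -v
```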
|
1.0
|
Interface Queryflatfile with Travis and Coveralls. - The Travis tool makes it possible to test the code via a third-party service; it lets you know which PHP versions Queryflatfile works on.
https://travis-ci.org/
The Coveralls tool likewise tests the code via a third-party service; it lets you see the test coverage of Queryflatfile.
https://coveralls.io
|
test
|
interface queryflatfile with travis and coveralls the travis tool makes it possible to test the code via a third party service it lets you know which php versions queryflatfile works on the coveralls tool likewise tests the code via a third party service it lets you see the test coverage of queryflatfile
| 1
|
2,313
| 2,675,956,786
|
IssuesEvent
|
2015-03-25 15:22:33
|
Atmosphere/atmosphere
|
https://api.github.com/repos/Atmosphere/atmosphere
|
opened
|
Support for event based messages like Socket.IO
|
3.0.0 API Changes Documentation Enhancement Help Wanted! Javascript
|
This is a combination of changes between `atmosphere.js` and `atmosphere-runtime`
|
1.0
|
Support for event based messages like Socket.IO - This is a combination of changes between `atmosphere.js` and `atmosphere-runtime`
|
non_test
|
support for event based messages like socket io this is a combination of changes between atmosphere js and atmosphere runtime
| 0
|
244,504
| 26,414,153,588
|
IssuesEvent
|
2023-01-13 14:40:15
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
Improve documentation for Login Selector icons
|
good first issue Team:Security Feature:Security/Authentication docs
|
The icons that we render within the login selector support everything that [EuiIcon](https://elastic.github.io/eui/#/display/icons) supports as a `type`:
* http/https URLs
* predefined ids listed here https://elastic.github.io/eui/#/display/icons
* `data:` URLs (incl. base64)
ex:
```yaml
xpack.security.authc.providers:
basic.basic1:
order: 0
icon: "logoElasticsearch"
hint: "Typically for administrators"
saml.saml1:
order: 1
realm: saml1
description: "Log in with SSO"
# Content of kibana-7.17.1-darwin-x86_64/src/plugins/home/public/assets/logos/system.svg
icon: 'data:image/svg+xml;utf8,<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1000 1000"><path d="M433.41 29.92c-9.81 5.14-15.18 10.04-19.15 17.74-1.87 3.74-9.57 43.43-17.28 88.49-23.58 140.32-43.19 243.98-72.61 384.3l-2.57 12.38-10.27-10.97c-19.61-20.31-45.76-25.92-67.71-14.48-14.71 7.47-24.05 20.31-34.55 46.93-7.7 20.32-9.11 22.18-15.64 23.58-3.97.7-39.22 1.4-78.45 1.4-67.94 0-71.91.23-81.72 4.9C7.54 596.8 1.94 631.82 22.49 651.9c13.08 12.84 17.04 13.31 88.72 13.54 106 0 125.38-3.04 145.69-24.05l7.94-8.17 6.3 13.54c12.14 25.45 28.72 38.76 51.6 41.56 24.52 2.8 42.49-10.97 56.03-43.19 13.31-31.98 39.93-147.56 63.28-272.93 5.6-29.88 10.51-54.4 11.21-54.4.7 0 26.85 140.09 58.14 311.22 31.28 170.91 58.37 314.73 60.24 319.17 4.2 10.51 9.11 15.87 19.85 21.25 18.45 9.57 43.19 3.04 54.4-14.01 5.14-7.71 7.24-16.34 13.07-57.67 12.36-85.22 33.84-204.06 36.87-204.06.7 0 4.67 5.37 8.4 11.91 14.48 24.75 37.82 38.06 66.54 38.29 29.18 0 40.63-9.34 72.15-58.84l16.11-25.21 57.2-1.17c56.5-1.17 57.44-1.17 63.27-6.77 7.94-7.47 11.44-19.15 10.27-34.09-1.4-15.88-8.17-28.72-20.08-37.12l-9.57-6.77-57.43-1.17c-69.58-1.4-77.51-.23-94.33 14.94-6.3 5.84-17.74 19.84-24.98 31.29-7.47 11.44-13.78 20.78-14.01 20.31-.24-.23-2.57-12.61-5.37-27.32-8.64-48.8-19.38-69.81-40.86-80.55-22.41-11.21-48.33-6.31-64.21 11.91-14.47 16.34-30.12 56.03-43.66 110.67-3.5 14.47-6.77 25.68-7.24 24.52-.23-1.4-26.38-142.66-57.9-314.03-63.04-342.74-58.6-323.83-78.21-333.87-10.96-5.61-28.71-6.08-38.51-.71z"/></svg>'
```
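For completeness, a hedged sketch of the http/https URL form as well (the provider name and URL are placeholders, not from the issue):
```yaml
xpack.security.authc.providers:
  saml.saml2:
    order: 2
    realm: saml2
    description: "Log in with corporate SSO"
    # icon loaded from an http/https URL (placeholder)
    icon: "https://example.com/assets/sso-logo.svg"
```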
We should improve our documentation to make the supported icon formats more explicit, so that administrators can more easily customize their login experience.
|
True
|
Improve documentation for Login Selector icons - The icons that we render within the login selector support everything that [EuiIcon](https://elastic.github.io/eui/#/display/icons) supports as a `type`:
* http/https URLs
* predefined ids listed here https://elastic.github.io/eui/#/display/icons
* `data:` URLs (incl. base64)
ex:
```yaml
xpack.security.authc.providers:
basic.basic1:
order: 0
icon: "logoElasticsearch"
hint: "Typically for administrators"
saml.saml1:
order: 1
realm: saml1
description: "Log in with SSO"
# Content of kibana-7.17.1-darwin-x86_64/src/plugins/home/public/assets/logos/system.svg
icon: 'data:image/svg+xml;utf8,<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1000 1000"><path d="M433.41 29.92c-9.81 5.14-15.18 10.04-19.15 17.74-1.87 3.74-9.57 43.43-17.28 88.49-23.58 140.32-43.19 243.98-72.61 384.3l-2.57 12.38-10.27-10.97c-19.61-20.31-45.76-25.92-67.71-14.48-14.71 7.47-24.05 20.31-34.55 46.93-7.7 20.32-9.11 22.18-15.64 23.58-3.97.7-39.22 1.4-78.45 1.4-67.94 0-71.91.23-81.72 4.9C7.54 596.8 1.94 631.82 22.49 651.9c13.08 12.84 17.04 13.31 88.72 13.54 106 0 125.38-3.04 145.69-24.05l7.94-8.17 6.3 13.54c12.14 25.45 28.72 38.76 51.6 41.56 24.52 2.8 42.49-10.97 56.03-43.19 13.31-31.98 39.93-147.56 63.28-272.93 5.6-29.88 10.51-54.4 11.21-54.4.7 0 26.85 140.09 58.14 311.22 31.28 170.91 58.37 314.73 60.24 319.17 4.2 10.51 9.11 15.87 19.85 21.25 18.45 9.57 43.19 3.04 54.4-14.01 5.14-7.71 7.24-16.34 13.07-57.67 12.36-85.22 33.84-204.06 36.87-204.06.7 0 4.67 5.37 8.4 11.91 14.48 24.75 37.82 38.06 66.54 38.29 29.18 0 40.63-9.34 72.15-58.84l16.11-25.21 57.2-1.17c56.5-1.17 57.44-1.17 63.27-6.77 7.94-7.47 11.44-19.15 10.27-34.09-1.4-15.88-8.17-28.72-20.08-37.12l-9.57-6.77-57.43-1.17c-69.58-1.4-77.51-.23-94.33 14.94-6.3 5.84-17.74 19.84-24.98 31.29-7.47 11.44-13.78 20.78-14.01 20.31-.24-.23-2.57-12.61-5.37-27.32-8.64-48.8-19.38-69.81-40.86-80.55-22.41-11.21-48.33-6.31-64.21 11.91-14.47 16.34-30.12 56.03-43.66 110.67-3.5 14.47-6.77 25.68-7.24 24.52-.23-1.4-26.38-142.66-57.9-314.03-63.04-342.74-58.6-323.83-78.21-333.87-10.96-5.61-28.71-6.08-38.51-.71z"/></svg>'
```
We should improve our documentation to make the supported icon formats more explicit, so that administrators can more easily customize their login experience.
|
non_test
|
improve documentation for login selector icons the icons that we render within the login selector supports everything that supports as a type http https urls predefined ids listed here data urls incl ex yaml xpack security authc providers basic order icon logoelasticsearch hint typically for administrators saml order realm description log in with sso content of kibana darwin src plugins home public assets logos system svg icon data image svg xml we should improve our documentation to make the supported icon formats more explicit so that administrators can more easily customize their login experience
| 0
|
86,751
| 8,049,123,316
|
IssuesEvent
|
2018-08-01 09:07:58
|
ClassicWoW/Nefarian_1.12.1_Bugtracker
|
https://api.github.com/repos/ClassicWoW/Nefarian_1.12.1_Bugtracker
|
closed
|
[Raid/AQ40] Raid bosses - WoW Crit Error
|
Mehr Input/Recherche/Tests nötig
|
The fight against the Bug Family in AQ 40 causes a Crit Error.
What behavior is observed?
During the boss fight, some action leads to a Crit Error for up to 20 players.
How should it behave?
The fight should run without error messages.
Steps to reproduce
Unfortunately, no pattern could be identified.
Additional notes
Neither were the players the same, nor was an addon to blame. (Zero addons enabled)
However, the Crit Error occurred every time during the fight against Yauj (http://datenbank.classic-wow.org/?npc=15543#abilities)
Both during the fight against Yauj and shortly after its death. (Add spawn)
Have other guilds perhaps been able to observe similar/identical problems?
Regards
Chichi
|
1.0
|
[Raid/AQ40] Raid bosses - WoW Crit Error - The fight against the Bug Family in AQ 40 causes a Crit Error.
What behavior is observed?
During the boss fight, some action leads to a Crit Error for up to 20 players.
How should it behave?
The fight should run without error messages.
Steps to reproduce
Unfortunately, no pattern could be identified.
Additional notes
Neither were the players the same, nor was an addon to blame. (Zero addons enabled)
However, the Crit Error occurred every time during the fight against Yauj (http://datenbank.classic-wow.org/?npc=15543#abilities)
Both during the fight against Yauj and shortly after its death. (Add spawn)
Have other guilds perhaps been able to observe similar/identical problems?
Regards
Chichi
|
test
|
raid bosses wow crit error the fight against the bug family in aq causes a crit error what behavior is observed during the boss fight some action leads to a crit error for up to players how should it behave the fight should run without error messages steps to reproduce unfortunately no pattern could be identified additional notes neither were the players the same nor was an addon to blame zero addons enabled however the crit error occurred every time during the fight against yauj both during the fight against yauj and shortly after its death add spawn have other guilds perhaps been able to observe similar identical problems regards chichi
| 1
|
34,695
| 14,492,075,591
|
IssuesEvent
|
2020-12-11 06:12:28
|
Azure/azure-cli
|
https://api.github.com/repos/Azure/azure-cli
|
closed
|
Make DedicatedHostGroup.properties.supportAutomaticPlacement default to false
|
Compute Service Team Support Request feature-request
|
**Resource Provider**
<!--- What is the Azure resource provider your feature is part of? --->
Microsoft.Compute
**Description of Feature or Work Requested**
<!--- Provide a brief description of the feature or work requested. A link to conceptual documentation may be helpful too. --->
The initial plan was to default DedicatedHostGroup.properties.supportAutomaticPlacement to true, but recently we have agreed to default the value to false as we GA. The change to default to false has already been made on the service side.
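For context, a hedged sketch of where the property sits in the host group resource payload (field values are illustrative only, not from the request):
```json
{
  "location": "eastus",
  "properties": {
    "platformFaultDomainCount": 2,
    "supportAutomaticPlacement": false
  }
}
```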
**Minimum API Version Required**
<!--- What is the minimum API version of your service required to implement your feature? --->
2020-06-01
**Swagger Link**
<!--- Provide a link to the location of your feature(s) in the REST API specs repo. If your feature(s) has corresponding commit or pull request in the REST API specs repo, provide them. This should be on the master branch of the REST API specs repo. --->
https://github.com/Azure/azure-rest-api-specs/pull/11697
**Target Date**
<!--- If you have a target date for release of this feature/work, please provide it. While we can't guarantee these dates,
it will help us prioritize your request against other requests. --->
The feature is expected to GA on December 10. While having a wrong description is not the biggest issue, it would be good to have this change reflected in the next release.
|
1.0
|
Make DedicatedHostGroup.properties.supportAutomaticPlacement default to false - **Resource Provider**
<!--- What is the Azure resource provider your feature is part of? --->
Microsoft.Compute
**Description of Feature or Work Requested**
<!--- Provide a brief description of the feature or work requested. A link to conceptual documentation may be helpful too. --->
The initial plan was to default DedicatedHostGroup.properties.supportAutomaticPlacement to true, but recently we have agreed to default the value to false as we GA. The change to default to false has already been made on the service side.
**Minimum API Version Required**
<!--- What is the minimum API version of your service required to implement your feature? --->
2020-06-01
**Swagger Link**
<!--- Provide a link to the location of your feature(s) in the REST API specs repo. If your feature(s) has corresponding commit or pull request in the REST API specs repo, provide them. This should be on the master branch of the REST API specs repo. --->
https://github.com/Azure/azure-rest-api-specs/pull/11697
**Target Date**
<!--- If you have a target date for release of this feature/work, please provide it. While we can't guarantee these dates,
it will help us prioritize your request against other requests. --->
The feature is expected to GA on December 10. While having a wrong description is not the biggest issue, it would be good to have this change reflected in the next release.
|
non_test
|
make dedicatedhostgroup properties supportautomaticplacement default to false resource provider microsoft compute description of feature or work requested the initial plan was to default dedicatedhostgroup properties supportautomaticplacement to true but recently we have agreed to default the value to false as we ga the change to default to false has already been made on the service side minimum api version required swagger link target date if you have a target date for release of this feature work please provide it while we can t guarantee these dates it will help us prioritize your request against other requests the feature is expected to ga on december while having a wrong description is not the biggest issue it would be good to have this change reflected in the next release
| 0
|
181,948
| 6,665,596,323
|
IssuesEvent
|
2017-10-03 02:26:16
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
es.savefrom.net - see bug description
|
browser-firefox priority-important status-needstriage
|
<!-- @browser: Firefox 57.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:57.0) Gecko/20100101 Firefox/57.0 -->
<!-- @reported_with: web -->
**URL**: https://es.savefrom.net/
**Browser / Version**: Firefox 57.0
**Operating System**: Windows 10
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: in the last item in https://es.savefrom.net/faq.php, rapidshare.com appears but cannot load
**Steps to Reproduce**:
With https://www.alexa.com/topsites/countries/GT
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
es.savefrom.net - see bug description - <!-- @browser: Firefox 57.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:57.0) Gecko/20100101 Firefox/57.0 -->
<!-- @reported_with: web -->
**URL**: https://es.savefrom.net/
**Browser / Version**: Firefox 57.0
**Operating System**: Windows 10
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: in the last item in https://es.savefrom.net/faq.php, rapidshare.com appears but cannot load
**Steps to Reproduce**:
With https://www.alexa.com/topsites/countries/GT
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
es savefrom net see bug description url browser version firefox operating system windows tested another browser no problem type something else description in the last item in rapidshare com appears but cannot load steps to reproduce with from with ❤️
| 0
|
25,928
| 11,233,480,722
|
IssuesEvent
|
2020-01-09 01:23:33
|
TIBCOSoftware/bw6-plugin-maven
|
https://api.github.com/repos/TIBCOSoftware/bw6-plugin-maven
|
opened
|
CVE-2019-17571 (High) detected in log4j-1.2.12.jar
|
security vulnerability
|
## CVE-2019-17571 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-1.2.12.jar</b></p></summary>
<p>null</p>
<p>Path to dependency file: /tmp/ws-scm/bw6-plugin-maven/Source/bw6-maven-plugin/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/log4j/log4j/1.2.12/log4j-1.2.12.jar</p>
<p>
Dependency Hierarchy:
- maven-compiler-plugin-3.3.jar (Root Library)
- plexus-container-default-1.5.5.jar
- xbean-reflect-3.4.jar
- :x: **log4j-1.2.12.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Included in Log4j 1.2 is a SocketServer class that is vulnerable to deserialization of untrusted data, which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget while listening to untrusted network traffic for log data. This affects Log4j 1.2 versions up to 1.2.17.
<p>Publish Date: 2019-12-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17571>CVE-2019-17571</a></p>
</p>
</details>
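For context, a hedged sketch of how such a log listener is typically started (the jar path, port, and config file names are placeholders); any host running this while accepting untrusted traffic on that port is exposed:
```bash
# hypothetical invocation of the log4j 1.2 SocketServer main class,
# which deserializes logging events received over the network
java -cp log4j-1.2.12.jar org.apache.log4j.net.SocketServer \
  4712 server.properties ./config-dir
```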
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"log4j","packageName":"log4j","packageVersion":"1.2.12","isTransitiveDependency":true,"dependencyTree":"org.apache.maven.plugins:maven-compiler-plugin:3.3;org.codehaus.plexus:plexus-container-default:1.5.5;org.apache.xbean:xbean-reflect:3.4;log4j:log4j:1.2.12","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2019-17571","vulnerabilityDetails":"Included in Log4j 1.2 is a SocketServer class that is vulnerable to deserialization of untrusted data which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget when listening to untrusted network traffic for log data. This affects Log4j versions up to 1.2 up to 1.2.17.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17571","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2019-17571 (High) detected in log4j-1.2.12.jar - ## CVE-2019-17571 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-1.2.12.jar</b></p></summary>
<p>null</p>
<p>Path to dependency file: /tmp/ws-scm/bw6-plugin-maven/Source/bw6-maven-plugin/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/log4j/log4j/1.2.12/log4j-1.2.12.jar</p>
<p>
Dependency Hierarchy:
- maven-compiler-plugin-3.3.jar (Root Library)
- plexus-container-default-1.5.5.jar
- xbean-reflect-3.4.jar
- :x: **log4j-1.2.12.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Included in Log4j 1.2 is a SocketServer class that is vulnerable to deserialization of untrusted data, which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget while listening to untrusted network traffic for log data. This affects Log4j versions 1.2 up to 1.2.17.
<p>Publish Date: 2019-12-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17571>CVE-2019-17571</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"log4j","packageName":"log4j","packageVersion":"1.2.12","isTransitiveDependency":true,"dependencyTree":"org.apache.maven.plugins:maven-compiler-plugin:3.3;org.codehaus.plexus:plexus-container-default:1.5.5;org.apache.xbean:xbean-reflect:3.4;log4j:log4j:1.2.12","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2019-17571","vulnerabilityDetails":"Included in Log4j 1.2 is a SocketServer class that is vulnerable to deserialization of untrusted data which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget when listening to untrusted network traffic for log data. This affects Log4j versions up to 1.2 up to 1.2.17.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17571","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve high detected in jar cve high severity vulnerability vulnerable library jar null path to dependency file tmp ws scm plugin maven source maven plugin pom xml path to vulnerable library root repository jar dependency hierarchy maven compiler plugin jar root library plexus container default jar xbean reflect jar x jar vulnerable library vulnerability details included in is a socketserver class that is vulnerable to deserialization of untrusted data which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget when listening to untrusted network traffic for log data this affects versions up to up to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails included in is a socketserver class that is vulnerable to deserialization of untrusted data which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget when listening to untrusted network traffic for log data this affects versions up to up to vulnerabilityurl
| 0
|
39,669
| 5,241,671,278
|
IssuesEvent
|
2017-01-31 16:14:48
|
ValveSoftware/Source-1-Games
|
https://api.github.com/repos/ValveSoftware/Source-1-Games
|
closed
|
Wrong Elf.
|
Counter-Strike Linux Need Retest
|
I have been having some problems with Counter Strike crashing; I have been getting this error message:
ERROR: ld.so: object '/home/loon/.local/share/Steam/ubuntu12_64/gameoverlayrenderer.so' from LD_PRELOAD cannot be preloaded (wrong ELF class: ELFCLASS64): ignored.
I've been to Steam support, but because I'm running Mint 17, which they don't support, they have pointed me here...
|
1.0
|
Wrong Elf. - I have been having some problems with Counter Strike crashing; I have been getting this error message:
ERROR: ld.so: object '/home/loon/.local/share/Steam/ubuntu12_64/gameoverlayrenderer.so' from LD_PRELOAD cannot be preloaded (wrong ELF class: ELFCLASS64): ignored.
I've been to Steam support, but because I'm running Mint 17, which they don't support, they have pointed me here...
|
test
|
wrong elf i have been having some problems with counter strike crashing i have been getting this error message error ld so object home loon local share steam gameoverlayrenderer so from ld preload cannot be preloaded wrong elf class ignored the i ve been to steam support but because i m running mint which they don t support they have pointed me here
| 1
|
9,340
| 3,036,757,898
|
IssuesEvent
|
2015-08-06 13:49:31
|
owncloud/client
|
https://api.github.com/repos/owncloud/client
|
closed
|
New big folder not in list of folders to choose for sync
|
bug gold-ticket ReadyToTest
|
### Expected behaviour
When there is a new shared folder on the server that is bigger than the confirmation limit then:
1) the client should notify the user
2) the user should be able to choose to sync or not sync the new folder
### Actual behaviour
1) The client notifies the user - good
2) The new folder does not appear in the dropdown list of known folders, so there is no way to select or unselect it for sync.
3) The "Apply" button seems the only thing the user can do at this point. After Apply is pressed the folder is synced down to the client. This rather defeats the whole point of this feature.

### Steps to reproduce
1. Add a folder to the server with files in it greater than the confirmation limit.
2. Share the folder with the user
3. Let the client find it and then the user can see the message about the new big folder...
### Server configuration
ownCloud version: 8.0
### Client configuration
Client version: 2.0.0-nightly 20150801 (build 5326)
Operating system: Windows10
OS language: English (US)
Installation path of client: default
|
1.0
|
New big folder not in list of folders to choose for sync - ### Expected behaviour
When there is a new shared folder on the server that is bigger than the confirmation limit then:
1) the client should notify the user
2) the user should be able to choose to sync or not sync the new folder
### Actual behaviour
1) The client notifies the user - good
2) The new folder does not appear in the dropdown list of known folders, so there is no way to select or unselect it for sync.
3) The "Apply" button seems the only thing the user can do at this point. After Apply is pressed the folder is synced down to the client. This rather defeats the whole point of this feature.

### Steps to reproduce
1. Add a folder to the server with files in it greater than the confirmation limit.
2. Share the folder with the user
3. Let the client find it and then the user can see the message about the new big folder...
### Server configuration
ownCloud version: 8.0
### Client configuration
Client version: 2.0.0-nightly 20150801 (build 5326)
Operating system: Windows10
OS language: English (US)
Installation path of client: default
|
test
|
new big folder not in list of folders to choose for sync expected behaviour when there is a new shared folder on the server that is bigger than the confirmation limit then the client should notify the user the user should be able to choose to sync or not sync the new folder actual behaviour the client notifies the user good the new folder does not appear in the dropdown list of known folders so there is no way to select or unselect it for sync the apply button seems the only thing the user can do at this point after apply is pressed the folder is synced down to the client this rather defeats the whole point of this feature steps to reproduce add a folder to the server with files in it greater than the confirmation limit share the folder with the user let the client ind it and then the user can see the message about the new big folder server configuration owncloud version client configuration client version nightly build operating system os language english us installation path of client default
| 1
|
203,073
| 23,123,510,590
|
IssuesEvent
|
2022-07-28 01:32:15
|
kapseliboi/Node-Data
|
https://api.github.com/repos/kapseliboi/Node-Data
|
closed
|
CVE-2021-35065 (High) detected in glob-parent-3.1.0.tgz, glob-parent-2.0.0.tgz - autoclosed
|
security vulnerability
|
## CVE-2021-35065 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>glob-parent-3.1.0.tgz</b>, <b>glob-parent-2.0.0.tgz</b></p></summary>
<p>
<details><summary><b>glob-parent-3.1.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent directory path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- gulp-nodemon-2.0.6.tgz (Root Library)
- nodemon-1.18.9.tgz
- chokidar-2.0.4.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>glob-parent-2.0.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/glob-base/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- gulp-typescript-2.12.0.tgz (Root Library)
- vinyl-fs-2.2.1.tgz
- glob-stream-5.3.5.tgz
- micromatch-2.3.11.tgz
- parse-glob-3.0.4.tgz
- glob-base-0.3.0.tgz
- :x: **glob-parent-2.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/Node-Data/commit/289c77565fc637d4c0e4bf4a9a1e81df96cd190a">289c77565fc637d4c0e4bf4a9a1e81df96cd190a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package glob-parent before 6.0.1 is vulnerable to Regular Expression Denial of Service (ReDoS)
<p>Publish Date: 2021-06-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35065>CVE-2021-35065</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-cj88-88mr-972w">https://github.com/advisories/GHSA-cj88-88mr-972w</a></p>
<p>Release Date: 2021-06-22</p>
<p>Fix Resolution: glob-parent - 6.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
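As a hedged illustration (not part of the advisory itself): glob-parent exposes a single `globParent(str)` function, and versions before 6.0.1 ran an enclosure regex whose backtracking cost grows super-linearly on long runs of unbalanced braces. The sketch below is a minimal timing demonstration, assuming a pre-6.0.1 glob-parent is installed locally; the payload and the `timeGlobParent` helper are illustrative, not taken from the advisory.

```ts
// Minimal ReDoS demonstration sketch (assumption: a pre-6.0.1 glob-parent is installed).
// Times globParent() on crafted unbalanced-brace input; in affected versions the
// elapsed time grows super-linearly with n, while in fixed versions it stays flat.
import globParent from "glob-parent"; // CommonJS module; assumes esModuleInterop

function timeGlobParent(input: string): number {
  const start = process.hrtime.bigint();
  globParent(input); // vulnerable versions spend their time backtracking here
  const end = process.hrtime.bigint();
  return Number(end - start) / 1e6; // elapsed milliseconds
}

for (const n of [5_000, 10_000, 20_000]) {
  const crafted = "{".repeat(n); // illustrative payload: many opens, no closing brace
  console.log(`n=${n}: ${timeGlobParent(crafted).toFixed(1)} ms`);
}
```

The actual remediation is the one the report already names: upgrade glob-parent to 6.0.1 or later, directly or through the gulp tooling that pulls it in.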
|
True
|
CVE-2021-35065 (High) detected in glob-parent-3.1.0.tgz, glob-parent-2.0.0.tgz - autoclosed - ## CVE-2021-35065 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>glob-parent-3.1.0.tgz</b>, <b>glob-parent-2.0.0.tgz</b></p></summary>
<p>
<details><summary><b>glob-parent-3.1.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent directory path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- gulp-nodemon-2.0.6.tgz (Root Library)
- nodemon-1.18.9.tgz
- chokidar-2.0.4.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>glob-parent-2.0.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-2.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/glob-base/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- gulp-typescript-2.12.0.tgz (Root Library)
- vinyl-fs-2.2.1.tgz
- glob-stream-5.3.5.tgz
- micromatch-2.3.11.tgz
- parse-glob-3.0.4.tgz
- glob-base-0.3.0.tgz
- :x: **glob-parent-2.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/Node-Data/commit/289c77565fc637d4c0e4bf4a9a1e81df96cd190a">289c77565fc637d4c0e4bf4a9a1e81df96cd190a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package glob-parent before 6.0.1 is vulnerable to Regular Expression Denial of Service (ReDoS)
<p>Publish Date: 2021-06-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35065>CVE-2021-35065</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-cj88-88mr-972w">https://github.com/advisories/GHSA-cj88-88mr-972w</a></p>
<p>Release Date: 2021-06-22</p>
<p>Fix Resolution: glob-parent - 6.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in glob parent tgz glob parent tgz autoclosed cve high severity vulnerability vulnerable libraries glob parent tgz glob parent tgz glob parent tgz strips glob magic from a string to provide the parent directory path library home page a href path to dependency file package json path to vulnerable library node modules glob parent package json dependency hierarchy gulp nodemon tgz root library nodemon tgz chokidar tgz x glob parent tgz vulnerable library glob parent tgz strips glob magic from a string to provide the parent path library home page a href path to dependency file package json path to vulnerable library node modules glob base node modules glob parent package json dependency hierarchy gulp typescript tgz root library vinyl fs tgz glob stream tgz micromatch tgz parse glob tgz glob base tgz x glob parent tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package glob parent before are vulnerable to regular expression denial of service redos publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution glob parent step up your open source security game with mend
| 0
|
370,990
| 10,959,648,012
|
IssuesEvent
|
2019-11-27 11:52:59
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.powershow.com - site is not usable
|
browser-firefox engine-gecko ml-needsdiagnosis-false ml-probability-high priority-normal
|
<!-- @browser: Firefox 71.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:71.0) Gecko/20100101 Firefox/71.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: http://www.powershow.com/help
**Browser / Version**: Firefox 71.0
**Operating System**: Windows 7
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: it's not working.
**Steps to Reproduce**:
it's not signing up.
[](https://webcompat.com/uploads/2019/11/7931398f-6248-4ef3-8785-28201afb423b.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20191118154140</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
<p>Console Messages:</p>
<pre>
[{'level': 'warn', 'log': ['Request to access cookie or storage on https://stats.g.doubleclick.net/r/collect?v=1&aip=1&t=dc&_r=3&tid=UA-2610266-2&cid=1549349711.1574245857&jid=1844112312&_gid=416800190.1574851680&gjid=2092196307&_v=j79&z=1172690811 was blocked because it came from a tracker and content blocking is enabled.'], 'uri': 'http://www.powershow.com/help', 'pos': '0:0'}]
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.powershow.com - site is not usable - <!-- @browser: Firefox 71.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:71.0) Gecko/20100101 Firefox/71.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: http://www.powershow.com/help
**Browser / Version**: Firefox 71.0
**Operating System**: Windows 7
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: it's not working.
**Steps to Reproduce**:
it's not signing up.
[](https://webcompat.com/uploads/2019/11/7931398f-6248-4ef3-8785-28201afb423b.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20191118154140</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
<p>Console Messages:</p>
<pre>
[{'level': 'warn', 'log': ['Request to access cookie or storage on https://stats.g.doubleclick.net/r/collect?v=1&aip=1&t=dc&_r=3&tid=UA-2610266-2&cid=1549349711.1574245857&jid=1844112312&_gid=416800190.1574851680&gjid=2092196307&_v=j79&z=1172690811 was blocked because it came from a tracker and content blocking is enabled.'], 'uri': 'http://www.powershow.com/help', 'pos': '0:0'}]
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
site is not usable url browser version firefox operating system windows tested another browser no problem type site is not usable description it s not working steps to reproduce it s not sing up browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false console messages uri pos from with ❤️
| 0
|
26,763
| 27,168,672,830
|
IssuesEvent
|
2023-02-17 17:20:39
|
neurobagel/annotation_tool
|
https://api.github.com/repos/neurobagel/annotation_tool
|
closed
|
Set up a workflow to run the linter on new pull requests
|
importance:medium maintenance:usability
|
Set up a workflow to run eslint on PRs to catch any issues in case eslint wasn't run before committing.
|
True
|
Set up a workflow to run the linter on new pull requests - Set up a workflow to run eslint on PRs to catch any issues in case eslint wasn't run before committing.
|
non_test
|
set up a workflow to run the linter on new pull requests set up a workflow to run eslint on prs to catch any issues in case eslint wasn t run before committing
| 0
|
170,606
| 26,989,073,648
|
IssuesEvent
|
2023-02-09 18:19:48
|
MusicAsLanguage/mobileapp
|
https://api.github.com/repos/MusicAsLanguage/mobileapp
|
opened
|
Accessibility: Scrub Accessibility Insights for additional design and engineering work we should do
|
accessibility design
|
https://accessibilityinsights.io/downloads/
|
1.0
|
Accessibility: Scrub Accessibility Insights for additional design and engineering work we should do - https://accessibilityinsights.io/downloads/
|
non_test
|
accessibility scrub accessibility insights for additional design and engineering work we should do
| 0
|
161,416
| 20,153,987,821
|
IssuesEvent
|
2022-02-09 14:56:11
|
kapseliboi/watch-rtp-play
|
https://api.github.com/repos/kapseliboi/watch-rtp-play
|
opened
|
CVE-2022-0235 (Medium) detected in node-fetch-2.6.1.tgz
|
security vulnerability
|
## CVE-2022-0235 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-fetch-2.6.1.tgz</b></p></summary>
<p>A light-weight module that brings window.fetch to node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.1.tgz">https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/node-fetch/package.json</p>
<p>
Dependency Hierarchy:
- semantic-release-17.4.7.tgz (Root Library)
- github-7.2.3.tgz
- rest-18.9.1.tgz
- core-3.5.1.tgz
- request-5.6.1.tgz
- :x: **node-fetch-2.6.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/watch-rtp-play/commit/64394d795c87a969ce2025c813a8bc494318d8b2">64394d795c87a969ce2025c813a8bc494318d8b2</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
node-fetch is vulnerable to Exposure of Sensitive Information to an Unauthorized Actor
<p>Publish Date: 2022-01-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0235>CVE-2022-0235</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-r683-j2x4-v87g">https://github.com/advisories/GHSA-r683-j2x4-v87g</a></p>
<p>Release Date: 2022-01-16</p>
<p>Fix Resolution: node-fetch - 2.6.7,3.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
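Beyond upgrading to 2.6.7/3.1.1 as the advisory suggests, a hedged interim sketch of the underlying fix is shown below: follow redirects manually and drop the `Authorization` header whenever a hop crosses origins, which is the exposure CVE-2022-0235 describes. `fetchNoCrossOriginAuthLeak` is an illustrative name, not a node-fetch API, and the sketch assumes plain-object request headers.

```ts
// Hedged mitigation sketch (assumes the node-fetch v2 API): follow redirects
// manually and strip the Authorization header whenever the redirect target is
// on a different origin than the request that produced it.
import fetch, { RequestInit, Response } from "node-fetch";

async function fetchNoCrossOriginAuthLeak(
  url: string,
  init: RequestInit = {},
  maxRedirects = 5
): Promise<Response> {
  let current = url;
  // Assumption: headers were passed as a plain object, not a Headers instance.
  const headers = { ...(init.headers as Record<string, string> | undefined) };

  for (let hop = 0; hop <= maxRedirects; hop++) {
    const res = await fetch(current, { ...init, headers, redirect: "manual" });
    const location = res.headers.get("location");
    if (res.status < 300 || res.status >= 400 || !location) {
      return res; // not a redirect: we are done
    }
    const next = new URL(location, current).toString();
    if (new URL(next).origin !== new URL(current).origin) {
      delete headers["Authorization"]; // never forward credentials cross-origin
      delete headers["authorization"];
    }
    current = next;
  }
  throw new Error(`too many redirects for ${url}`);
}
```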
|
True
|
CVE-2022-0235 (Medium) detected in node-fetch-2.6.1.tgz - ## CVE-2022-0235 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-fetch-2.6.1.tgz</b></p></summary>
<p>A light-weight module that brings window.fetch to node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.1.tgz">https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/node-fetch/package.json</p>
<p>
Dependency Hierarchy:
- semantic-release-17.4.7.tgz (Root Library)
- github-7.2.3.tgz
- rest-18.9.1.tgz
- core-3.5.1.tgz
- request-5.6.1.tgz
- :x: **node-fetch-2.6.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/watch-rtp-play/commit/64394d795c87a969ce2025c813a8bc494318d8b2">64394d795c87a969ce2025c813a8bc494318d8b2</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
node-fetch is vulnerable to Exposure of Sensitive Information to an Unauthorized Actor
<p>Publish Date: 2022-01-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0235>CVE-2022-0235</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-r683-j2x4-v87g">https://github.com/advisories/GHSA-r683-j2x4-v87g</a></p>
<p>Release Date: 2022-01-16</p>
<p>Fix Resolution: node-fetch - 2.6.7,3.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in node fetch tgz cve medium severity vulnerability vulnerable library node fetch tgz a light weight module that brings window fetch to node js library home page a href path to dependency file package json path to vulnerable library node modules node fetch package json dependency hierarchy semantic release tgz root library github tgz rest tgz core tgz request tgz x node fetch tgz vulnerable library found in head commit a href found in base branch master vulnerability details node fetch is vulnerable to exposure of sensitive information to an unauthorized actor publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node fetch step up your open source security game with whitesource
| 0
|
17,391
| 3,610,039,098
|
IssuesEvent
|
2016-02-05 02:08:53
|
palantir/tslint
|
https://api.github.com/repos/palantir/tslint
|
closed
|
Refactor test suites
|
Domain: Unit Testing Resolution: Fixed Type: Refactor
|
The way we write our test suites is inconsistent:
### `noUnusedVariableRuleTests.ts`
```ts
it("restricts unused class members", () => {
var fileName = "rules/nounusedvariable-class.test.ts";
var Rule = Lint.Test.getRule("no-unused-variable");
var failure1 = Lint.Test.createFailuresOnFile(fileName, Rule.FAILURE_STRING + "'z2'")([2, 13], [2, 15]);
var failure2 = Lint.Test.createFailuresOnFile(fileName, Rule.FAILURE_STRING + "'mfunc4'")([18, 13], [18, 19]);
var actualFailures = Lint.Test.applyRuleOnFile(fileName, Rule);
assert.lengthOf(actualFailures, 2);
Lint.Test.assertContainsFailure(actualFailures, failure1);
Lint.Test.assertContainsFailure(actualFailures, failure2);
});
```
### `noDuplicateKeyRuleTests.ts`
```ts
it("forbids duplicate keys in object literals", () => {
const fileName = "rules/dupkey.test.ts";
const NoDuplicateKeyRule = Lint.Test.getRule("no-duplicate-key");
const failureString = NoDuplicateKeyRule.FAILURE_STRING;
const actualFailures = Lint.Test.applyRuleOnFile(fileName, NoDuplicateKeyRule);
const createFailure1 = Lint.Test.createFailuresOnFile(fileName, failureString + "axa'");
const createFailure2 = Lint.Test.createFailuresOnFile(fileName, failureString + "bd'");
const createFailure3 = Lint.Test.createFailuresOnFile(fileName, failureString + "duplicated'");
const expectedFailures = [
createFailure1([10, 5], [10, 8]),
createFailure2([13, 5], [13, 7]),
createFailure1([14, 5], [14, 8]),
createFailure3([31, 5], [31, 15])
];
Lint.Test.assertFailuresEqual(actualFailures, expectedFailures);
});
```
### `whitespaceRuleTests.ts`
```ts
const createFailure = Lint.Test.createFailuresOnFile(fileName, WhitespaceRule.FAILURE_STRING);
...
it("enforces whitespace in variable definitions", () => {
const expectedFailures = [
createFailure([11, 10], [11, 11]),
createFailure([11, 11], [11, 12]),
createFailure([13, 11], [13, 12])
];
expectedFailures.forEach((failure) => {
Lint.Test.assertContainsFailure(actualFailures, failure);
});
});
```
-------
I like the third style shown here, from `whitespaceRuleTests.ts`; it's the most concise. Now that we have TS 1.5 syntax, we should even be able to write
```ts
for (const failure of expectedFailures) {
Lint.Test.assertContainsFailure(actualFailures, failure);
}
```
For cases where you need to create failures with different failure strings, you can do something like:
```ts
const createFailure = (suffix) => Lint.Test.createFailuresOnFile(fileName, WhitespaceRule.FAILURE_STRING + suffix);
...
it("enforces whitespace in variable definitions", () => {
const expectedFailures = [
createFailure("foo")([11, 10], [11, 11]),
createFailure("bar")([11, 11], [11, 12]),
createFailure("baz")([13, 11], [13, 12])
];
for (const failure of expectedFailures) {
Lint.Test.assertContainsFailure(actualFailures, failure);
}
});
```
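To make the suggestion above concrete, here is a minimal sketch of lifting the suffix-based factory into a shared helper so every suite can use the concise style. `makeFailureFactory` is an illustrative name, not an existing `Lint.Test` API, and `fileName`/`actualFailures` are assumed to come from the surrounding suite as in the examples above.

```ts
// Illustrative helper (not an existing Lint.Test API): wraps
// Lint.Test.createFailuresOnFile so suites only supply the message suffix.
const makeFailureFactory = (fileName: string, Rule: any) =>
    (suffix = "") => Lint.Test.createFailuresOnFile(fileName, Rule.FAILURE_STRING + suffix);

it("enforces whitespace in variable definitions", () => {
    const createFailure = makeFailureFactory(fileName, WhitespaceRule);
    const expectedFailures = [
        createFailure()([11, 10], [11, 11]),
        createFailure("foo")([13, 11], [13, 12])
    ];
    for (const failure of expectedFailures) {
        Lint.Test.assertContainsFailure(actualFailures, failure);
    }
});
```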
|
1.0
|
Refactor test suites - The way we write our test suites is inconsistent:
### `noUnusedVariableRuleTests.ts`
```ts
it("restricts unused class members", () => {
var fileName = "rules/nounusedvariable-class.test.ts";
var Rule = Lint.Test.getRule("no-unused-variable");
var failure1 = Lint.Test.createFailuresOnFile(fileName, Rule.FAILURE_STRING + "'z2'")([2, 13], [2, 15]);
var failure2 = Lint.Test.createFailuresOnFile(fileName, Rule.FAILURE_STRING + "'mfunc4'")([18, 13], [18, 19]);
var actualFailures = Lint.Test.applyRuleOnFile(fileName, Rule);
assert.lengthOf(actualFailures, 2);
Lint.Test.assertContainsFailure(actualFailures, failure1);
Lint.Test.assertContainsFailure(actualFailures, failure2);
});
```
### `noDuplicateKeyRuleTests.ts`
```ts
it("forbids duplicate keys in object literals", () => {
const fileName = "rules/dupkey.test.ts";
const NoDuplicateKeyRule = Lint.Test.getRule("no-duplicate-key");
const failureString = NoDuplicateKeyRule.FAILURE_STRING;
const actualFailures = Lint.Test.applyRuleOnFile(fileName, NoDuplicateKeyRule);
const createFailure1 = Lint.Test.createFailuresOnFile(fileName, failureString + "axa'");
const createFailure2 = Lint.Test.createFailuresOnFile(fileName, failureString + "bd'");
const createFailure3 = Lint.Test.createFailuresOnFile(fileName, failureString + "duplicated'");
const expectedFailures = [
createFailure1([10, 5], [10, 8]),
createFailure2([13, 5], [13, 7]),
createFailure1([14, 5], [14, 8]),
createFailure3([31, 5], [31, 15])
];
Lint.Test.assertFailuresEqual(actualFailures, expectedFailures);
});
```
### `whitespaceRuleTests.ts`
```ts
const createFailure = Lint.Test.createFailuresOnFile(fileName, WhitespaceRule.FAILURE_STRING);
...
it("enforces whitespace in variable definitions", () => {
const expectedFailures = [
createFailure([11, 10], [11, 11]),
createFailure([11, 11], [11, 12]),
createFailure([13, 11], [13, 12])
];
expectedFailures.forEach((failure) => {
Lint.Test.assertContainsFailure(actualFailures, failure);
});
});
```
-------
I like the third style shown here, from `whitespaceRuleTests.ts`; it's the most concise. Now that we have TS 1.5 syntax, we should even be able to write
```ts
for (const failure of expectedFailures) {
Lint.Test.assertContainsFailure(actualFailures, failure);
}
```
For cases where you need to create failures with different failure strings, you can do something like:
```ts
const createFailure = (suffix) => Lint.Test.createFailuresOnFile(fileName, WhitespaceRule.FAILURE_STRING + suffix);
...
it("enforces whitespace in variable definitions", () => {
const expectedFailures = [
createFailure("foo")([11, 10], [11, 11]),
createFailure("bar")([11, 11], [11, 12]),
createFailure("baz")([13, 11], [13, 12])
];
for (const failure of expectedFailures) {
Lint.Test.assertContainsFailure(actualFailures, failure);
}
});
```
|
test
|
refactor test suites the way we write our test suites is inconsistent nounusedvariableruletests ts ts it restricts unused class members var filename rules nounusedvariable class test ts var rule lint test getrule no unused variable var lint test createfailuresonfile filename rule failure string var lint test createfailuresonfile filename rule failure string var actualfailures lint test applyruleonfile filename rule assert lengthof actualfailures lint test assertcontainsfailure actualfailures lint test assertcontainsfailure actualfailures noduplicatekeyruletests ts ts it forbids duplicate keys in object literals const filename rules dupkey test ts const noduplicatekeyrule lint test getrule no duplicate key const failurestring noduplicatekeyrule failure string const actualfailures lint test applyruleonfile filename noduplicatekeyrule const lint test createfailuresonfile filename failurestring axa const lint test createfailuresonfile filename failurestring bd const lint test createfailuresonfile filename failurestring duplicated const expectedfailures lint test assertfailuresequal actualfailures expectedfailures whitespaceruletests ts ts const createfailure lint test createfailuresonfile filename whitespacerule failure string it enforces whitespace in variable definitions const expectedfailures createfailure createfailure createfailure expectedfailures foreach failure lint test assertcontainsfailure actualfailures failure i like the third style shown here from whitespaceruletests ts it s the most concise now that we have ts syntax we should even be able to write ts for const failure of expectedfailures lint test assertcontainsfailure actualfailures failure for cases where you need to create failures with different failure strings you can do something like ts const createfailure suffix lint test createfailuresonfile filename whitespacerule failure string suffix it enforces whitespace in variable definitions const expectedfailures createfailure foo createfailure bar createfailure baz for const failure of expectedfailures lint test assertcontainsfailure actualfailures failure
| 1
|
107,452
| 16,761,591,650
|
IssuesEvent
|
2021-06-13 22:24:58
|
gms-ws-demo/nibrs
|
https://api.github.com/repos/gms-ws-demo/nibrs
|
closed
|
CVE-2020-36184 (High) detected in multiple libraries - autoclosed
|
security vulnerability
|
## CVE-2020-36184 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.8.10.jar</b>, <b>jackson-databind-2.8.0.jar</b>, <b>jackson-databind-2.9.6.jar</b>, <b>jackson-databind-2.9.8.jar</b>, <b>jackson-databind-2.9.5.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.8.10.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-fbi-service/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.10/jackson-databind-2.8.10.jar,nibrs/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/jackson-databind-2.8.10.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.10.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.0/jackson-databind-2.8.0.jar</p>
<p>
Dependency Hierarchy:
- tika-parsers-1.18.jar (Root Library)
- :x: **jackson-databind-2.8.0.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-staging-data/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,nibrs/web/nibrs-web/target/nibrs-web/WEB-INF/lib/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-summary-report-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.1.5.RELEASE.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-flatfile/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- tika-parsers-1.18.jar (Root Library)
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-demo/nibrs/commit/9fb1c19bd26c2113d1961640de126a33eacdc946">9fb1c19bd26c2113d1961640de126a33eacdc946</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.datasources.PerUserPoolDataSource.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36184>CVE-2020-36184</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2998">https://github.com/FasterXML/jackson-databind/issues/2998</a></p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.10","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.0","packageFilePaths":["/tools/nibrs-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.tika:tika-parsers:1.18;com.fasterxml.jackson.core:jackson-databind:2.8.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.6","packageFilePaths":["/tools/nibrs-staging-data/pom.xml","/tools/nibrs-summary-report/pom.xml","/tools/nibrs-route/pom.xml","/tools/nibrs-staging-data-common/pom.xml","/tools/nibrs-xmlfile/pom.xml","/tools/nibrs-validation/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.8","packageFilePaths":["/tools/nibrs-summary-report-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.1.5.RELEASE;org.springframework.boot:spring-boot-starter-json:2.1.5.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.9.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.5","packageFilePaths":["/tools/nibrs-flatfile/pom.xml","/tools/nibrs-validate-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.tika:tika-parsers:1.18;com.fasterxml.jackson.core:jackson-databind:2.9.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-36184","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.datasources.PerUserPoolDataSource.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36184","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-36184 (High) detected in multiple libraries - autoclosed - ## CVE-2020-36184 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.8.10.jar</b>, <b>jackson-databind-2.8.0.jar</b>, <b>jackson-databind-2.9.6.jar</b>, <b>jackson-databind-2.9.8.jar</b>, <b>jackson-databind-2.9.5.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.8.10.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-fbi-service/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.10/jackson-databind-2.8.10.jar,nibrs/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/jackson-databind-2.8.10.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.10.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.0/jackson-databind-2.8.0.jar</p>
<p>
Dependency Hierarchy:
- tika-parsers-1.18.jar (Root Library)
- :x: **jackson-databind-2.8.0.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-staging-data/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,nibrs/web/nibrs-web/target/nibrs-web/WEB-INF/lib/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-summary-report-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.1.5.RELEASE.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-flatfile/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- tika-parsers-1.18.jar (Root Library)
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-demo/nibrs/commit/9fb1c19bd26c2113d1961640de126a33eacdc946">9fb1c19bd26c2113d1961640de126a33eacdc946</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.datasources.PerUserPoolDataSource.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36184>CVE-2020-36184</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2998">https://github.com/FasterXML/jackson-databind/issues/2998</a></p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.10","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.0","packageFilePaths":["/tools/nibrs-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.tika:tika-parsers:1.18;com.fasterxml.jackson.core:jackson-databind:2.8.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.6","packageFilePaths":["/tools/nibrs-staging-data/pom.xml","/tools/nibrs-summary-report/pom.xml","/tools/nibrs-route/pom.xml","/tools/nibrs-staging-data-common/pom.xml","/tools/nibrs-xmlfile/pom.xml","/tools/nibrs-validation/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.8","packageFilePaths":["/tools/nibrs-summary-report-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.1.5.RELEASE;org.springframework.boot:spring-boot-starter-json:2.1.5.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.9.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.5","packageFilePaths":["/tools/nibrs-flatfile/pom.xml","/tools/nibrs-validate-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.tika:tika-parsers:1.18;com.fasterxml.jackson.core:jackson-databind:2.9.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-36184","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.datasources.PerUserPoolDataSource.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36184","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve high detected in multiple libraries autoclosed cve high severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs fbi service pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar nibrs tools nibrs fbi service target nibrs fbi service web inf lib jackson databind jar dependency hierarchy x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs common pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy tika parsers jar root library x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs staging data pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar nibrs web nibrs web target nibrs web web inf lib jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs summary report common pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs flatfile pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy tika parsers jar root library x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp datasources peruserpooldatasource publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href 
suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency true dependencytree org apache tika tika parsers com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency true dependencytree org springframework boot spring boot starter web release org springframework boot spring boot starter json release com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency true dependencytree org apache tika tika parsers com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind basebranches vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp datasources peruserpooldatasource vulnerabilityurl
| 0
|
207,253
| 7,126,659,763
|
IssuesEvent
|
2018-01-20 13:09:19
|
zulip/zulip
|
https://api.github.com/repos/zulip/zulip
|
closed
|
bugdown: LaTeX shifts superscripts and other offset characters downwards.
|
area: markdown bug help wanted priority: high
|
To reproduce, send a message like
```
Subscripts and superscripts: $$a^X B^X a^x a_X B_X$$.
\frac: $$\frac{a}{b}$$, \over: $${a \over b}$$ and \bar: $$\bar{\theta}$$.
Integrals: $$\int_0^1$$.
This seems to be okay: $$a^{a^{a^a}}$$.
```
The frontend markdown processor is doing the right thing, but the backend markdown processor returns:

I would start by figuring out what is different between what the frontend and backend markdown returns.
http://zulip.readthedocs.io/en/latest/subsystems/markdown.html is useful reading.
|
1.0
|
bugdown: LaTeX shifts superscripts and other offset characters downwards. - To reproduce, send a message like
```
Subscripts and superscripts: $$a^X B^X a^x a_X B_X$$.
\frac: $$\frac{a}{b}$$, \over: $${a \over b}$$ and \bar: $$\bar{\theta}$$.
Integrals: $$\int_0^1$$.
This seems to be okay: $$a^{a^{a^a}}$$.
```
The frontend markdown processor is doing the right thing, but the backend markdown processor returns:

I would start by figuring out what is different between what the frontend and backend markdown returns.
http://zulip.readthedocs.io/en/latest/subsystems/markdown.html is useful reading.
|
non_test
|
bugdown latex shifts superscripts and other offset characters downwards to reproduce send a message like subscripts and superscripts a x b x a x a x b x frac frac a b over a over b and bar bar theta integrals int this seems to be okay a a a a the frontend markdown processor is doing the right thing but the backend markdown processor returns i would start by figuring out what is different between what the frontend and backend markdown returns is useful reading
| 0
|
190,389
| 14,544,158,782
|
IssuesEvent
|
2020-12-15 17:47:56
|
m-tosch/mu
|
https://api.github.com/repos/m-tosch/mu
|
closed
|
constructor for nested array initialization of Matrix
|
enhancement matrix testing
|
The Matrix constructor that takes an array should be able to take nested arrays representing the two dimensions.
e.g.
```cpp
Matrix<2,2,int> m{ {1,2}, {3,4} };
```
^as of now it looks like a variadic template constructor that takes std::arrays would be required (see the sketch after the test snippet below)
also write tests for it:
see the Vector tests for reference. The different initializations should all be made possible!
```cpp
// ConstructorFromArray
TypeParam obj{this->values};
// ConstructorFromArrayAssignment
TypeParam obj = this->values;
// ConstructorFromArrayAssignmentBraces
TypeParam obj = {this->values};
```
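A minimal, self-contained sketch of one way such a constructor could look — not mu's actual implementation, just an illustration assuming row-major `std::array` storage. A plain deduced variadic pack cannot accept braced lists like `{1,2}` (they have no type to deduce from), so the sketch uses an index sequence to stamp out exactly `Rows` parameters of the concrete type `std::array<T, Cols>`:
```cpp
#include <array>
#include <cstddef>
#include <iostream>
#include <utility>

// Maps any index to the row type; used only to repeat the parameter Rows times.
template <typename T, std::size_t /*unused*/>
using RowType = T;

template <std::size_t Rows, std::size_t Cols, typename T,
          typename Seq = std::make_index_sequence<Rows>>
class Matrix;

template <std::size_t Rows, std::size_t Cols, typename T, std::size_t... Is>
class Matrix<Rows, Cols, T, std::index_sequence<Is...>> {
 public:
  // One std::array<T, Cols> parameter per row, so {1,2} initializes a row.
  Matrix(RowType<std::array<T, Cols>, Is>... rows) : data_{{rows...}} {}

  const std::array<T, Cols>& operator[](std::size_t i) const { return data_[i]; }

 private:
  std::array<std::array<T, Cols>, Rows> data_;
};

int main() {
  Matrix<2, 2, int> m{{1, 2}, {3, 4}};             // the initialization from above
  std::cout << m[0][1] << " " << m[1][0] << "\n";  // prints "2 3"
}
```
Because storage is a nested `std::array`, the three initialization forms from the Vector tests (direct, assignment, assignment with braces) could likewise be supported by adding a non-explicit constructor taking `std::array<std::array<T, Cols>, Rows>` directly.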
|
1.0
|
constructor for nested array initialization of Matrix - The Matrix constructor that takes an array should be able to take nested arrays representing the two dimensions.
e.g.
```cpp
Matrix<2,2,int> m{ {1,2}, {3,4} };
```
^as of now it looks like a variadic template constructor that takes std::arrays would be required
also write tests for it:
see the Vector tests for reference. The different initializations should all be made possible!
```cpp
// ConstructorFromArray
TypeParam obj{this->values};
// ConstructorFromArrayAssignment
TypeParam obj = this->values;
// ConstructorFromArrayAssignmentBraces
TypeParam obj = {this->values};
```
|
test
|
constructor for nested array initialization of matrix the matrix constructor that takes an array should be able to take nested array representing the two dimensions e g cpp matrix m as of now it loos like a variadic template constructor that takes std arrays is be required also write tests for it vector tests for reference the different initializations should all be made possible cpp constructorfromarray typeparam obj this values constructorfromarrayassignment typeparam obj this values constructorfromarrayassignmentbraces typeparam obj this values
| 1
|
9,280
| 3,031,745,521
|
IssuesEvent
|
2015-08-05 01:43:19
|
servo/servo
|
https://api.github.com/repos/servo/servo
|
closed
|
Intermittent crash in /css21_dev/html4/counter-reset-increment-002.htm
|
A-testing I-crash P-linux
|
```
7:41.04 TEST_START: Thread-TestrunnerManager-7 /css21_dev/html4/counter-reset-increment-002.htm
7:55.92 CRASH: Thread-TestrunnerManager-7 pid:3987. Test:None. Minidump anaylsed:False. Signature:[/css21_dev/html4/counter-reset-increment-002.htm]
7:55.92 TEST_END: Thread-TestrunnerManager-7 CRASH, expected TIMEOUT
```
|
1.0
|
Intermittent crash in /css21_dev/html4/counter-reset-increment-002.htm - ```
7:41.04 TEST_START: Thread-TestrunnerManager-7 /css21_dev/html4/counter-reset-increment-002.htm
7:55.92 CRASH: Thread-TestrunnerManager-7 pid:3987. Test:None. Minidump anaylsed:False. Signature:[/css21_dev/html4/counter-reset-increment-002.htm]
7:55.92 TEST_END: Thread-TestrunnerManager-7 CRASH, expected TIMEOUT
```
|
test
|
intermittent crash in dev counter reset increment htm test start thread testrunnermanager dev counter reset increment htm crash thread testrunnermanager pid test none minidump anaylsed false signature test end thread testrunnermanager crash expected timeout
| 1
|
633,178
| 20,247,103,282
|
IssuesEvent
|
2022-02-14 14:41:12
|
Paul2497/Bootcamp-Activity-4.0
|
https://api.github.com/repos/Paul2497/Bootcamp-Activity-4.0
|
opened
|
No profile page for users
|
Category : Enhancement Category : UI/UX Priority : Low
|
Users will not have the capability of editing their account information and login credentials.
|
1.0
|
No profile page for users - Users will not have the capability of editing their account information and login credentials.
|
non_test
|
no profile page for users users will not have the capability of editing their account information and login credentials
| 0
|
34,277
| 4,896,592,436
|
IssuesEvent
|
2016-11-20 12:50:35
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
github.com/cockroachdb/cockroach/pkg/storage: TestReplicateQueueRebalance failed under stress
|
Robot test-failure
|
SHA: https://github.com/cockroachdb/cockroach/commits/509e36d94b447f0e11b69ba32a7e5f095b8b2057
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=52946
```
I161120 08:13:37.109750 31376 gossip/gossip.go:244 [n?] initial resolvers: []
W161120 08:13:37.109873 31376 gossip/gossip.go:1120 [n?] no resolvers found; use --join to specify a connected node
W161120 08:13:37.123744 31376 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161120 08:13:37.126216 31376 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161120 08:13:37.127334 31376 server/config.go:443 1 storage engine initialized
I161120 08:13:37.128634 31376 server/node.go:419 [n?] store [n0,s0] not bootstrapped
I161120 08:13:37.137197 31400 storage/replica_proposal.go:349 [s1,r1/1:/M{in-ax}] new range lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 411008h13m46.135197301s following replica {0 0 0} 1970-01-01 00:00:00 +0000 UTC 0s [physicalTime=2016-11-20 08:13:37.137059582 +0000 UTC]
I161120 08:13:37.140196 31376 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161120 08:13:37.141054 31376 server/node.go:348 [n?] **** cluster 0f6d8142-08da-4080-94d5-dc69eb94ac0d has been created
I161120 08:13:37.141105 31376 server/node.go:349 [n?] **** add additional nodes by specifying --join=127.0.0.1:44544
I161120 08:13:37.142130 31376 base/node_id.go:62 [n1] NodeID set to 1
I161120 08:13:37.145429 31376 storage/store.go:1188 [n1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I161120 08:13:37.145513 31376 server/node.go:432 [n1] initialized store [n1,s1]: {Capacity:536870912 Available:536870912 RangeCount:0 LeaseCount:0}
I161120 08:13:37.145624 31376 server/node.go:317 [n1] node ID 1 initialized
I161120 08:13:37.145786 31376 gossip/gossip.go:286 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:44544" > attrs:<> locality:<>
I161120 08:13:37.146422 31376 storage/stores.go:296 [n1] read 0 node addresses from persistent storage
I161120 08:13:37.146563 31376 server/node.go:562 [n1] connecting to gossip network to verify cluster ID...
I161120 08:13:37.148073 31376 server/node.go:582 [n1] node connected via gossip and verified as part of cluster "0f6d8142-08da-4080-94d5-dc69eb94ac0d"
I161120 08:13:37.148377 31376 server/node.go:367 [n1] node=1: started with [[]=] engine(s) and attributes []
I161120 08:13:37.149035 31376 server/server.go:630 [n1] starting https server at 127.0.0.1:53550
I161120 08:13:37.149076 31376 server/server.go:631 [n1] starting grpc/postgres server at 127.0.0.1:44544
I161120 08:13:37.149109 31376 server/server.go:632 [n1] advertising CockroachDB node at 127.0.0.1:44544
I161120 08:13:37.153611 31458 storage/split_queue.go:103 [n1,split,s1,r1/1:/M{in-ax}] splitting at keys [/Table/11/0 /Table/12/0 /Table/13/0 /Table/14/0]
I161120 08:13:37.157681 31458 storage/replica_command.go:2361 [n1,split,s1,r1/1:/M{in-ax}] initiating a split of this range at key /Table/11 [r2]
E161120 08:13:37.190301 31459 storage/queue.go:586 [n1,replicate,s1,r1/1:/{Min-Table/11}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.192009 31273 storage/queue.go:586 [n1,replicate,s1,r1/1:/{Min-Table/11}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.192700 31458 storage/queue.go:575 [n1,split,s1,r1/1:/{Min-Table/11}] unable to split [n1,s1,r1/1:/{Min-Table/11}] at key "/Table/12/0": key range /Table/12/0-/Table/12/0 outside of bounds of range /Min-/Max
I161120 08:13:37.198627 31458 storage/split_queue.go:103 [n1,split,s1,r2/1:/{Table/11-Max}] splitting at keys [/Table/12/0 /Table/13/0 /Table/14/0]
I161120 08:13:37.198913 31458 storage/replica_command.go:2361 [n1,split,s1,r2/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r3]
E161120 08:13:37.268013 31273 storage/queue.go:586 [n1,replicate,s1,r1/1:/{Min-Table/11}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
I161120 08:13:37.294963 31333 sql/event_log.go:95 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:44544} Attrs: Locality:} ClusterID:0f6d8142-08da-4080-94d5-dc69eb94ac0d StartedAt:1479629617148108066}
E161120 08:13:37.309260 31459 storage/queue.go:586 [n1,replicate,s1,r2/1:/Table/1{1-2}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.309412 31458 storage/queue.go:575 [n1,split,s1,r2/1:/Table/1{1-2}] unable to split [n1,s1,r2/1:/Table/1{1-2}] at key "/Table/13/0": key range /Table/13/0-/Table/13/0 outside of bounds of range /Table/11-/Max
I161120 08:13:37.309914 31458 storage/split_queue.go:103 [n1,split,s1,r3/1:/{Table/12-Max}] splitting at keys [/Table/13/0 /Table/14/0]
I161120 08:13:37.310128 31458 storage/replica_command.go:2361 [n1,split,s1,r3/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r4]
E161120 08:13:37.352594 31273 storage/queue.go:586 [n1,replicate,s1,r1/1:/{Min-Table/11}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.353542 31273 storage/queue.go:586 [n1,replicate,s1,r2/1:/Table/1{1-2}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.394232 31459 storage/queue.go:586 [n1,replicate,s1,r3/1:/Table/1{2-3}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.399887 31458 storage/queue.go:575 [n1,split,s1,r3/1:/Table/1{2-3}] unable to split [n1,s1,r3/1:/Table/1{2-3}] at key "/Table/14/0": key range /Table/14/0-/Table/14/0 outside of bounds of range /Table/12-/Max
I161120 08:13:37.400523 31458 storage/split_queue.go:103 [n1,split,s1,r4/1:/{Table/13-Max}] splitting at keys [/Table/14/0]
I161120 08:13:37.400813 31458 storage/replica_command.go:2361 [n1,split,s1,r4/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r5]
E161120 08:13:37.460967 31273 storage/queue.go:586 [n1,replicate,s1,r1/1:/{Min-Table/11}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.461581 31273 storage/queue.go:586 [n1,replicate,s1,r2/1:/Table/1{1-2}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.462148 31273 storage/queue.go:586 [n1,replicate,s1,r3/1:/Table/1{2-3}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.552940 31273 storage/queue.go:586 [n1,replicate,s1,r1/1:/{Min-Table/11}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.553969 31273 storage/queue.go:586 [n1,replicate,s1,r2/1:/Table/1{1-2}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.554484 31273 storage/queue.go:586 [n1,replicate,s1,r3/1:/Table/1{2-3}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.657745 31273 storage/queue.go:586 [n1,replicate,s1,r3/1:/Table/1{2-3}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.658328 31273 storage/queue.go:586 [n1,replicate,s1,r1/1:/{Min-Table/11}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.658776 31273 storage/queue.go:586 [n1,replicate,s1,r2/1:/Table/1{1-2}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.708508 31459 storage/queue.go:586 [n1,replicate,s1,r4/1:/Table/1{3-4}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.709178 31459 storage/queue.go:586 [n1,replicate,s1,r5/1:/{Table/14-Max}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.751621 31273 storage/queue.go:586 [n1,replicate,s1,r3/1:/Table/1{2-3}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.752255 31273 storage/queue.go:586 [n1,replicate,s1,r1/1:/{Min-Table/11}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.752884 31273 storage/queue.go:586 [n1,replicate,s1,r2/1:/Table/1{1-2}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.753483 31273 storage/queue.go:586 [n1,replicate,s1,r4/1:/Table/1{3-4}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.754058 31273 storage/queue.go:586 [n1,replicate,s1,r5/1:/{Table/14-Max}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
I161120 08:13:37.757227 31376 gossip/gossip.go:244 [n?] initial resolvers: [127.0.0.1:44544]
W161120 08:13:37.757334 31376 gossip/gossip.go:1122 [n?] no incoming or outgoing connections
W161120 08:13:37.801284 31376 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161120 08:13:37.804551 31376 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161120 08:13:37.814895 31376 server/config.go:443 1 storage engine initialized
I161120 08:13:37.816022 31376 server/node.go:419 [n?] store [n0,s0] not bootstrapped
I161120 08:13:37.816096 31376 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I161120 08:13:37.816174 31376 server/node.go:562 [n?] connecting to gossip network to verify cluster ID...
E161120 08:13:37.882514 31273 storage/queue.go:586 [n1,replicate,s1,r1/1:/{Min-Table/11}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.883144 31273 storage/queue.go:586 [n1,replicate,s1,r2/1:/Table/1{1-2}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.886141 31273 storage/queue.go:586 [n1,replicate,s1,r4/1:/Table/1{3-4}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.886610 31273 storage/queue.go:586 [n1,replicate,s1,r5/1:/{Table/14-Max}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.888261 31273 storage/queue.go:586 [n1,replicate,s1,r3/1:/Table/1{2-3}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
I161120 08:13:37.908703 31246 gossip/client.go:125 [n?] started gossip client to 127.0.0.1:44544
I161120 08:13:37.909792 31469 gossip/server.go:285 [n1] received gossip from unknown node
I161120 08:13:37.937695 31488 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I161120 08:13:37.939537 31376 server/node.go:582 [n?] node connected via gossip and verified as part of cluster "0f6d8142-08da-4080-94d5-dc69eb94ac0d"
I161120 08:13:37.953665 31376 kv/dist_sender.go:331 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
E161120 08:13:37.954454 31273 storage/queue.go:586 [n1,replicate,s1,r2/1:/Table/1{1-2}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.955398 31273 storage/queue.go:586 [n1,replicate,s1,r4/1:/Table/1{3-4}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.956824 31273 storage/queue.go:586 [n1,replicate,s1,r5/1:/{Table/14-Max}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.957413 31273 storage/queue.go:586 [n1,replicate,s1,r3/1:/Table/1{2-3}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E161120 08:13:37.957974 31273 storage/queue.go:586 [n1,replicate,s1,r1/1:/{Min-Table/11}] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
I161120 08:13:37.966253 31376 server/node.go:310 [n?] new node allocated ID 2
I161120 08:13:37.966443 31376 base/node_id.go:62 [n2] NodeID set to 2
I161120 08:13:37.966674 31376 gossip/gossip.go:286 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:56071" > attrs:<> locality:<>
I161120 08:13:37.967698 31376 server/node.go:367 [n2] node=2: started with [[]=] engine(s) and attributes []
I161120 08:13:37.968202 31376 server/server.go:630 [n2] starting https server at 127.0.0.1:34171
I161120 08:13:37.968246 31376 server/server.go:631 [n2] starting grpc/postgres server at 127.0.0.1:56071
I161120 08:13:37.968729 31376 server/server.go:632 [n2] advertising CockroachDB node at 127.0.0.1:56071
I161120 08:13:37.972511 31566 storage/stores.go:312 [n1] wrote 1 node addresses to persistent storage
I161120 08:13:38.006234 31554 server/node.go:543 [n2] bootstrapped store [n2,s2]
I161120 08:13:38.006707 31376 gossip/gossip.go:244 [n?] initial resolvers: [127.0.0.1:44544]
W161120 08:13:38.006887 31376 gossip/gossip.go:1122 [n?] no incoming or outgoing connections
I161120 08:13:38.008853 31273 storage/replica_raftstorage.go:453 [n1,replicate,s1,r2/1:/Table/1{1-2}] generated snapshot 22797f0a at index 23
W161120 08:13:38.059277 31376 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161120 08:13:38.063258 31557 sql/event_log.go:95 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:56071} Attrs: Locality:} ClusterID:0f6d8142-08da-4080-94d5-dc69eb94ac0d StartedAt:1479629617967058448}
I161120 08:13:38.094115 31376 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161120 08:13:38.105609 31376 server/config.go:443 1 storage engine initialized
I161120 08:13:38.123408 31376 server/node.go:419 [n?] store [n0,s0] not bootstrapped
I161120 08:13:38.134516 31376 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I161120 08:13:38.134619 31376 server/node.go:562 [n?] connecting to gossip network to verify cluster ID...
I161120 08:13:38.135047 31273 storage/store.go:3134 [n1,replicate,s1,r2/1:/Table/1{1-2}] streamed snapshot: kv pairs: 10, log entries: 13
I161120 08:13:38.136278 31700 storage/replica_raftstorage.go:612 [n2,s2,r2/?:{-}] applying preemptive snapshot at index 23 (id=22797f0a, encoded size=16759, 1 rocksdb batches, 13 log entries)
I161120 08:13:38.138364 31700 storage/replica_raftstorage.go:620 [n2,s2,r2/?:/Table/1{1-2}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I161120 08:13:38.140848 31273 storage/replica_command.go:3261 [n1,replicate,s1,r2/1:/Table/1{1-2}] change replicas: read existing descriptor range_id:2 start_key:"\223" end_key:"\224" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161120 08:13:38.175056 31273 storage/replica.go:2088 [n1,s1,r2/1:/Table/1{1-2}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I161120 08:13:38.187094 31673 storage/raft_transport.go:437 [n2] raft transport stream to node 1 established
I161120 08:13:38.191291 31459 storage/replica_raftstorage.go:453 [n1,replicate,s1,r1/1:/{Min-Table/11}] generated snapshot 37dd0924 at index 226
I161120 08:13:38.217673 31519 storage/replica_raftstorage.go:612 [n2,s2,r1/?:{-}] applying preemptive snapshot at index 226 (id=37dd0924, encoded size=180435, 1 rocksdb batches, 93 log entries)
I161120 08:13:38.226066 31519 storage/replica_raftstorage.go:620 [n2,s2,r1/?:/{Min-Table/11}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=4ms commit=1ms]
I161120 08:13:38.226944 31459 storage/store.go:3134 [n1,replicate,s1,r1/1:/{Min-Table/11}] streamed snapshot: kv pairs: 587, log entries: 93
I161120 08:13:38.228828 31459 storage/replica_command.go:3261 [n1,replicate,s1,r1/1:/{Min-Table/11}] change replicas: read existing descriptor range_id:1 start_key:"" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161120 08:13:38.230429 31692 gossip/client.go:125 [n?] started gossip client to 127.0.0.1:44544
I161120 08:13:38.231892 31716 gossip/server.go:285 [n1] received gossip from unknown node
I161120 08:13:38.234703 31592 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I161120 08:13:38.234913 31592 storage/stores.go:312 [n?] wrote 2 node addresses to persistent storage
I161120 08:13:38.235457 31376 server/node.go:582 [n?] node connected via gossip and verified as part of cluster "0f6d8142-08da-4080-94d5-dc69eb94ac0d"
I161120 08:13:38.258762 31376 kv/dist_sender.go:331 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I161120 08:13:38.277992 31376 server/node.go:310 [n?] new node allocated ID 3
I161120 08:13:38.280820 31376 base/node_id.go:62 [n3] NodeID set to 3
I161120 08:13:38.281418 31376 gossip/gossip.go:286 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:44208" > attrs:<> locality:<>
I161120 08:13:38.282186 31376 server/node.go:367 [n3] node=3: started with [[]=] engine(s) and attributes []
I161120 08:13:38.282885 31376 server/server.go:630 [n3] starting https server at 127.0.0.1:54811
I161120 08:13:38.282937 31376 server/server.go:631 [n3] starting grpc/postgres server at 127.0.0.1:44208
I161120 08:13:38.283000 31376 server/server.go:632 [n3] advertising CockroachDB node at 127.0.0.1:44208
I161120 08:13:38.300306 31601 storage/stores.go:312 [n1] wrote 2 node addresses to persistent storage
I161120 08:13:38.301695 31658 storage/stores.go:312 [n2] wrote 2 node addresses to persistent storage
I161120 08:13:38.340765 31376 gossip/gossip.go:244 [n?] initial resolvers: [127.0.0.1:44544]
W161120 08:13:38.340868 31376 gossip/gossip.go:1122 [n?] no incoming or outgoing connections
I161120 08:13:38.366980 31459 storage/replica_command.go:3261 [n1,replicate,s1,r1/1:/{Min-Table/11}] change replicas: read existing descriptor range_id:1 start_key:"" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
W161120 08:13:38.376300 31376 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161120 08:13:38.380336 31705 server/node.go:543 [n3] bootstrapped store [n3,s3]
I161120 08:13:38.380443 31376 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161120 08:13:38.381279 31376 server/config.go:443 1 storage engine initialized
I161120 08:13:38.385134 31376 server/node.go:419 [n?] store [n0,s0] not bootstrapped
I161120 08:13:38.385222 31376 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I161120 08:13:38.385339 31376 server/node.go:562 [n?] connecting to gossip network to verify cluster ID...
I161120 08:13:38.401065 31459 storage/replica.go:2088 [n1,s1,r1/1:/{Min-Table/11}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I161120 08:13:38.408275 31273 storage/replica_raftstorage.go:453 [n1,replicate,s1,r4/1:/Table/1{3-4}] generated snapshot b1d78390 at index 39
I161120 08:13:38.422089 31708 sql/event_log.go:95 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:44208} Attrs: Locality:} ClusterID:0f6d8142-08da-4080-94d5-dc69eb94ac0d StartedAt:1479629618281857043}
I161120 08:13:38.477667 31809 gossip/client.go:125 [n?] started gossip client to 127.0.0.1:44544
I161120 08:13:38.478875 31834 gossip/server.go:285 [n1] received gossip from unknown node
I161120 08:13:38.492976 31376 server/node.go:582 [n?] node connected via gossip and verified as part of cluster "0f6d8142-08da-4080-94d5-dc69eb94ac0d"
I161120 08:13:38.498252 31925 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I161120 08:13:38.498453 31925 storage/stores.go:312 [n?] wrote 2 node addresses to persistent storage
I161120 08:13:38.498650 31925 storage/stores.go:312 [n?] wrote 3 node addresses to persistent storage
I161120 08:13:38.513233 31897 storage/replica_raftstorage.go:612 [n3,s3,r4/?:{-}] applying preemptive snapshot at index 39 (id=b1d78390, encoded size=44465, 1 rocksdb batches, 29 log entries)
I161120 08:13:38.515628 31897 storage/replica_raftstorage.go:620 [n3,s3,r4/?:/Table/1{3-4}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I161120 08:13:38.516193 31273 storage/store.go:3134 [n1,replicate,s1,r4/1:/Table/1{3-4}] streamed snapshot: kv pairs: 59, log entries: 29
I161120 08:13:38.521465 31273 storage/replica_command.go:3261 [n1,replicate,s1,r4/1:/Table/1{3-4}] change replicas: read existing descriptor range_id:4 start_key:"\225" end_key:"\226" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161120 08:13:38.522326 31376 kv/dist_sender.go:331 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I161120 08:13:38.539459 31376 server/node.go:310 [n?] new node allocated ID 4
I161120 08:13:38.539596 31376 base/node_id.go:62 [n4] NodeID set to 4
I161120 08:13:38.539775 31376 gossip/gossip.go:286 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:52957" > attrs:<> locality:<>
I161120 08:13:38.540466 31376 server/node.go:367 [n4] node=4: started with [[]=] engine(s) and attributes []
I161120 08:13:38.541132 31376 server/server.go:630 [n4] starting https server at 127.0.0.1:58037
I161120 08:13:38.541177 31376 server/server.go:631 [n4] starting grpc/postgres server at 127.0.0.1:52957
I161120 08:13:38.541214 31376 server/server.go:632 [n4] advertising CockroachDB node at 127.0.0.1:52957
I161120 08:13:38.552760 31880 storage/stores.go:312 [n1] wrote 3 node addresses to persistent storage
I161120 08:13:38.555686 32002 storage/stores.go:312 [n2] wrote 3 node addresses to persistent storage
I161120 08:13:38.556091 31956 storage/stores.go:312 [n3] wrote 3 node addresses to persistent storage
I161120 08:13:38.656334 31376 gossip/gossip.go:244 [n?] initial resolvers: [127.0.0.1:44544]
W161120 08:13:38.656453 31376 gossip/gossip.go:1122 [n?] no incoming or outgoing connections
I161120 08:13:38.658498 31273 storage/replica_command.go:3261 [n1,replicate,s1,r4/1:/Table/1{3-4}] change replicas: read existing descriptor range_id:4 start_key:"\225" end_key:"\226" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161120 08:13:38.689193 31914 server/node.go:543 [n4] bootstrapped store [n4,s4]
I161120 08:13:38.713264 31917 sql/event_log.go:95 [n4] Event: "node_join", target: 4, info: {Descriptor:{NodeID:4 Address:{NetworkField:tcp AddressField:127.0.0.1:52957} Attrs: Locality:} ClusterID:0f6d8142-08da-4080-94d5-dc69eb94ac0d StartedAt:1479629618540182064}
W161120 08:13:38.718193 31376 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161120 08:13:38.720105 31376 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161120 08:13:38.721000 31376 server/config.go:443 1 storage engine initialized
I161120 08:13:38.722140 31376 server/node.go:419 [n?] store [n0,s0] not bootstrapped
I161120 08:13:38.722244 31376 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I161120 08:13:38.722328 31376 server/node.go:562 [n?] connecting to gossip network to verify cluster ID...
I161120 08:13:38.763385 31273 storage/replica.go:2088 [n1,s1,r4/1:/Table/1{3-4}] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2}]
I161120 08:13:38.768613 31273 storage/replica_raftstorage.go:453 [n1,replicate,s1,r5/1:/{Table/14-Max}] generated snapshot d1bd14bf at index 11
I161120 08:13:38.770911 31273 storage/store.go:3134 [n1,replicate,s1,r5/1:/{Table/14-Max}] streamed snapshot: kv pairs: 9, log entries: 1
I161120 08:13:38.771701 32086 storage/replica_raftstorage.go:612 [n3,s3,r5/?:{-}] applying preemptive snapshot at index 11 (id=d1bd14bf, encoded size=508, 1 rocksdb batches, 1 log entries)
I161120 08:13:38.772618 32086 storage/replica_raftstorage.go:620 [n3,s3,r5/?:/{Table/14-Max}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I161120 08:13:38.776161 31273 storage/replica_command.go:3261 [n1,replicate,s1,r5/1:/{Table/14-Max}] change replicas: read existing descriptor range_id:5 start_key:"\226" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161120 08:13:38.789084 32123 storage/raft_transport.go:437 [n3] raft transport stream to node 1 established
I161120 08:13:38.805272 32101 gossip/client.go:125 [n?] started gossip client to 127.0.0.1:44544
I161120 08:13:38.806132 32063 gossip/server.go:285 [n1] received gossip from unknown node
I161120 08:13:38.809358 31376 server/node.go:582 [n?] node connected via gossip and verified as part of cluster "0f6d8142-08da-4080-94d5-dc69eb94ac0d"
I161120 08:13:38.810882 32095 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I161120 08:13:38.811088 32095 storage/stores.go:312 [n?] wrote 2 node addresses to persistent storage
I161120 08:13:38.811318 32095 storage/stores.go:312 [n?] wrote 3 node addresses to persistent storage
I161120 08:13:38.811513 32095 storage/stores.go:312 [n?] wrote 4 node addresses to persistent storage
I161120 08:13:38.823165 31376 kv/dist_sender.go:331 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I161120 08:13:38.837129 31376 server/node.go:310 [n?] new node allocated ID 5
I161120 08:13:38.837263 31376 base/node_id.go:62 [n5] NodeID set to 5
I161120 08:13:38.837446 31376 gossip/gossip.go:286 [n5] NodeDescriptor set to node_id:5 address:<network_field:"tcp" address_field:"127.0.0.1:45740" > attrs:<> locality:<>
I161120 08:13:38.838189 31376 server/node.go:367 [n5] node=5: started with [[]=] engine(s) and attributes []
I161120 08:13:38.838859 31376 server/server.go:630 [n5] starting https server at 127.0.0.1:49295
I161120 08:13:38.838918 31376 server/server.go:631 [n5] starting grpc/postgres server at 127.0.0.1:45740
I161120 08:13:38.838962 31376 server/server.go:632 [n5] advertising CockroachDB node at 127.0.0.1:45740
I161120 08:13:38.846046 32063 gossip/server.go:263 [n1] refusing gossip from node 5 (max 3 conns); forwarding to 4 ({tcp 127.0.0.1:52957})
I161120 08:13:38.847196 32077 storage/stores.go:312 [n1] wrote 4 node addresses to persistent storage
I161120 08:13:38.849763 32078 storage/stores.go:312 [n3] wrote 4 node addresses to persistent storage
I161120 08:13:38.851007 32079 storage/stores.go:312 [n4] wrote 4 node addresses to persistent storage
I161120 08:13:38.851897 32080 storage/stores.go:312 [n2] wrote 4 node addresses to persistent storage
I161120 08:13:38.852567 32101 gossip/client.go:130 [n5] closing client to node 1 (127.0.0.1:44544): received forward from node 1 to 4 (127.0.0.1:52957)
I161120 08:13:38.860696 31273 storage/replica.go:2088 [n1,s1,r5/1:/{Table/14-Max}] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2}]
I161120 08:13:38.877875 31273 storage/replica_raftstorage.go:453 [n1,replicate,s1,r3/1:/Table/1{2-3}] generated snapshot 651978a5 at index 26
I161120 08:13:38.897967 31273 storage/store.go:3134 [n1,replicate,s1,r3/1:/Table/1{2-3}] streamed snapshot: kv pairs: 30, log entries: 16
I161120 08:13:38.903395 32227 storage/replica_raftstorage.go:612 [n2,s2,r3/?:{-}] applying preemptive snapshot at index 26 (id=651978a5, encoded size=25155, 1 rocksdb batches, 16 log entries)
I161120 08:13:38.905573 32227 storage/replica_raftstorage.go:620 [n2,s2,r3/?:/Table/1{2-3}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I161120 08:13:38.914575 31273 storage/replica_command.go:3261 [n1,replicate,s1,r3/1:/Table/1{2-3}] change replicas: read existing descriptor range_id:3 start_key:"\224" end_key:"\225" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I161120 08:13:38.915438 32081 gossip/client.go:125 [n5] started gossip client to 127.0.0.1:52957
I161120 08:13:38.932771 32163 server/node.go:543 [n5] bootstrapped store [n5,s5]
I161120 08:13:39.060081 32166 sql/event_log.go:95 [n5] Event: "node_join", target: 5, info: {Descriptor:{NodeID:5 Address:{NetworkField:tcp AddressField:127.0.0.1:45740} Attrs: Locality:} ClusterID:0f6d8142-08da-4080-94d5-dc69eb94ac0d StartedAt:1479629618837840008}
I161120 08:13:39.089104 31273 storage/replica.go:2088 [n1,s1,r3/1:/Table/1{2-3}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I161120 08:13:39.093714 31273 storage/replica_raftstorage.go:453 [n1,replicate,s1,r1/1:/{Min-Table/11}] generated snapshot b5110cc4 at index 317
I161120 08:13:39.174574 31273 storage/store.go:3134 [n1,replicate,s1,r1/1:/{Min-Table/11}] streamed snapshot: kv pairs: 1174, log entries: 57
I161120 08:13:39.176566 32403 storage/replica_raftstorage.go:612 [n5,s5,r1/?:{-}] applying preemptive snapshot at index 317 (id=b5110cc4, encoded size=392400, 1 rocksdb batches, 57 log entries)
I161120 08:13:39.184604 32403 storage/replica_raftstorage.go:620 [n5,s5,r1/?:/{Min-Table/11}] applied preemptive snapshot in 8ms [clear=0ms batch=1ms entries=4ms commit=2ms]
I161120 08:13:39.188724 31273 storage/replica_command.go:3261 [n1,replicate,s1,r1/1:/{Min-Table/11}] change replicas: read existing descriptor range_id:1 start_key:"" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I161120 08:13:39.227682 31273 storage/replica.go:2088 [n1,s1,r1/1:/{Min-Table/11}] proposing ADD_REPLICA {NodeID:5 StoreID:5 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:5 StoreID:5 ReplicaID:3}]
I161120 08:13:39.239372 31273 storage/queue.go:638 [n1,replicate] purgatory is now empty
I161120 08:13:39.240893 31459 storage/replica_raftstorage.go:453 [n1,replicate,s1,r3/1:/Table/1{2-3}] generated snapshot c7a14eb7 at index 31
I161120 08:13:39.244688 31459 storage/store.go:3134 [n1,replicate,s1,r3/1:/Table/1{2-3}] streamed snapshot: kv pairs: 36, log entries: 21
I161120 08:13:39.244837 32422 storage/raft_transport.go:437 [n5] raft transport stream to node 1 established
I161120 08:13:39.259285 32471 storage/replica_raftstorage.go:612 [n5,s5,r3/?:{-}] applying preemptive snapshot at index 31 (id=c7a14eb7, encoded size=31936, 1 rocksdb batches, 21 log entries)
I161120 08:13:39.262410 32471 storage/replica_raftstorage.go:620 [n5,s5,r3/?:/Table/1{2-3}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=1ms]
I161120 08:13:39.269600 31459 storage/replica_command.go:3261 [n1,replicate,s1,r3/1:/Table/1{2-3}] change replicas: read existing descriptor range_id:3 start_key:"\224" end_key:"\225" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I161120 08:13:39.328676 31459 storage/replica.go:2088 [n1,s1,r3/1:/Table/1{2-3}] proposing ADD_REPLICA {NodeID:5 StoreID:5 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:5 StoreID:5 ReplicaID:3}]
I161120 08:13:39.338385 31459 storage/replica_raftstorage.go:453 [n1,replicate,s1,r5/1:/{Table/14-Max}] generated snapshot 6aa7ad77 at index 15
I161120 08:13:39.429312 31459 storage/store.go:3134 [n1,replicate,s1,r5/1:/{Table/14-Max}] streamed snapshot: kv pairs: 10, log entries: 5
I161120 08:13:39.433108 32485 storage/replica_raftstorage.go:612 [n4,s4,r5/?:{-}] applying preemptive snapshot at index 15 (id=6aa7ad77, encoded size=3866, 1 rocksdb batches, 5 log entries)
I161120 08:13:39.445296 32485 storage/replica_raftstorage.go:620 [n4,s4,r5/?:/{Table/14-Max}] applied preemptive snapshot in 12ms [clear=9ms batch=0ms entries=1ms commit=1ms]
I161120 08:13:39.455091 31459 storage/replica_command.go:3261 [n1,replicate,s1,r5/1:/{Table/14-Max}] change replicas: read existing descriptor range_id:5 start_key:"\226" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:3 store_id:3 replica_id:2 > next_replica_id:3
I161120 08:13:39.523503 31459 storage/replica.go:2088 [n1,s1,r5/1:/{Table/14-Max}] proposing ADD_REPLICA {NodeID:4 StoreID:4 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3}]
I161120 08:13:39.539522 31459 storage/replica_raftstorage.go:453 [n1,replicate,s1,r4/1:/Table/1{3-4}] generated snapshot 501947cf at index 58
I161120 08:13:39.551726 31459 storage/store.go:3134 [n1,replicate,s1,r4/1:/Table/1{3-4}] streamed snapshot: kv pairs: 75, log entries: 48
I161120 08:13:39.552419 32492 storage/replica_raftstorage.go:612 [n5,s5,r4/?:{-}] applying preemptive snapshot at index 58 (id=501947cf, encoded size=72491, 1 rocksdb batches, 48 log entries)
I161120 08:13:39.556526 32492 storage/replica_raftstorage.go:620 [n5,s5,r4/?:/Table/1{3-4}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=1ms]
I161120 08:13:39.566406 32508 storage/raft_transport.go:437 [n4] raft transport stream to node 1 established
I161120 08:13:39.570123 31459 storage/replica_command.go:3261 [n1,replicate,s1,r4/1:/Table/1{3-4}] change replicas: read existing descriptor range_id:4 start_key:"\225" end_key:"\226" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:3 store_id:3 replica_id:2 > next_replica_id:3
I161120 08:13:39.612433 31459 storage/replica.go:2088 [n1,s1,r4/1:/Table/1{3-4}] proposing ADD_REPLICA {NodeID:5 StoreID:5 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:5 StoreID:5 ReplicaID:3}]
I161120 08:13:39.620148 31459 storage/replica_raftstorage.go:453 [n1,replicate,s1,r2/1:/Table/1{1-2}] generated snapshot 815bbed5 at index 26
I161120 08:13:39.638851 31459 storage/store.go:3134 [n1,replicate,s1,r2/1:/Table/1{1-2}] streamed snapshot: kv pairs: 11, log entries: 16
I161120 08:13:39.642296 32525 storage/replica_raftstorage.go:612 [n3,s3,r2/?:{-}] applying preemptive snapshot at index 26 (id=815bbed5, encoded size=19546, 1 rocksdb batches, 16 log entries)
I161120 08:13:39.660520 32525 storage/replica_raftstorage.go:620 [n3,s3,r2/?:/Table/1{1-2}] applied preemptive snapshot in 11ms [clear=0ms batch=0ms entries=9ms commit=0ms]
I161120 08:13:39.667369 31459 storage/replica_command.go:3261 [n1,replicate,s1,r2/1:/Table/1{1-2}] change replicas: read existing descriptor range_id:2 start_key:"\223" end_key:"\224" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I161120 08:13:39.727478 31459 storage/replica.go:2088 [n1,s1,r2/1:/Table/1{1-2}] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I161120 08:13:39.969944 31376 storage/replica_command.go:2361 [n1,s1,r5/1:/{Table/14-Max}] initiating a split of this range at key /Table/50 [r6]
I161120 08:13:40.132072 31376 storage/replica_command.go:2361 [n1,s1,r6/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r7]
I161120 08:13:40.347627 31376 storage/replica_command.go:2361 [n1,s1,r7/1:/{Table/51-Max}] initiating a split of this range at key /Table/52 [r8]
I161120 08:13:40.545639 31376 storage/replica_command.go:2361 [n1,s1,r8/1:/{Table/52-Max}] initiating a split of this range at key /Table/53 [r9]
I161120 08:13:40.626451 31429 storage/replica_proposal.go:398 [n1,s1,r3/1:/Table/1{2-3}] range [n1,s1,r3/1:/Table/1{2-3}]: transferring raft leadership to replica ID 3
I161120 08:13:40.628650 32215 storage/replica_proposal.go:349 [n5,s5,r3/3:/Table/1{2-3}] new range lease replica {5 5 3} 2016-11-20 08:13:40.622942855 +0000 UTC 9.25s following replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 411008h13m46.39704762s [physicalTime=2016-11-20 08:13:40.6285109 +0000 UTC]
I161120 08:13:40.637042 32857 storage/raft_transport.go:437 [n5] raft transport stream to node 2 established
I161120 08:13:40.638855 32751 storage/raft_transport.go:437 [n2] raft transport stream to node 5 established
I161120 08:13:40.748180 31459 storage/replica_raftstorage.go:453 [n1,replicate,s1,r8/1:/Table/5{2-3}] generated snapshot 7afcca74 at index 15
I161120 08:13:40.751483 31459 storage/store.go:3134 [n1,replicate,s1,r8/1:/Table/5{2-3}] streamed snapshot: kv pairs: 11, log entries: 5
I161120 08:13:40.753320 32865 storage/replica_raftstorage.go:612 [n2,s2,r8/?:{-}] applying preemptive snapshot at index 15 (id=7afcca74, encoded size=6056, 1 rocksdb batches, 5 log entries)
I161120 08:13:40.755007 32865 storage/replica_raftstorage.go:620 [n2,s2,r8/?:/Table/5{2-3}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I161120 08:13:40.772344 31376 storage/replica_command.go:2361 [n1,s1,r9/1:/{Table/53-Max}] initiating a split of this range at key /Table/54 [r10]
I161120 08:13:40.772493 31459 storage/replica_command.go:3261 [n1,replicate,s1,r8/1:/Table/5{2-3}] change replicas: read existing descriptor range_id:8 start_key:"\274" end_key:"\275" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:3 store_id:3 replica_id:2 > replicas:<node_id:4 store_id:4 replica_id:3 > next_replica_id:4
I161120 08:13:40.823092 31459 storage/replica.go:2088 [n1,s1,r8/1:/Table/5{2-3}] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:4}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3} {NodeID:2 StoreID:2 ReplicaID:4}]
I161120 08:13:40.852050 31432 storage/replica_proposal.go:398 [n1,s1,r8/1:/Table/5{2-3}] range [n1,s1,r8/1:/Table/5{2-3}]: transferring raft leadership to replica ID 4
I161120 08:13:40.870260 31459 storage/replica_raftstorage.go:453 [n1,replicate,s1,r9/1:/{Table/53-Max}] generated snapshot 89dbce25 at index 13
I161120 08:13:40.872760 31459 storage/store.go:3134 [n1,replicate,s1,r9/1:/{Table/53-Max}] streamed snapshot: kv pairs: 12, log entries: 3
I161120 08:13:40.877451 33027 storage/replica_raftstorage.go:612 [n5,s5,r9/?:{-}] applying preemptive snapshot at index 13 (id=89dbce25, encoded size=3080, 1 rocksdb batches, 3 log entries)
I161120 08:13:40.891009 33027 storage/replica_raftstorage.go:620 [n5,s5,r9/?:/{Table/53-Max}] applied preemptive snapshot in 13ms [clear=0ms batch=0ms entries=12ms commit=0ms]
I161120 08:13:40.892108 31609 storage/replica_proposal.go:349 [n2,s2,r8/4:/Table/5{2-3}] new range lease replica {2 2 4} 2016-11-20 08:13:40.834742641 +0000 UTC 9.25s following replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 411008h13m46.39704762s [physicalTime=2016-11-20 08:13:40.891942848 +0000 UTC]
I161120 08:13:40.966914 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 4]
I161120 08:13:40.967143 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 4]
I161120 08:13:40.967272 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 4]
I161120 08:13:40.967410 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 4]
I161120 08:13:40.967541 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 4]
I161120 08:13:40.967775 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 4]
I161120 08:13:40.967907 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 4]
I161120 08:13:40.968025 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 4]
I161120 08:13:40.968139 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 4]
I161120 08:13:40.968267 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 4]
I161120 08:13:40.968381 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 4]
I161120 08:13:40.968600 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 4]
I161120 08:13:40.970107 31459 storage/replica_command.go:3261 [n1,replicate,s1,r9/1:/Table/5{3-4}] change replicas: read existing descriptor range_id:9 start_key:"\275" end_key:"\276" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:3 store_id:3 replica_id:2 > replicas:<node_id:4 store_id:4 replica_id:3 > next_replica_id:4
I161120 08:13:40.970802 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 4]
I161120 08:13:40.971449 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 4]
E161120 08:13:40.973563 31459 storage/queue.go:575 [n1,replicate,s1,r9/1:/Table/5{3-4}] change replicas of range 9 failed: unexpected value: raw_bytes:"'\301\203\026\003\010\t\022\001\275\032\001\276\"\006\010\001\020\001\030\001\"\006\010\003\020\003\030\002\"\006\010\004\020\004\030\003(\004" timestamp:<wall_time:1479629620772583016 logical:0 >
I161120 08:13:40.974600 31459 storage/replica_raftstorage.go:453 [n1,replicate,s1,r5/1:/Table/{14-50}] generated snapshot bb211cd9 at index 22
I161120 08:13:40.974985 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 4]
I161120 08:13:40.978105 31459 storage/store.go:3134 [n1,replicate,s1,r5/1:/Table/{14-50}] streamed snapshot: kv pairs: 12, log entries: 12
I161120 08:13:40.978343 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 4]
I161120 08:13:40.978681 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 4]
I161120 08:13:40.978990 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 5]
I161120 08:13:40.979382 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 5]
I161120 08:13:40.979483 32991 storage/replica_raftstorage.go:612 [n5,s5,r5/?:{-}] applying preemptive snapshot at index 22 (id=bb211cd9, encoded size=11331, 1 rocksdb batches, 12 log entries)
I161120 08:13:40.979883 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 5]
I161120 08:13:40.980649 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 5]
I161120 08:13:40.981002 32991 storage/replica_raftstorage.go:620 [n5,s5,r5/?:/Table/{14-50}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I161120 08:13:40.981917 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 5]
I161120 08:13:40.983362 31459 storage/replica_command.go:3261 [n1,replicate,s1,r5/1:/Table/{14-50}] change replicas: read existing descriptor range_id:5 start_key:"\226" end_key:"\272" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:3 store_id:3 replica_id:2 > replicas:<node_id:4 store_id:4 replica_id:3 > next_replica_id:4
I161120 08:13:40.984427 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 5]
I161120 08:13:40.988977 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 5]
I161120 08:13:40.997711 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 5]
I161120 08:13:41.016233 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 5]
I161120 08:13:41.016946 33017 storage/raft_transport.go:437 [n2] raft transport stream to node 3 established
I161120 08:13:41.025226 33018 storage/raft_transport.go:437 [n2] raft transport stream to node 4 established
I161120 08:13:41.027096 31459 storage/replica.go:2088 [n1,s1,r5/1:/Table/{14-50}] proposing ADD_REPLICA {NodeID:5 StoreID:5 ReplicaID:4}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3} {NodeID:5 StoreID:5 ReplicaID:4}]
I161120 08:13:41.029279 33132 storage/raft_transport.go:437 [n4] raft transport stream to node 2 established
I161120 08:13:41.031443 33054 storage/raft_transport.go:437 [n3] raft transport stream to node 2 established
I161120 08:13:41.048248 31459 storage/replica_command.go:3261 [n1,replicate,s1,r5/1:/Table/{14-50}] change replicas: read existing descriptor range_id:5 start_key:"\226" end_key:"\272" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:3 store_id:3 replica_id:2 > replicas:<node_id:4 store_id:4 replica_id:3 > replicas:<node_id:5 store_id:5 replica_id:4 > next_replica_id:5
I161120 08:13:41.053770 31376 storage/replicate_queue_test.go:101 not balanced: [10 4 8 6 5]
I161120 08:13:41.104053 31459 storage/replica.go:2088 [n1,s1,r5/1:/Table/{14-50}] proposing REMOVE_REPLICA {NodeID:3 StoreID:3 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:5 StoreID:5 ReplicaID:4} {NodeID:4 StoreID:4 ReplicaID:3}]
I161120 08:13:41.111098 31459 storage/replica_raftstorage.go:453 [n1,replicate,s1,r9/1:/Table/5{3-4}] generated snapshot 41d422ce at index 16
I161120 08:13:41.114545 31459 storage/store.go:3134 [n1,replicate,s1,r9/1:/Table/5{3-4}] streamed snapshot: kv pairs: 10, log entries: 6
I161120 08:13:41.117490 33072 storage/replica_raftstorage.go:612 [n2,s2,r9/?:{-}] applying preemptive snapshot at index 16 (id=41d422ce, encoded size=6229, 1 rocksdb batches, 6 log entries)
I161120 08:13:41.118929 33072 storage/replica_raftstorage.go:620 [n2,s2,r9/?:/Table/5{3-4}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I161120 08:13:41.121195 31376 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
W161120 08:13:41.123738 32422 storage/raft_transport.go:443 [n5] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
I161120 08:13:41.124291 33155 util/stop/stopper.go:468 quiescing; tasks left:
2 server/node.go:830
1 storage/queue.go:477
1 storage/intent_resolver.go:383
W161120 08:13:41.124343 33132 storage/raft_transport.go:443 [n4] raft transport stream to node 2 failed: EOF
W161120 08:13:41.124970 31668 storage/raft_transport.go:443 [n1] raft transport stream to node 2 failed: EOF
W161120 08:13:41.125168 32751 storage/raft_transport.go:443 [n2] raft transport stream to node 5 failed: rpc error: code = 13 desc = transport is closing
W161120 08:13:41.125910 31673 storage/raft_transport.go:443 [n2] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
W161120 08:13:41.126382 33017 storage/raft_transport.go:443 [n2] raft transport stream to node 3 failed: rpc error: code = 13 desc = transport is closing
W161120 08:13:41.126594 33018 storage/raft_transport.go:443 [n2] raft transport stream to node 4 failed: rpc error: code = 13 desc = transport is closing
W161120 08:13:41.128276 32225 storage/raft_transport.go:443 [n1] raft transport stream to node 5 failed: rpc error: code = 13 desc = transport is closing
W161120 08:13:41.128621 33054 storage/raft_transport.go:443 [n3] raft transport stream to node 2 failed: rpc error: code = 13 desc = transport is closing
W161120 08:13:41.129201 32123 storage/raft_transport.go:443 [n3] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
W161120 08:13:41.129524 33110 storage/intent_resolver.go:380 could not GC completed transaction anchored at /Local/Range/"\x96"/RangeDescriptor: node unavailable; try another peer
W161120 08:13:41.130670 32857 storage/raft_transport.go:443 [n5] raft transport stream to node 2 failed: rpc error: code = 13 desc = transport is closing
W161120 08:13:41.131992 32508 storage/raft_transport.go:443 [n4] raft transport stream to node 1 failed: EOF
W161120 08:13:41.132361 32135 storage/raft_transport.go:443 [n1] raft transport stream to node 3 failed: rpc error: code = 13 desc = transport is closing
I161120 08:13:41.132713 33155 util/stop/stopper.go:468 quiescing; tasks left:
1 storage/queue.go:477
1 server/node.go:830
W161120 08:13:41.134563 32414 storage/raft_transport.go:443 [n1] raft transport stream to node 4 failed: rpc error: code = 13 desc = transport is closing
I161120 08:13:41.135183 31459 storage/replica_command.go:3261 [n1,replicate,s1,r9/1:/Table/5{3-4}] change replicas: read existing descriptor range_id:9 start_key:"\275" end_key:"\276" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:3 store_id:3 replica_id:2 > replicas:<node_id:4 store_id:4 replica_id:3 > next_replica_id:4
E161120 08:13:41.136628 31459 storage/queue.go:575 [n1,replicate,s1,r9/1:/Table/5{3-4}] change replicas of range 9 failed: node unavailable; try another peer
I161120 08:13:41.136762 33155 util/stop/stopper.go:468 quiescing; tasks left:
1 storage/queue.go:477
I161120 08:13:41.137242 31376 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161120 08:13:41.138274 31423 kv/transport_race.go:71 transport race promotion: ran 42 iterations on up to 858 requests
W161120 08:13:41.144951 31916 gossip/infostore.go:303 [n4] node unavailable; try another peer
I161120 08:13:41.145332 31376 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161120 08:13:41.155221 31376 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161120 08:13:41.158120 31376 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161120 08:13:41.161818 31376 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
<autogenerated>:10: Leaked goroutine: goroutine 29506 [chan receive, 1 minutes]:
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).beginCmds.func1(0xc42039c660, 0xc42161dc00, 0xc421176328, 0xc421176330)
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica.go:1370 +0x94
created by github.com/cockroachdb/cockroach/pkg/storage.(*Replica).beginCmds
/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica.go:1376 +0x14c4
```
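The failure itself is the leaked goroutine reported after shutdown: a goroutine parked in a channel receive, spawned by `(*Replica).beginCmds` (replica.go:1370). Below is a minimal sketch of how that kind of leak can arise when a waiter blocks on prerequisite-command channels that are never closed during quiescence. All names here are illustrative assumptions, not CockroachDB's actual types or code:

```go
package main

import (
	"fmt"
	"time"
)

// command models a queued replica command; done is closed when it finishes.
// Hypothetical type for illustration only.
type command struct {
	done chan struct{}
}

// beginCmds spawns a waiter that blocks until every overlapping
// predecessor has finished. If the server shuts down without closing
// those channels, this goroutine stays parked in "chan receive" --
// the same state the test's goroutine-leak checker reports above.
func beginCmds(prereqs []*command) <-chan struct{} {
	ready := make(chan struct{})
	go func() {
		for _, c := range prereqs {
			<-c.done // leaks here if c.done is never closed
		}
		close(ready)
	}()
	return ready
}

func main() {
	pending := &command{done: make(chan struct{})}
	ready := beginCmds([]*command{pending})

	select {
	case <-ready:
		fmt.Println("command can proceed")
	case <-time.After(100 * time.Millisecond):
		// Simulated shutdown path: pending.done was never closed,
		// so the waiter goroutine spawned above is now leaked.
		fmt.Println("shut down with waiter still blocked (leaked goroutine)")
	}
}
```

On this reading, a shutdown path that closes (or otherwise signals) every pending command's channel during quiescence would let such waiters exit cleanly.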
replicas read existing descriptor range id start key end key replicas next replica id kv dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping server node go new node allocated id base node id go nodeid set to gossip gossip go nodedescriptor set to node id address attrs locality server node go node started with engine s and attributes server server go starting https server at server server go starting grpc postgres server at server server go advertising cockroachdb node at storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage gossip gossip go initial resolvers gossip gossip go no incoming or outgoing connections storage replica command go change replicas read existing descriptor range id start key end key replicas next replica id server node go bootstrapped store sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality clusterid startedat server status runtime go could not parse build timestamp parsing time as cannot parse as storage engine rocksdb go opening in memory rocksdb instance server config go storage engine initialized server node go store not bootstrapped storage stores go read node addresses from persistent storage server node go connecting to gossip network to verify cluster id storage replica go proposing add replica nodeid storeid replicaid storage replica raftstorage go generated snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas read existing descriptor range id start key end key replicas next replica id storage raft transport go raft transport stream to node established gossip client go started gossip client to gossip server go received gossip from unknown node server node go node connected via gossip and verified as part of cluster storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage kv dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping server node go new node allocated id base node id go nodeid set to gossip gossip go nodedescriptor set to node id address attrs locality server node go node started with engine s and attributes server server go starting https server at server server go starting grpc postgres server at server server go advertising cockroachdb node at gossip server go refusing gossip from node max conns forwarding to tcp storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage gossip client go closing client to node received forward from node to storage replica go proposing add replica nodeid storeid replicaid storage replica raftstorage go generated snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage 
replica raftstorage go applied preemptive snapshot in storage replica command go change replicas read existing descriptor range id start key end key replicas next replica id gossip client go started gossip client to server node go bootstrapped store sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality clusterid startedat storage replica go proposing add replica nodeid storeid replicaid storage replica raftstorage go generated snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas read existing descriptor range id start key end key replicas replicas next replica id storage replica go proposing add replica nodeid storeid replicaid storage queue go purgatory is now empty storage replica raftstorage go generated snapshot at index storage store go streamed snapshot kv pairs log entries storage raft transport go raft transport stream to node established storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas read existing descriptor range id start key end key replicas replicas next replica id storage replica go proposing add replica nodeid storeid replicaid storage replica raftstorage go generated snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas read existing descriptor range id start key end key replicas replicas next replica id storage replica go proposing add replica nodeid storeid replicaid storage replica raftstorage go generated snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage raft transport go raft transport stream to node established storage replica command go change replicas read existing descriptor range id start key end key replicas replicas next replica id storage replica go proposing add replica nodeid storeid replicaid storage replica raftstorage go generated snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas read existing descriptor range id start key end key replicas replicas next replica id storage replica go proposing add replica nodeid storeid replicaid storage replica command go initiating a split of this range at key table storage replica command go initiating a split of this range at key table storage replica command go initiating a split of this range at key table storage replica command go initiating a split of this range at key table storage replica proposal go range transferring raft leadership to replica id storage replica proposal go new range lease replica utc following replica utc storage raft transport go raft transport stream to 
node established storage raft transport go raft transport stream to node established storage replica raftstorage go generated snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go initiating a split of this range at key table storage replica command go change replicas read existing descriptor range id start key end key replicas replicas replicas next replica id storage replica go proposing add replica nodeid storeid replicaid storage replica proposal go range transferring raft leadership to replica id storage replica raftstorage go generated snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica proposal go new range lease replica utc following replica utc storage replicate queue test go not balanced storage replicate queue test go not balanced storage replicate queue test go not balanced storage replicate queue test go not balanced storage replicate queue test go not balanced storage replicate queue test go not balanced storage replicate queue test go not balanced storage replicate queue test go not balanced storage replicate queue test go not balanced storage replicate queue test go not balanced storage replicate queue test go not balanced storage replicate queue test go not balanced storage replica command go change replicas read existing descriptor range id start key end key replicas replicas replicas next replica id storage replicate queue test go not balanced storage replicate queue test go not balanced storage queue go change replicas of range failed unexpected value raw bytes t timestamp storage replica raftstorage go generated snapshot at index storage replicate queue test go not balanced storage store go streamed snapshot kv pairs log entries storage replicate queue test go not balanced storage replicate queue test go not balanced storage replicate queue test go not balanced storage replicate queue test go not balanced storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replicate queue test go not balanced storage replicate queue test go not balanced storage replica raftstorage go applied preemptive snapshot in storage replicate queue test go not balanced storage replica command go change replicas read existing descriptor range id start key end key replicas replicas replicas next replica id storage replicate queue test go not balanced storage replicate queue test go not balanced storage replicate queue test go not balanced storage replicate queue test go not balanced storage raft transport go raft transport stream to node established storage raft transport go raft transport stream to node established storage replica go proposing add replica nodeid storeid replicaid storage raft transport go raft transport stream to node established storage raft transport go raft transport stream to node established storage replica command go change replicas read existing descriptor range id start key end key replicas replicas replicas replicas next replica id storage replicate queue test go not balanced storage replica go proposing remove replica nodeid storeid replicaid storage replica raftstorage go 
generated snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in util stop stopper go stop has been called stopping or quiescing all running tasks storage raft transport go raft transport stream to node failed rpc error code desc transport is closing util stop stopper go quiescing tasks left server node go storage queue go storage intent resolver go storage raft transport go raft transport stream to node failed eof storage raft transport go raft transport stream to node failed eof storage raft transport go raft transport stream to node failed rpc error code desc transport is closing storage raft transport go raft transport stream to node failed rpc error code desc transport is closing storage raft transport go raft transport stream to node failed rpc error code desc transport is closing storage raft transport go raft transport stream to node failed rpc error code desc transport is closing storage raft transport go raft transport stream to node failed rpc error code desc transport is closing storage raft transport go raft transport stream to node failed rpc error code desc transport is closing storage raft transport go raft transport stream to node failed rpc error code desc transport is closing storage intent resolver go could not gc completed transaction anchored at local range rangedescriptor node unavailable try another peer storage raft transport go raft transport stream to node failed rpc error code desc transport is closing storage raft transport go raft transport stream to node failed eof storage raft transport go raft transport stream to node failed rpc error code desc transport is closing util stop stopper go quiescing tasks left storage queue go server node go storage raft transport go raft transport stream to node failed rpc error code desc transport is closing storage replica command go change replicas read existing descriptor range id start key end key replicas replicas replicas next replica id storage queue go change replicas of range failed node unavailable try another peer util stop stopper go quiescing tasks left storage queue go util stop stopper go stop has been called stopping or quiescing all running tasks kv transport race go transport race promotion ran iterations on up to requests gossip infostore go node unavailable try another peer util stop stopper go stop has been called stopping or quiescing all running tasks util stop stopper go stop has been called stopping or quiescing all running tasks util stop stopper go stop has been called stopping or quiescing all running tasks util stop stopper go stop has been called stopping or quiescing all running tasks leaked goroutine goroutine github com cockroachdb cockroach pkg storage replica begincmds go src github com cockroachdb cockroach pkg storage replica go created by github com cockroachdb cockroach pkg storage replica begincmds go src github com cockroachdb cockroach pkg storage replica go
| 1
|
3,057
| 5,221,986,670
|
IssuesEvent
|
2017-01-27 05:15:24
|
Microsoft/visualfsharp
|
https://api.github.com/repos/Microsoft/visualfsharp
|
opened
|
Breakpoint Resolution falls back to enclosing scope in nested expressions
|
Area-IDE Language Service Urgency-Soon
|
This is with RC3 + the VSIX installed.
1. Clone https://github.com/bryanedds/Nu
2. Build it
3. Open `AssetGraph.fs` and go to line 139 under the `buildAssets5` function.
4. Set a breakpoint there by clicking in the margin.
**Expected:** Breakpoint set at line 139, `let inputFileSubpath = asset.FilePath`.
**Actual:** Breakpoint is set at the top of the `for..do` loop.

Note that it also fails inside any block within the `match` expressions inside a lambda in `loadAssetsFromPackageDescriptor4`.
|
1.0
|
Breakpoint Resolution falls back to enclosing scope in nested expressions - This is with RC3 + the VSIX installed.
1. Clone https://github.com/bryanedds/Nu
2. Build it
3. Open `AssetGraph.fs` and go to line 139 under the `buildAssets5` function.
4. Set a breakpoint there by clicking in the margin.
**Expected:** Breakpoint set at line 139, `let inputFileSubpath = asset.FilePath`.
**Actual:** Breakpoint is set at the top of the `for..do` loop.

Note that it also fails inside any block within the `match` expressions inside a lambda in `loadAssetsFromPackageDescriptor4`.
|
non_test
|
breakpoint resolution falls back to enclosing scope in nested expressions this is with the vsix installed clone build it open assetgraph fs and go to line under the function set a breakpoint there by clicking in the margin expected breakpoint set at line let inputfilesubpath asset filepath actual breakpoint is set at the top of the for do loop note that it also fails inside any block inside the match expressions inside of a lambda in
| 0
|
261,400
| 22,743,612,781
|
IssuesEvent
|
2022-07-07 07:09:20
|
WoWManiaUK/Redemption
|
https://api.github.com/repos/WoWManiaUK/Redemption
|
closed
|
[Core] charge ability issue
|
Fixed on PTR - Tester Confirmed
|
**Links:**
https://www.wow-mania.com/armory/?spell=100
**What is Happening:**
Warrior's charge is not available at all under water (from Azeroth to Northrend)
**What Should happen:**
Charge should be possible.
** I found this mentioned several years ago and am surprised to know it is still bugged.
|
1.0
|
[Core] charge ability issue - **Links:**
https://www.wow-mania.com/armory/?spell=100
**What is Happening:**
Warrior's charge is not available at all under water (from Azeroth to Northrend)
**What Should happen:**
Charge should be possible.
** I found this mentioned several years ago and am surprised to know it is still bugged.
|
test
|
charge ability issue links what is happening warrior s charge not available at all under water from azeroth to northrend what should happen charge should be possible i have found this mentioned several years ago and surprised to know it still is bugged
| 1
|
34,914
| 9,498,066,937
|
IssuesEvent
|
2019-04-24 00:05:25
|
scikit-learn/scikit-learn
|
https://api.github.com/repos/scikit-learn/scikit-learn
|
closed
|
Inspection of some cython produced functions causes segfault
|
Build / CI
|
#### Description
Trying to access `__kwdefaults__` on some Cython produced functions, either by `getattr` or `hasattr`, segfaults the Python process. This occurs at least on macOS when scikit-learn is installed from PyPI using `pip install scikit-learn`. (I'm reporting a similar issue on `pandas`, where I can install via `pip install pandas --no-binary :all:` and no longer get the segfault, but I'm having trouble compiling it on my Mac.) There may be other functions which do this, but this is the first we encounter.
This impacts our Python language server as it uses the `inspect` library to examine libraries without Python source. The code sample below is a minimal repro, but in reality it's being called by `inspect.getfullargspec()` (which eventually does [this](https://github.com/python/cpython/blob/3.7/Lib/inspect.py#L1837)). When it segfaults, our process crashes (and on some OSs like macOS produces a visible popup as the OS is tracking these sorts of crashes). See: Microsoft/python-language-server#740
cython/cython#1470 looks to be related, and would be fixed in Cython 0.29.6, so maybe a version bump is all that would be required. (I'm currently working on building it locally to see if it goes away, which does work for `pandas`, as previously mentioned.)
#### Steps/Code to Reproduce
```python
from sklearn.cluster._k_means_elkan import k_means_elkan
getattr(k_means_elkan, "__kwdefaults__", None)
```
Run with `-X faulthandler` to get more info.
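Because the crash happens inside the `getattr` call itself, a `try/except` in the same process cannot catch it. One defensive pattern (shown here only as a minimal sketch, not as what scikit-learn or the language server actually does) is to probe risky attributes in a throwaway child process and treat an abnormal exit as "not safely readable":
```python
# Minimal sketch (not the language server's real logic): probe a possibly
# segfaulting attribute in a child process so the parent survives a crash
# in the C extension. Module/object/attribute names come from the repro above.
import subprocess
import sys

def attr_readable(module: str, obj: str, attr: str) -> bool:
    probe = f"import {module} as m; getattr(getattr(m, '{obj}'), '{attr}', None)"
    # A segfault in the child shows up as a negative return code on Unix
    # (killed by SIGSEGV); any non-zero code means the probe is unsafe.
    result = subprocess.run([sys.executable, "-c", probe])
    return result.returncode == 0

print(attr_readable("sklearn.cluster._k_means_elkan", "k_means_elkan", "__kwdefaults__"))
```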
#### Expected Results
Anything, just no crash.
#### Actual Results
Python segfaults at `getattr`. Here's what macOS's crash reporter says:
<details>
Process: Python [80492]
Path: /usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/Resources/Python.app/Contents/MacOS/Python
Identifier: Python
Version: 3.7.3 (3.7.3)
Code Type: X86-64 (Native)
Parent Process: zsh [72384]
Responsible: Python [80492]
User ID: 501
Date/Time: 2019-04-17 16:16:08.689 -0700
OS Version: Mac OS X 10.14.4 (18E226)
Report Version: 12
Bridge OS Version: 3.0 (14Y674)
Anonymous UUID: 5A957B3E-4E8F-3DE2-C606-5B11FE48E6DD
Sleep/Wake UUID: 02FDA72B-8D53-471B-80AE-6514E0B386FB
Time Awake Since Boot: 23000 seconds
Time Since Wake: 2100 seconds
System Integrity Protection: disabled
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000008
Exception Note: EXC_CORPSE_NOTIFY
Termination Signal: Segmentation fault: 11
Termination Reason: Namespace SIGNAL, Code 0xb
Terminating Process: exc handler [80492]
VM Regions Near 0x8:
-->
__TEXT 000000010e316000-000000010e318000 [ 8K] r-x/rwx SM=COW /usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/Resources/Python.app/Contents/MacOS/Python
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 _k_means_elkan.cpython-37m-darwin.so 0x00000001260712bf __pyx_pf_7sklearn_7cluster_14_k_means_elkan_12__defaults__ + 47
1 sparsefuncs_fast.cpython-37m-darwin.so 0x0000000125693410 __Pyx_CyFunction_get_kwdefaults + 48
2 org.python.python 0x000000010e33b587 getset_get + 58
3 org.python.python 0x000000010e362c2f _PyObject_GenericGetAttrWithDict + 180
4 org.python.python 0x000000010e362b20 _PyObject_LookupAttr + 166
5 org.python.python 0x000000010e3bfd66 builtin_getattr + 141
6 org.python.python 0x000000010e3366f2 _PyMethodDef_RawFastCallKeywords + 495
7 org.python.python 0x000000010e335c8e _PyCFunction_FastCallKeywords + 44
8 org.python.python 0x000000010e3cadb2 call_function + 636
9 org.python.python 0x000000010e3c3c35 _PyEval_EvalFrameDefault + 6594
10 org.python.python 0x000000010e3cb6d3 _PyEval_EvalCodeWithName + 1867
11 org.python.python 0x000000010e3c21d0 PyEval_EvalCode + 51
12 org.python.python 0x000000010e3f079b run_mod + 54
13 org.python.python 0x000000010e3ef7c5 PyRun_FileExFlags + 163
14 org.python.python 0x000000010e3eee6b PyRun_SimpleFileExFlags + 263
15 org.python.python 0x000000010e4079b0 pymain_main + 5367
16 org.python.python 0x000000010e408088 _Py_UnixMain + 56
17 libdyld.dylib 0x00007fff5dead3d5 start + 1
Thread 1:
0 libsystem_kernel.dylib 0x00007fff5dfe586a __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x00007fff5e09e56e _pthread_cond_wait + 722
2 libopenblasp-r0.3.5.dev.dylib 0x000000011a621a3b blas_thread_server + 619
3 libsystem_pthread.dylib 0x00007fff5e09b2eb _pthread_body + 126
4 libsystem_pthread.dylib 0x00007fff5e09e249 _pthread_start + 66
5 libsystem_pthread.dylib 0x00007fff5e09a40d thread_start + 13
Thread 2:
0 libsystem_kernel.dylib 0x00007fff5dfe586a __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x00007fff5e09e56e _pthread_cond_wait + 722
2 libopenblasp-r0.3.5.dev.dylib 0x000000011a621a3b blas_thread_server + 619
3 libsystem_pthread.dylib 0x00007fff5e09b2eb _pthread_body + 126
4 libsystem_pthread.dylib 0x00007fff5e09e249 _pthread_start + 66
5 libsystem_pthread.dylib 0x00007fff5e09a40d thread_start + 13
Thread 3:
0 libsystem_kernel.dylib 0x00007fff5dfe586a __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x00007fff5e09e56e _pthread_cond_wait + 722
2 libopenblasp-r0.3.5.dev.dylib 0x000000011a621a3b blas_thread_server + 619
3 libsystem_pthread.dylib 0x00007fff5e09b2eb _pthread_body + 126
4 libsystem_pthread.dylib 0x00007fff5e09e249 _pthread_start + 66
5 libsystem_pthread.dylib 0x00007fff5e09a40d thread_start + 13
Thread 0 crashed with X86 Thread State (64-bit):
rax: 0x0000000000000000 rbx: 0x0000000125f54118 rcx: 0x000000012569cda0 rdx: 0x00007ffee18e8a18
rdi: 0x0000000125f54118 rsi: 0x0000000000000000 rbp: 0x00007ffee18e89e0 rsp: 0x00007ffee18e89b0
r8: 0x50e9a9e0fed7166b r9: 0x00007ffee18e8a90 r10: 0x00007f9c0d2f6e68 r11: 0x00007ffee18e8ae8
r12: 0x0000000125f54118 r13: 0x000000012569ce18 r14: 0x0000000000000000 r15: 0x0000000125f54118
rip: 0x00000001260712bf rfl: 0x0000000000010246 cr2: 0x0000000000000008
Logical CPU: 2
Error Code: 0x00000004
Trap Number: 14
</details>
#### Versions
```
System:
python: 3.7.3 (default, Mar 27 2019, 09:23:15) [Clang 10.0.1 (clang-1001.0.46.3)]
executable: /Users/jake/skseg2/venv/bin/python
machine: Darwin-18.5.0-x86_64-i386-64bit
BLAS:
macros: NO_ATLAS_INFO=3, HAVE_CBLAS=None
lib_dirs:
cblas_libs: cblas
Python deps:
pip: 19.0.3
setuptools: 40.8.0
sklearn: 0.20.3
numpy: 1.16.2
scipy: 1.2.1
Cython: None
pandas: 0.24.2
```
<!-- Thanks for contributing! -->
|
1.0
|
Inspection of some cython produced functions causes segfault - #### Description
Trying to access `__kwdefaults__` on some Cython produced functions, either by `getattr` or `hasattr`, segfaults the Python process. This occurs at least on macOS when scikit-learn is installed from PyPI using `pip install scikit-learn`. (I'm reporting a similar issue on `pandas`, where I can install via `pip install pandas --no-binary :all:` and no longer get the segfault, but I'm having trouble compiling it on my Mac.) There may be other functions which do this, but this is the first we encounter.
This impacts our Python language server as it uses the `inspect` library to examine libraries without Python source. The code sample below is a minimal repro, but in reality it's being called by `inspect.getfullargspec()` (which eventually does [this](https://github.com/python/cpython/blob/3.7/Lib/inspect.py#L1837)). When it segfaults, our process crashes (and on some OSs like macOS produces a visible popup as the OS is tracking these sorts of crashes). See: Microsoft/python-language-server#740
cython/cython#1470 looks to be related, and would be fixed in Cython 0.29.6, so maybe a version bump is all that would be required. (I'm currently working on building it locally to see if it goes away, which does work for `pandas`, as previously mentioned.)
#### Steps/Code to Reproduce
```python
from sklearn.cluster._k_means_elkan import k_means_elkan
getattr(k_means_elkan, "__kwdefaults__", None)
```
Run with `-X faulthandler` to get more info.
#### Expected Results
Anything, just no crash.
#### Actual Results
Python segfaults at `getattr`. Here's what macOS's crash reporter says:
<details>
Process: Python [80492]
Path: /usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/Resources/Python.app/Contents/MacOS/Python
Identifier: Python
Version: 3.7.3 (3.7.3)
Code Type: X86-64 (Native)
Parent Process: zsh [72384]
Responsible: Python [80492]
User ID: 501
Date/Time: 2019-04-17 16:16:08.689 -0700
OS Version: Mac OS X 10.14.4 (18E226)
Report Version: 12
Bridge OS Version: 3.0 (14Y674)
Anonymous UUID: 5A957B3E-4E8F-3DE2-C606-5B11FE48E6DD
Sleep/Wake UUID: 02FDA72B-8D53-471B-80AE-6514E0B386FB
Time Awake Since Boot: 23000 seconds
Time Since Wake: 2100 seconds
System Integrity Protection: disabled
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000008
Exception Note: EXC_CORPSE_NOTIFY
Termination Signal: Segmentation fault: 11
Termination Reason: Namespace SIGNAL, Code 0xb
Terminating Process: exc handler [80492]
VM Regions Near 0x8:
-->
__TEXT 000000010e316000-000000010e318000 [ 8K] r-x/rwx SM=COW /usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/Resources/Python.app/Contents/MacOS/Python
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 _k_means_elkan.cpython-37m-darwin.so 0x00000001260712bf __pyx_pf_7sklearn_7cluster_14_k_means_elkan_12__defaults__ + 47
1 sparsefuncs_fast.cpython-37m-darwin.so 0x0000000125693410 __Pyx_CyFunction_get_kwdefaults + 48
2 org.python.python 0x000000010e33b587 getset_get + 58
3 org.python.python 0x000000010e362c2f _PyObject_GenericGetAttrWithDict + 180
4 org.python.python 0x000000010e362b20 _PyObject_LookupAttr + 166
5 org.python.python 0x000000010e3bfd66 builtin_getattr + 141
6 org.python.python 0x000000010e3366f2 _PyMethodDef_RawFastCallKeywords + 495
7 org.python.python 0x000000010e335c8e _PyCFunction_FastCallKeywords + 44
8 org.python.python 0x000000010e3cadb2 call_function + 636
9 org.python.python 0x000000010e3c3c35 _PyEval_EvalFrameDefault + 6594
10 org.python.python 0x000000010e3cb6d3 _PyEval_EvalCodeWithName + 1867
11 org.python.python 0x000000010e3c21d0 PyEval_EvalCode + 51
12 org.python.python 0x000000010e3f079b run_mod + 54
13 org.python.python 0x000000010e3ef7c5 PyRun_FileExFlags + 163
14 org.python.python 0x000000010e3eee6b PyRun_SimpleFileExFlags + 263
15 org.python.python 0x000000010e4079b0 pymain_main + 5367
16 org.python.python 0x000000010e408088 _Py_UnixMain + 56
17 libdyld.dylib 0x00007fff5dead3d5 start + 1
Thread 1:
0 libsystem_kernel.dylib 0x00007fff5dfe586a __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x00007fff5e09e56e _pthread_cond_wait + 722
2 libopenblasp-r0.3.5.dev.dylib 0x000000011a621a3b blas_thread_server + 619
3 libsystem_pthread.dylib 0x00007fff5e09b2eb _pthread_body + 126
4 libsystem_pthread.dylib 0x00007fff5e09e249 _pthread_start + 66
5 libsystem_pthread.dylib 0x00007fff5e09a40d thread_start + 13
Thread 2:
0 libsystem_kernel.dylib 0x00007fff5dfe586a __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x00007fff5e09e56e _pthread_cond_wait + 722
2 libopenblasp-r0.3.5.dev.dylib 0x000000011a621a3b blas_thread_server + 619
3 libsystem_pthread.dylib 0x00007fff5e09b2eb _pthread_body + 126
4 libsystem_pthread.dylib 0x00007fff5e09e249 _pthread_start + 66
5 libsystem_pthread.dylib 0x00007fff5e09a40d thread_start + 13
Thread 3:
0 libsystem_kernel.dylib 0x00007fff5dfe586a __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x00007fff5e09e56e _pthread_cond_wait + 722
2 libopenblasp-r0.3.5.dev.dylib 0x000000011a621a3b blas_thread_server + 619
3 libsystem_pthread.dylib 0x00007fff5e09b2eb _pthread_body + 126
4 libsystem_pthread.dylib 0x00007fff5e09e249 _pthread_start + 66
5 libsystem_pthread.dylib 0x00007fff5e09a40d thread_start + 13
Thread 0 crashed with X86 Thread State (64-bit):
rax: 0x0000000000000000 rbx: 0x0000000125f54118 rcx: 0x000000012569cda0 rdx: 0x00007ffee18e8a18
rdi: 0x0000000125f54118 rsi: 0x0000000000000000 rbp: 0x00007ffee18e89e0 rsp: 0x00007ffee18e89b0
r8: 0x50e9a9e0fed7166b r9: 0x00007ffee18e8a90 r10: 0x00007f9c0d2f6e68 r11: 0x00007ffee18e8ae8
r12: 0x0000000125f54118 r13: 0x000000012569ce18 r14: 0x0000000000000000 r15: 0x0000000125f54118
rip: 0x00000001260712bf rfl: 0x0000000000010246 cr2: 0x0000000000000008
Logical CPU: 2
Error Code: 0x00000004
Trap Number: 14
</details>
#### Versions
```
System:
python: 3.7.3 (default, Mar 27 2019, 09:23:15) [Clang 10.0.1 (clang-1001.0.46.3)]
executable: /Users/jake/skseg2/venv/bin/python
machine: Darwin-18.5.0-x86_64-i386-64bit
BLAS:
macros: NO_ATLAS_INFO=3, HAVE_CBLAS=None
lib_dirs:
cblas_libs: cblas
Python deps:
pip: 19.0.3
setuptools: 40.8.0
sklearn: 0.20.3
numpy: 1.16.2
scipy: 1.2.1
Cython: None
pandas: 0.24.2
```
<!-- Thanks for contributing! -->
|
non_test
|
inspection of some cython produced functions causes segfault description trying to access kwdefaults on some cython produced functions either by getattr or hasattr segfaults the python process this occurs at least on macos when scikit learn is installed from pypi using pip install scikit learn i m reporting a similar issue on pandas where i can install via pip install pandas no binary all and no longer get the segfault but i m having trouble compiling it on my mac there may be other functions which do this but this is the first we encounter this impacts our python language server as it uses the inspect library to examine libraries without python source the code sample below is a minimal repro but in reality it s being called by inspect getfullargspec which eventually does when it segfaults our process crashes and on some oss like macos produces a visible popup as the os is tracking these sorts of crashes see microsoft python language server cython cython looks to be related and would be fixed in cython so maybe a version bump is all that would be required i m currently working on building it locally to see if it goes away which does work for pandas as previously mentioned steps code to reproduce python from slearn cluster k means elkan import k means elkan getattr k means elkan kwdefaults none run with x faulthandler to get more info expected results anything just no crash actual results python segfaults at getattr here s what macos s crash reporter says process python path usr local cellar python frameworks python framework versions resources python app contents macos python identifier python version code type native parent process zsh responsible python user id date time os version mac os x report version bridge os version anonymous uuid sleep wake uuid time awake since boot seconds time since wake seconds system integrity protection disabled crashed thread dispatch queue com apple main thread exception type exc bad access sigsegv exception codes kern invalid address at exception note exc corpse notify termination signal segmentation fault termination reason namespace signal code terminating process exc handler vm regions near text r x rwx sm cow usr local cellar python frameworks python framework versions resources python app contents macos python thread crashed dispatch queue com apple main thread k means elkan cpython darwin so pyx pf k means elkan defaults sparsefuncs fast cpython darwin so pyx cyfunction get kwdefaults org python python getset get org python python pyobject genericgetattrwithdict org python python pyobject lookupattr org python python builtin getattr org python python pymethoddef rawfastcallkeywords org python python pycfunction fastcallkeywords org python python call function org python python pyeval evalframedefault org python python pyeval evalcodewithname org python python pyeval evalcode org python python run mod org python python pyrun fileexflags org python python pyrun simplefileexflags org python python pymain main org python python py unixmain libdyld dylib start thread libsystem kernel dylib psynch cvwait libsystem pthread dylib pthread cond wait libopenblasp dev dylib blas thread server libsystem pthread dylib pthread body libsystem pthread dylib pthread start libsystem pthread dylib thread start thread libsystem kernel dylib psynch cvwait libsystem pthread dylib pthread cond wait libopenblasp dev dylib blas thread server libsystem pthread dylib pthread body libsystem pthread dylib pthread start libsystem pthread dylib thread start thread libsystem 
kernel dylib psynch cvwait libsystem pthread dylib pthread cond wait libopenblasp dev dylib blas thread server libsystem pthread dylib pthread body libsystem pthread dylib pthread start libsystem pthread dylib thread start thread crashed with thread state bit rax rbx rcx rdx rdi rsi rbp rsp rip rfl logical cpu error code trap number versions system python default mar executable users jake venv bin python machine darwin blas macros no atlas info have cblas none lib dirs cblas libs cblas python deps pip setuptools sklearn numpy scipy cython none pandas
| 0
|
63,952
| 8,703,151,478
|
IssuesEvent
|
2018-12-05 16:00:19
|
Echipa-dotNET-Blanao/Exams-Management-System
|
https://api.github.com/repos/Echipa-dotNET-Blanao/Exams-Management-System
|
reopened
|
Microservices
|
documentation
|
Research microservices and clarify which microservices our application will be using
|
1.0
|
Microservices - Research microservices and clarify which microservices our application will be using
|
non_test
|
microservices research on microservices and clarify the microservices our application will be using
| 0
|
803,727
| 29,187,184,981
|
IssuesEvent
|
2023-05-19 16:25:21
|
phetsims/ph-scale
|
https://api.github.com/repos/phetsims/ph-scale
|
closed
|
Use default layoutBounds
|
priority:5-deferred
|
Related to https://github.com/phetsims/joist/issues/542 ... This sim uses non-standard layoutBounds, because it was a port from Java.
In PHScaleConstants.ts:
```typescript
LAYOUT_BOUNDS: new Bounds2( 0, 0, 1100, 700 ),
```
Someday it should be converted to standard (default) layoutBounds, as specified in ScreenView.ts:
```typescript
const DEFAULT_LAYOUT_BOUNDS = new Bounds2( 0, 0, 1024, 618 );
```
|
1.0
|
Use default layoutBounds - Related to https://github.com/phetsims/joist/issues/542 ... This sim uses non-standard layoutBounds, because it was a port from Java.
In PHScaleConstants.ts:
```typescript
LAYOUT_BOUNDS: new Bounds2( 0, 0, 1100, 700 ),
```
Someday it should be converted to standard (default) layoutBounds, as specified in ScreenView.ts:
```typescript
const DEFAULT_LAYOUT_BOUNDS = new Bounds2( 0, 0, 1024, 618 );
```
|
non_test
|
use default layoutbounds related to this sim uses non standard layoutbounds because it was a port from java in phscaleconstants ts typescript layout bounds new someday it should be converted to standard default layoutbounds as specified in screenview ts typescript const default layout bounds new
| 0
|
124,546
| 17,772,650,851
|
IssuesEvent
|
2021-08-30 15:17:15
|
kapseliboi/CruiseMonkey
|
https://api.github.com/repos/kapseliboi/CruiseMonkey
|
opened
|
CVE-2021-21366 (Medium) detected in xmldom-0.1.27.tgz
|
security vulnerability
|
## CVE-2021-21366 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmldom-0.1.27.tgz</b></p></summary>
<p>A W3C Standard XML DOM(Level2 CORE) implementation and parser(DOMParser/XMLSerializer).</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmldom/-/xmldom-0.1.27.tgz">https://registry.npmjs.org/xmldom/-/xmldom-0.1.27.tgz</a></p>
<p>Path to dependency file: CruiseMonkey/package.json</p>
<p>Path to vulnerable library: CruiseMonkey/node_modules/xmldom/package.json</p>
<p>
Dependency Hierarchy:
- twitarr-0.1.2-beta.1.tgz (Root Library)
- :x: **xmldom-0.1.27.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/CruiseMonkey/commit/7d538265e0b4230eb796e57b9721a28adf5e14c0">7d538265e0b4230eb796e57b9721a28adf5e14c0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
xmldom is a pure JavaScript W3C standard-based (XML DOM Level 2 Core) DOMParser and XMLSerializer module. xmldom versions 0.4.0 and older do not correctly preserve system identifiers, FPIs or namespaces when repeatedly parsing and serializing maliciously crafted documents. This may lead to unexpected syntactic changes during XML processing in some downstream applications. This is fixed in version 0.5.0. As a workaround downstream applications can validate the input and reject the maliciously crafted documents.
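The workaround described above (validate input before accepting it) amounts to checking that one parse/serialize round trip reaches a fixed point. The sketch below illustrates only that idea, using Python's stdlib DOM rather than the affected JavaScript library:
```python
# Sketch of the "validate before accepting" workaround idea: reject a
# document if one parse/serialize round trip does not reach a fixed point.
# Python's stdlib DOM is used purely for illustration; xmldom itself is JS.
from xml.dom.minidom import parseString

def round_trip_stable(xml_text: str) -> bool:
    once = parseString(xml_text).toxml()
    twice = parseString(once).toxml()
    return once == twice

doc = "<root><child attr='1'>ok</child></root>"
print(round_trip_stable(doc))  # True for a well-behaved document
```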
<p>Publish Date: 2021-03-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21366>CVE-2021-21366</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/xmldom/xmldom/security/advisories/GHSA-h6q6-9hqw-rwfv">https://github.com/xmldom/xmldom/security/advisories/GHSA-h6q6-9hqw-rwfv</a></p>
<p>Release Date: 2021-03-12</p>
<p>Fix Resolution: 0.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-21366 (Medium) detected in xmldom-0.1.27.tgz - ## CVE-2021-21366 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmldom-0.1.27.tgz</b></p></summary>
<p>A W3C Standard XML DOM(Level2 CORE) implementation and parser(DOMParser/XMLSerializer).</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmldom/-/xmldom-0.1.27.tgz">https://registry.npmjs.org/xmldom/-/xmldom-0.1.27.tgz</a></p>
<p>Path to dependency file: CruiseMonkey/package.json</p>
<p>Path to vulnerable library: CruiseMonkey/node_modules/xmldom/package.json</p>
<p>
Dependency Hierarchy:
- twitarr-0.1.2-beta.1.tgz (Root Library)
- :x: **xmldom-0.1.27.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/CruiseMonkey/commit/7d538265e0b4230eb796e57b9721a28adf5e14c0">7d538265e0b4230eb796e57b9721a28adf5e14c0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
xmldom is a pure JavaScript W3C standard-based (XML DOM Level 2 Core) DOMParser and XMLSerializer module. xmldom versions 0.4.0 and older do not correctly preserve system identifiers, FPIs or namespaces when repeatedly parsing and serializing maliciously crafted documents. This may lead to unexpected syntactic changes during XML processing in some downstream applications. This is fixed in version 0.5.0. As a workaround downstream applications can validate the input and reject the maliciously crafted documents.
<p>Publish Date: 2021-03-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21366>CVE-2021-21366</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/xmldom/xmldom/security/advisories/GHSA-h6q6-9hqw-rwfv">https://github.com/xmldom/xmldom/security/advisories/GHSA-h6q6-9hqw-rwfv</a></p>
<p>Release Date: 2021-03-12</p>
<p>Fix Resolution: 0.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in xmldom tgz cve medium severity vulnerability vulnerable library xmldom tgz a standard xml dom core implementation and parser domparser xmlserializer library home page a href path to dependency file cruisemonkey package json path to vulnerable library cruisemonkey node modules xmldom package json dependency hierarchy twitarr beta tgz root library x xmldom tgz vulnerable library found in head commit a href found in base branch master vulnerability details xmldom is a pure javascript standard based xml dom level core domparser and xmlserializer module xmldom versions and older do not correctly preserve system identifiers fpis or namespaces when repeatedly parsing and serializing maliciously crafted documents this may lead to unexpected syntactic changes during xml processing in some downstream applications this is fixed in version as a workaround downstream applications can validate the input and reject the maliciously crafted documents publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
126,975
| 10,441,754,819
|
IssuesEvent
|
2019-09-18 11:35:24
|
BEXIS2/Core
|
https://api.github.com/repos/BEXIS2/Core
|
closed
|
Refactor Api Data Get filtering and projection
|
Priority: High Status: Testing Required Type: Bug
|
Filtering and projection are not working well in the API functions.
|
1.0
|
Refactor Api Data Get filtering and projection - Filtering and projection are not working well in the API functions.
|
test
|
refactor api data get filtering and projection filtering and projection are not working well in the api functions
| 1
|
17,231
| 5,355,461,075
|
IssuesEvent
|
2017-02-20 13:04:10
|
xpmethod/middlemarch-critical-histories
|
https://api.github.com/repos/xpmethod/middlemarch-critical-histories
|
opened
|
Decide on best visualization for specialist/non-specialist study
|
non-code
|
E.g. a stacked bar chart or the current positive/negative chart, once we have the final data.
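If the stacked bar chart option is chosen, a minimal matplotlib sketch could look like the following; the period bins and counts below are invented placeholders, not the study's final data:
```python
# Sketch of the stacked-bar option with placeholder data; the categories
# and counts are invented for illustration and are not the study's results.
import matplotlib.pyplot as plt

periods = ["1870s", "1900s", "1950s", "2000s"]   # hypothetical bins
specialist = [3, 8, 15, 22]                      # hypothetical counts
non_specialist = [10, 12, 9, 5]

fig, ax = plt.subplots()
ax.bar(periods, specialist, label="specialist")
ax.bar(periods, non_specialist, bottom=specialist, label="non-specialist")
ax.set_ylabel("number of critical responses")
ax.legend()
plt.show()
```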
|
1.0
|
Decide on best visualization for specialist/non-specialist study - E.g. a stacked bar chart or the current positive/negative chart, once we have the final data.
|
non_test
|
decide on best visualization for specialist non specialist study e g stacked bar chart or current positive negative chart once we have the final data
| 0
|
213,475
| 7,254,129,000
|
IssuesEvent
|
2018-02-16 09:43:47
|
geosolutions-it/clevmetro-nfd
|
https://api.github.com/repos/geosolutions-it/clevmetro-nfd
|
opened
|
Decoupling form items positions mapping from model fields
|
Priority High backend
|
Form item positioning within the frontend panel tabs must be flexible.
Decoupling from the model field structure must be implemented.
|
1.0
|
Decoupling form items positions mapping from model fields - Form items positioning into the frontend panel tabs must be flexible.
Decoupling from model fields structure must be implemented
|
non_test
|
decoupling form items positions mapping from model fields form items positioning into the frontend panel tabs must be flexible decoupling from model fields structure must be implemented
| 0
|
281,857
| 24,426,271,652
|
IssuesEvent
|
2022-10-06 03:09:20
|
HughCraig/GHAP
|
https://api.github.com/repos/HughCraig/GHAP
|
closed
|
Multilayer View Maps not working
|
bug Testing
|
I go to Browse Layers, then to Multilayers, and select a Multilayer, which is displayed; but when I go to View Maps and click on any of the options (3D Viewer etc.), I get only a blank white screen.
|
1.0
|
Multilayer View Maps not working - I go to Browse Layers and then to Multilayers and select a Multilayer and that is displayed, but when I go to View Maps and click on any of the options (3D Viewer etc) I get only a blank white screen.
|
test
|
multilayer view maps not working i go to browse layers and then to multilayers and select a multilayer and that is displayed but when i go to view maps and click on any of the options viewer etc i get only a blank white screen
| 1
|
719,508
| 24,762,433,715
|
IssuesEvent
|
2022-10-22 04:27:03
|
Baystation12/Baystation12
|
https://api.github.com/repos/Baystation12/Baystation12
|
closed
|
[small] modifier in papercode currently makes written text illegible
|
Priority: Low Could Reproduce
|
### Description of issue
As expected with the [small] case type in paperwork usage, the tag should make your text more akin to fine print. It does do this much, at least. The [small] text also appears normally in console applications like records and nanoword. *Printed* sheets in particular have a curious little bug, explained below.
### Difference between expected and actual behaviour
The issue sadly is that it is now so small that it is impossible to read what's written without copy and pasting it someplace else. **The issue only appears within printed papers**. Records and things in nanoword work as expected.
An image of this issue, for your amusement: [screenshots not preserved]
That tiny, tiny, tiny text is what is printed.
Same issue for writing on the paper.
### Steps to reproduce
1. Use [small] type modifier for any document on nanoword. e.g. [small]Hello, my name is Purple.[/small]. Alternatively, write directly on a paper and skip to step 3.
2. Print the document that looked fine on your console.
3. Look at the 4 pixels on your screen in shock.
### Specific information for locating
I'm 99% sure it's related to #32679. Some change in that PR caused this issue.
### Client version, server revision, & game ID
Client Version: 514
Server Revision: [3eeddad8f4d7fd6ae1134d77c3f364ad074f0999](https://bay.ss13.me/github/commit/3eeddad8f4d7fd6ae1134d77c3f364ad074f0999) - dev - 2022-10-19
Game ID: cku-a12c
Current map: SEV Torch
### Issue bingo
- [X] Issue could be reproduced at least once
- [X] Issue could be reproduced by different players
- [X] Issue could be reproduced in multiple rounds
- [X] Issue happened in a recent (less than 7 days ago) round
- [X] [Couldn't find an existing issue about this](https://github.com/Baystation12/Baystation12/issues)
|
1.0
|
[small] modifier in papercode currently makes written text illegible - ### Description of issue
As expected with the [small] case type in paperwork usage, the tag should make your text more akin to fine print. It does do this much, at least. The [small] text also appears normally in console applications like records and nanoword. *Printed* sheets in particular have a curious little bug, explained below.
### Difference between expected and actual behaviour
The issue sadly is that it is now so small that it is impossible to read what's written without copy and pasting it someplace else. **The issue only appears within printed papers**. Records and things in nanoword work as expected.
An image of this issue, for your amusement: [screenshots not preserved]
That tiny, tiny, tiny text is what is printed.
Same issue for writing on the paper.
### Steps to reproduce
1. Use [small] type modifier for any document on nanoword. e.g. [small]Hello, my name is Purple.[/small]. Alternatively, write directly on a paper and skip to step 3.
2. Print the document that looked fine on your console.
3. Look at the 4 pixels on your screen in shock.
### Specific information for locating
I'm 99% sure it's related to #32679. Some change in that PR caused this issue.
### Client version, server revision, & game ID
Client Version: 514
Server Revision: [3eeddad8f4d7fd6ae1134d77c3f364ad074f0999](https://bay.ss13.me/github/commit/3eeddad8f4d7fd6ae1134d77c3f364ad074f0999) - dev - 2022-10-19
Game ID: cku-a12c
Current map: SEV Torch
### Issue bingo
- [X] Issue could be reproduced at least once
- [X] Issue could be reproduced by different players
- [X] Issue could be reproduced in multiple rounds
- [X] Issue happened in a recent (less than 7 days ago) round
- [X] [Couldn't find an existing issue about this](https://github.com/Baystation12/Baystation12/issues)
|
non_test
|
modifier in papercode currently makes written text illegible description of issue as expected with case type in paperwork usage the tag should make your text more akin to fine print it does do this much the text also appears normally in console applications like records and nanoword printed sheets in particular are having a curious little bug explained below difference between expected and actual behaviour the issue sadly is that it is now so small that it is impossible to read what s written without copy and pasting it someplace else the issue only appears within printed papers records and things in nanoword work as expected an image of this issue for your amusement that tiny tiny tiny text is what is printed same issue for writing on the paper steps to reproduce use type modifier for any document on nanoword e g hello my name is purple alternatively write directly on a paper and skip to step print the document that looked fine on your console look at the pixels on your screen in shock specific information for locating i m sure it s related to some change in that pr caused this issue client version server revision game id client version server revision dev game id cku current map sev torch issue bingo issue could be reproduced at least once issue could be reproduced by different players issue could be reproduced in multiple rounds issue happened in a recent less than days ago round
| 0
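Editor's note: a purely hypothetical sketch of one mechanism that would fit the symptoms in the record above (legible in nanoword, microscopic once printed): if [small] maps to a *relative* font-size and the print path wraps the already-rendered markup in one more shrinking container, the scales multiply. None of the names below come from the Baystation12 codebase.
```python
# Hypothetical illustration only, not Baystation12 code: relative
# font-size scales nest multiplicatively (as in CSS percentages), so a
# second "small" wrapper applied at print time turns fine print into
# unreadable pixels.
def effective_size(base_px: float, scales: list[float]) -> float:
    """Resolve nested relative font-size scales the way CSS does."""
    size = base_px
    for scale in scales:
        size *= scale
    return size

print(effective_size(12, [0.5]))       # console render: 6px, still legible
print(effective_size(12, [0.5, 0.5]))  # printed copy adds a wrapper: 3px
```
If that guess is right, the regression from #32679 would amount to an extra shrinking pass on the print path; the report's console-vs-print split is at least consistent with that.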
|
98,728
| 8,685,002,955
|
IssuesEvent
|
2018-12-03 05:42:48
|
muqeed11/FXTesting
|
https://api.github.com/repos/muqeed11/FXTesting
|
closed
|
FX UAT TEST : ApiV1RunsIdTestSuiteSummarySearchGetQueryParamPagesizeSla
|
FX UAT TEST
|
Project : FX UAT TEST
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=M2Q5NDg1YmYtZDhhNS00Y2MyLThmNmEtYmI3NzQxYjE1MDJl; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Mon, 03 Dec 2018 05:36:29 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/runs/MiotjKDB/test-suite-summary/search?pageSize=1001
Request :
Response :
{
"timestamp" : "2018-12-03T05:36:29.781+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/runs/MiotjKDB/test-suite-summary/search"
}
Logs :
Assertion [@StatusCode == 200 AND @ResponseTime < 500] resolved-to [404 == 200 AND 539 < 500] result [Failed]
--- FX Bot ---
|
1.0
|
FX UAT TEST : ApiV1RunsIdTestSuiteSummarySearchGetQueryParamPagesizeSla - Project : FX UAT TEST
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=M2Q5NDg1YmYtZDhhNS00Y2MyLThmNmEtYmI3NzQxYjE1MDJl; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Mon, 03 Dec 2018 05:36:29 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/runs/MiotjKDB/test-suite-summary/search?pageSize=1001
Request :
Response :
{
"timestamp" : "2018-12-03T05:36:29.781+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/runs/MiotjKDB/test-suite-summary/search"
}
Logs :
Assertion [@StatusCode == 200 AND @ResponseTime < 500] resolved-to [404 == 200 AND 539 < 500] result [Failed]
--- FX Bot ---
|
test
|
fx uat test project fx uat test job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api runs miotjkdb test suite summary search logs assertion resolved to result fx bot
| 1
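Editor's note: the FX bot is not public, so the following is a minimal sketch of the assertion semantics implied by the log line in the record above (a status check AND a latency SLA, with observed values interpolated into the report); the function name, signature, and defaults are assumptions. Note also that the failing endpoint repeats its prefix (`/api/v1/api/v1/...`), which alone would explain the 404 if, plausibly, the client's base URL already ends in `/api/v1` and the route template adds it again.
```python
# Minimal sketch of the SLA assertion implied by the log; the bot's real
# implementation is not public, so names and defaults here are assumed.
def evaluate_sla(status_code: int, response_time_ms: int,
                 expected_status: int = 200, sla_ms: int = 500) -> str:
    ok = status_code == expected_status and response_time_ms < sla_ms
    resolved = f"[{status_code} == {expected_status} AND {response_time_ms} < {sla_ms}]"
    verdict = "Passed" if ok else "Failed"
    return f"Assertion resolved-to {resolved} result [{verdict}]"

print(evaluate_sla(404, 539))
# Assertion resolved-to [404 == 200 AND 539 < 500] result [Failed]
```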
|
77,131
| 7,566,761,772
|
IssuesEvent
|
2018-04-22 00:32:54
|
StackStorm/st2-packages
|
https://api.github.com/repos/StackStorm/st2-packages
|
closed
|
Smoke tests for mistral not running
|
bug tests
|
https://github.com/StackStorm/st2-packages/pull/527 fixed a postgres upstream issue, and sometime before that PR was merged and the previous packages were generated, one of mistral's upstream dependencies made a change that was causing mistral to not start. This was fixed in https://github.com/StackStorm/st2-packages/pull/528.
The question now is, why did the tests pass for #527? There are smoke tests that are executed as part of https://github.com/StackStorm/st2-packages/blob/master/rake/spec/default/60-st2_all-services-ok_spec.rb#L88-L93, but for some reason those weren't run, presumably due to a misconfiguration of `mistral_enabled`. Need to look into this.
|
1.0
|
Smoke tests for mistral not running - https://github.com/StackStorm/st2-packages/pull/527 fixed a postgres upstream issue, and sometime before that PR was merged and the previous packages were generated, one of mistral's upstream dependencies made a change that was causing mistral to not start. This was fixed in https://github.com/StackStorm/st2-packages/pull/528.
The question now is, why did the tests pass for #527? There are smoke tests that are executed as part of https://github.com/StackStorm/st2-packages/blob/master/rake/spec/default/60-st2_all-services-ok_spec.rb#L88-L93, but for some reason those weren't run, presumably due to a misconfiguration of `mistral_enabled`. Need to look into this.
|
test
|
smoke tests for mistral not running fixed a postgres upstream issue and sometime before that pr was merged and the previous packages were generated one of mistral s upstream dependencies made a change that was causing mistral to not start this was fixed in the question now is why did the tests pass for there are smoke tests that are executed as part of but for some reason those weren t run guessing due to a misconfiguration of mistral enabled need to look into this
| 1
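Editor's note: the suite in question is serverspec (Ruby); as a language-neutral analogy of the suspected failure mode, here is how a config-gated group of tests silently turns into no-ops in pytest when the gate flag is mis-set. The flag name mirrors the `mistral_enabled` setting mentioned above; everything else is invented.
```python
# Analogy only (the real suite is serverspec/Ruby): a gate flag that
# wrongly resolves to "disabled" makes every mistral smoke test skip,
# and the run still reports green.
import os

import pytest

MISTRAL_ENABLED = os.environ.get("MISTRAL_ENABLED", "false").lower() == "true"

@pytest.mark.skipif(not MISTRAL_ENABLED, reason="mistral disabled in config")
def test_mistral_api_responds():
    # Would probe the mistral service here; never runs if the flag is unset.
    assert True
```
Skips are reported as `s` rather than as failures, so a wrongly-defaulted flag passes CI unnoticed, which matches how #527 could go green while mistral was broken.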
|
149,572
| 11,906,014,358
|
IssuesEvent
|
2020-03-30 19:36:17
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
opened
|
Test: New Drag and Drop Feedback and View Relocation
|
layout testplan-item workbench-views
|
Refs: <!-- Refer to the issue that this test plan item is testing. -->
- [ ] anyOS
- [ ] anyOS
Complexity: 3
---
<!-- Please write your test here. -->
This milestone, the drag and drop feedback for views was changed to become more intuitive and support moving across panel, sidebar, and activity bar.
To Test:
1. Attempt to create various layouts by dragging and dropping them between the 3 parts.
Feel free to use the following as inspiration: [Achieve the following layout.pdf](https://github.com/microsoft/vscode/files/4405018/Achieve.the.following.layout.pdf)
**DO STEP 1 BEFORE CONTINUING TO PROVIDE UNBIASED FEEDBACK ON THE INTUITIVE NATURE**
2. Try dragging a view to make a new panel/activity bar icon by dragging a single view header to the panel/activity bar area (inserting it where you like).
3. Try dragging views into other containers.
4. Try dragging views into unopened containers by hovering the icon while dragging.
5. Try reordering views.
6. Try reordering composites.
7. Play around, use the reset view location command by right clicking the view header or reset all view locations from the command palette.
|
1.0
|
Test: New Drag and Drop Feedback and View Relocation - Refs: <!-- Refer to the issue that this test plan item is testing. -->
- [ ] anyOS
- [ ] anyOS
Complexity: 3
---
<!-- Please write your test here. -->
This milestone, the drag and drop feedback for views was changed to become more intuitive and support moving across panel, sidebar, and activity bar.
To Test:
1. Attempt to create various layouts by dragging and dropping them between the 3 parts.
Feel free to use the following as inspiration: [Achieve the following layout.pdf](https://github.com/microsoft/vscode/files/4405018/Achieve.the.following.layout.pdf)
**DO STEP 1 BEFORE CONTINUING TO PROVIDE UNBIASED FEEDBACK ON THE INTUITIVE NATURE**
2. Try dragging a view to make a new panel/activity bar icon by dragging a single view header to the panel/activity bar area (inserting it where you like).
3. Try dragging views into other containers.
4. Try dragging views into unopened containers by hovering the icon while dragging.
5. Try reordering views.
6. Try reordering composites.
7. Play around, use the reset view location command by right clicking the view header or reset all view locations from the command palette.
|
test
|
test new drag and drop feedback and view relocation refs anyos anyos complexity this milestone the drag and drop feedback for views was changed to become more intuitive and support moving across panel sidebar and activity bar to test attempt to create various layouts by dragging and dropping them between the parts feel free to use the following as inspiration do step before continuing to provide unbiased feedback on the intuitive nature try dragging a view to make a new panel activity bar icon by dragging a single view header to the panel activity bar area inserting it where you like try dragging views into other containers try dragging views into unopened containers by hovering the icon while dragging try reordering views try reordering composites play around use the reset view location command by right clicking the view header or reset all view locations from the command palette
| 1
|
22,464
| 2,649,114,047
|
IssuesEvent
|
2015-03-14 16:17:14
|
myrafproject/myrafproject
|
https://api.github.com/repos/myrafproject/myrafproject
|
closed
|
IRAF error in imalign()
|
bug imported Priority-Medium wontfix
|
_From [suvend...@gmail.com](https://code.google.com/u/118189557038039991612/) on November 04, 2014 11:27:58_
I have a problem running a script (it uses the imalign module) with PyRAF. The script was perfectly fine until yesterday, but suddenly it does not run at all.
I am using pyraf 2.1.5 in mac os x 10.9
My IRAF version is
NOAO/IRAF PC-IRAF Revision 2 .16 EXPORT Thu May 24 15:41:17 MST 2012
This is the EXPORT version of IRAF V2.16 supporting PC systems.
The imalign module is not running fully.
I got the following errors:
\# Trimming images: corrected section = [1:368,2:370]
Killing IRAF task `imcopy'
Traceback (most recent call last):
File "pytrial.py", line 46, in \<module>
imalign()
..............
..............
stsci.tools.irafglobals.IrafError: Error running IRAF task imcopy
IRAF task terminated abnormally
ERROR (1113, "FXF: must specify which FITS extension (alihd23r1p1.fits[1:368,2:370])")
I found from the IRAF FAQ that this is due to the updates in the IRAF packages, but I couldn't find a solution.
I have attached my small script. Thank you for any help.
**Attachment:** [pytrial.py](http://code.google.com/p/myrafproject/issues/detail?id=26)
_Original issue: http://code.google.com/p/myrafproject/issues/detail?id=26_
|
1.0
|
IRAF error in imalign() - _From [suvend...@gmail.com](https://code.google.com/u/118189557038039991612/) on November 04, 2014 11:27:58_
I have a problem running a script (it uses the imalign module) with PyRAF. The script was perfectly fine until yesterday, but suddenly it does not run at all.
I am using pyraf 2.1.5 in mac os x 10.9
My IRAF version is
NOAO/IRAF PC-IRAF Revision 2 .16 EXPORT Thu May 24 15:41:17 MST 2012
This is the EXPORT version of IRAF V2.16 supporting PC systems.
The imalign module is not running fully.
I got the following errors:
\# Trimming images: corrected section = [1:368,2:370]
Killing IRAF task `imcopy'
Traceback (most recent call last):
File "pytrial.py", line 46, in \<module>
imalign()
..............
..............
stsci.tools.irafglobals.IrafError: Error running IRAF task imcopy
IRAF task terminated abnormally
ERROR (1113, "FXF: must specify which FITS extension (alihd23r1p1.fits[1:368,2:370])")
I found from the IRAF FAQ that this is due to the updates in the IRAF packages, but I couldn't find a solution.
I have attached my small script. Thank you for any help.
**Attachment:** [pytrial.py](http://code.google.com/p/myrafproject/issues/detail?id=26)
_Original issue: http://code.google.com/p/myrafproject/issues/detail?id=26_
|
non_test
|
iraf error in imalign from on november i have a problem running a code in imalign module using pyraf this code was perfectly ok until yesterday but suddenly this code is not running at all i am using pyraf in mac os x my iraf version is noao iraf pc iraf revision export thu may mst this is the export version of iraf supporting pc systems the imalign module is not running fully i got the following errors trimming images corrected section killing iraf task imcopy traceback most recent call last file pytrial py line in imalign stsci tools irafglobals iraferror error running iraf task imcopy iraf task terminated abnormally error fxf must specify which fits extension fits i found from irap faq that this is due to the updates in irap packages but i couldn t get any solution i have attached my small script thank you for any help attachment original issue
| 0
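Editor's note: the `FXF: must specify which FITS extension` error means the bracketed section on `alihd23r1p1.fits[1:368,2:370]` is ambiguous to IRAF 2.16's multi-extension FITS kernel, and the usual in-IRAF fix is to name the extension explicitly, e.g. `alihd23r1p1.fits[0][1:368,2:370]`. As a hedged, standalone workaround for just the trim-and-copy step, here is an astropy sketch; the choice of extension 0 and the output filename are assumptions.
```python
# Hedged workaround sketch using astropy instead of IRAF's imcopy: copy a
# trimmed section while naming the extension explicitly. Extension 0 and
# the output name are assumptions; adjust to the data's actual HDU.
from astropy.io import fits

src, dst = "alihd23r1p1.fits", "alihd23r1p1_trim.fits"
with fits.open(src) as hdul:
    hdu = hdul[0]  # explicit extension, avoiding the ambiguous [..] syntax
    # IRAF sections are 1-based [columns, rows]: [1:368,2:370] means
    # columns 1..368 and rows 2..370, i.e. numpy slice [1:370, 0:368].
    data = hdu.data[1:370, 0:368]
    fits.PrimaryHDU(data=data, header=hdu.header).writeto(dst, overwrite=True)
```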
|
50,429
| 6,087,969,200
|
IssuesEvent
|
2017-06-18 17:30:31
|
etsy/phan
|
https://api.github.com/repos/etsy/phan
|
closed
|
Add unit tests for bug fix in PR #832
|
probably easy tests
|
Reproduce the original bug, and create a test case verifying it's fixed
|
1.0
|
Add unit tests for bug fix in PR #832 - Reproduce the original bug, and create a test case verifying it's fixed
|
test
|
add unit tests for bug fix in pr reproduce the original bug and create a test case verifying it s fixed
| 1
|
284,273
| 24,588,612,216
|
IssuesEvent
|
2022-10-13 22:30:24
|
E3SM-Project/zppy
|
https://api.github.com/repos/E3SM-Project/zppy
|
opened
|
Generalize testing directions
|
Nice to have Testing
|
Generalize testing directions. References to a specific user should be abstracted. For example, in https://github.com/E3SM-Project/zppy/blob/main/tests/integration/generated/directions_chrysalis.md, we have `rm -rf /lcrc/group/e3sm/public_html/diagnostic_output/ac.forsyth2/zppy_test_bundles_www/v2.LR.historical_0201` rather than `rm -rf /lcrc/group/e3sm/public_html/diagnostic_output/{username}/zppy_test_bundles_www/v2.LR.historical_0201`
|
1.0
|
Generalize testing directions - Generalize testing directions. References to a specific user should be abstracted. For example, in https://github.com/E3SM-Project/zppy/blob/main/tests/integration/generated/directions_chrysalis.md, we have `rm -rf /lcrc/group/e3sm/public_html/diagnostic_output/ac.forsyth2/zppy_test_bundles_www/v2.LR.historical_0201` rather than `rm -rf /lcrc/group/e3sm/public_html/diagnostic_output/{username}/zppy_test_bundles_www/v2.LR.historical_0201`
|
test
|
generalize testing directions generalize testing directions references to a specific user should be abstracted for example in we have rm rf lcrc group public html diagnostic output ac zppy test bundles www lr historical rather than rm rf lcrc group public html diagnostic output username zppy test bundles www lr historical
| 1
|
284,078
| 30,913,590,091
|
IssuesEvent
|
2023-08-05 02:19:52
|
maddyCode23/linux-4.1.15
|
https://api.github.com/repos/maddyCode23/linux-4.1.15
|
opened
|
CVE-2023-3776 (High) detected in linux-stable-rtv4.1.33
|
Mend: dependency security vulnerability
|
## CVE-2023-3776 - High Severity Vulnerability
**Vulnerable Library: linux-stable-rtv4.1.33**
Julia Cartwright's fork of linux-stable-rt.git
Library home page: https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git
Found in HEAD commit: [f1f3d2b150be669390b32dfea28e773471bdd6e7](https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7)
Found in base branch: master
**Vulnerable Source Files (2)**
- /net/sched/cls_fw.c
- /net/sched/cls_fw.c
**Vulnerability Details**
A use-after-free vulnerability in the Linux kernel's net/sched: cls_fw component can be exploited to achieve local privilege escalation.
If tcf_change_indev() fails, fw_set_parms() will immediately return an error after incrementing or decrementing the reference counter in tcf_bind_filter(). If an attacker can control the reference counter and set it to zero, they can cause the reference to be freed, leading to a use-after-free vulnerability.
We recommend upgrading past commit 0323bce598eea038714f941ce2b22541c46d488f.
Publish Date: 2023-07-21
URL: [CVE-2023-3776](https://www.mend.io/vulnerability-database/CVE-2023-3776)
**CVSS 3 Score Details (7.8)**
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: Low
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High
For more information on CVSS3 Scores, click [here](https://www.first.org/cvss/calculator/3.0).
**Suggested Fix**
Type: Upgrade version
Origin: https://www.linuxkernelcves.com/cves/CVE-2023-3776
Release Date: 2023-07-21
Fix Resolution: v5.4.251, v5.10.188, v5.15.121, v6.1.40, v6.4.5, v6.5-rc2
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
|
CVE-2023-3776 (High) detected in linux-stable-rtv4.1.33 - ## CVE-2023-3776 - High Severity Vulnerability
**Vulnerable Library: linux-stable-rtv4.1.33**
Julia Cartwright's fork of linux-stable-rt.git
Library home page: https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git
Found in HEAD commit: [f1f3d2b150be669390b32dfea28e773471bdd6e7](https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7)
Found in base branch: master
**Vulnerable Source Files (2)**
- /net/sched/cls_fw.c
- /net/sched/cls_fw.c
**Vulnerability Details**
A use-after-free vulnerability in the Linux kernel's net/sched: cls_fw component can be exploited to achieve local privilege escalation.
If tcf_change_indev() fails, fw_set_parms() will immediately return an error after incrementing or decrementing the reference counter in tcf_bind_filter(). If an attacker can control the reference counter and set it to zero, they can cause the reference to be freed, leading to a use-after-free vulnerability.
We recommend upgrading past commit 0323bce598eea038714f941ce2b22541c46d488f.
Publish Date: 2023-07-21
URL: [CVE-2023-3776](https://www.mend.io/vulnerability-database/CVE-2023-3776)
**CVSS 3 Score Details (7.8)**
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Local
  - Attack Complexity: Low
  - Privileges Required: Low
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: High
  - Integrity Impact: High
  - Availability Impact: High
For more information on CVSS3 Scores, click [here](https://www.first.org/cvss/calculator/3.0).
**Suggested Fix**
Type: Upgrade version
Origin: https://www.linuxkernelcves.com/cves/CVE-2023-3776
Release Date: 2023-07-21
Fix Resolution: v5.4.251, v5.10.188, v5.15.121, v6.1.40, v6.4.5, v6.5-rc2
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in linux stable cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files net sched cls fw c net sched cls fw c vulnerability details a use after free vulnerability in the linux kernel s net sched cls fw component can be exploited to achieve local privilege escalation if tcf change indev fails fw set parms will immediately return an error after incrementing or decrementing the reference counter in tcf bind filter if an attacker can control the reference counter and set it to zero they can cause the reference to be freed leading to a use after free vulnerability we recommend upgrading past commit publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
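Editor's note: the advisory's mechanism (a bind adjusts a reference count, then an error path returns without rolling it back) can be illustrated abstractly. The toy below is plain Python with invented names, not kernel code, and only mirrors the *shape* of the bug described above.
```python
# Abstract illustration, NOT kernel code: an error path that skips the
# refcount rollback lets the count reach zero (freeing the object) while
# the caller still holds, and may later use, a stale reference.
class RefCounted:
    def __init__(self) -> None:
        self.refs = 1
        self.freed = False

    def get(self) -> None:
        self.refs += 1

    def put(self) -> None:
        self.refs -= 1
        if self.refs == 0:
            self.freed = True  # stand-in for kfree()

def set_params(obj: RefCounted, indev_ok: bool) -> int:
    obj.put()          # bind step drops the old reference first
    if not indev_ok:
        return -1      # bug shape: early error return, no rollback
    obj.get()          # the rebinding that would restore the count
    return 0

target = RefCounted()
set_params(target, indev_ok=False)  # analogue of tcf_change_indev() failing
assert target.freed                 # object is gone, yet `target` is still
                                    # held; any later dereference is the UAF
```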
|
59,039
| 6,626,669,689
|
IssuesEvent
|
2017-09-22 20:35:51
|
realbigplugins/edd-slack
|
https://api.github.com/repos/realbigplugins/edd-slack
|
closed
|
Slash Commands with Team Invites could expose vulnerabilities
|
bug done needs testing
|
@brashrebel @BigActual
I had a thought a few minutes ago regarding Slash Commands + Team Invites and I wanted to make sure I it brought up. It turns out this is potentially a huge security issue.
Let's say you have the site configured so that Users can be invited to your Slack Team when they make a Purchase. They aren't added to any special Channels, so they're only added to `#general`. No big deal.
However, any User in Slack is given access to Slash Commands. This can expose sensitive information or otherwise leave the site vulnerable.
(This list includes unimplemented Slash Commands from #56)
* With `/edd version <version_number>` someone could intentionally roll back the site to use a version of EDD with a known exploit and then wreak havoc on the website.
* With `/edd info` they could learn about your environment and possibly detect other vulnerabilities to exploit.
* With `/edd sales` they could view your sales data which you may not want to be exposed to your customers.
* With `/edd customer` they could view the information of an arbitrary customer.
* With `/edd reset` they could nuke all your store data.
---
I've thought of a few ways to potentially mitigate this:
1. Only allow Slack Team Admins to run EDD Slack Slash Commands.
- I know that the Slack Username is passed along with the Slash Command Request. However, I'm not sure if their permission level is. If not, we'd need to obtain this too. Either via a locally stored Transient that holds an Array of Admins, or by running a secondary API Call to verify their Admin-status.
2. Only allow specific Team Members to run EDD Slack Slash Commands.
- An additional option could be added to the Settings Screen that populates with all the Users in a Slack Team. If no Users are selected, it could default to letting _all_ Slack Users run Slash Commands.
3. Disable _all_ Slash Commands if User Invites are enabled.
- This is fairly heavy-handed and I personally do not like this approach, but it would be effective. It could potentially be a toggle on the Settings Screen as well.
|
1.0
|
Slash Commands with Team Invites could expose vulnerabilities - @brashrebel @BigActual
I had a thought a few minutes ago regarding Slash Commands + Team Invites and I wanted to make sure it got brought up. It turns out this is potentially a huge security issue.
Let's say you have the site configured so that Users can be invited to your Slack Team when they make a Purchase. They aren't added to any special Channels, so they're only added to `#general`. No big deal.
However, any User in Slack is given access to Slash Commands. This can expose sensitive information or otherwise leave the site vulnerable.
(This list includes unimplemented Slash Commands from #56)
* With `/edd version <version_number>` someone could intentionally roll back the site to use a version of EDD with a known exploit and then wreak havoc on the website.
* With `/edd info` they could learn about your environment and possibly detect other vulnerabilities to exploit.
* With `/edd sales` they could view your sales data which you may not want to be exposed to your customers.
* With `/edd customer` they could view the information of an arbitrary customer.
* With `/edd reset` they could nuke all your store data.
---
I've thought of a few ways to potentially mitigate this:
1. Only allow Slack Team Admins to run EDD Slack Slash Commands.
- I know that the Slack Username is passed along with the Slash Command Request. However, I'm not sure if their permission level is. If not, we'd need to obtain this too. Either via a locally stored Transient that holds an Array of Admins, or by running a secondary API Call to verify their Admin-status.
2. Only allow specific Team Members to run EDD Slack Slash Commands.
- An additional option could be added to the Settings Screen that populates with all the Users in a Slack Team. If no Users are selected, it could default to letting _all_ Slack Users run Slash Commands.
3. Disable _all_ Slash Commands if User Invites are enabled.
- This is fairly heavy-handed and I personally do not like this approach, but it would be effective. It could potentially be a toggle on the Settings Screen as well.
|
test
|
slash commands with team invites could expose vulnerabilities brashrebel bigactual i had a thought a few minutes ago regarding slash commands team invites and i wanted to make sure i it brought up it turns out this is potentially a huge security issue let s say you have the site configured so that users can be invited to your slack team when they make a purchase they aren t added to any special channels so they re only added to general no big deal however any user in slack is given access to slash commands this can expose sensitive information or otherwise leave the site vulnerable this list includes unimplemented slash commands from with edd version someone could intentionally roll back the site to use a version of edd with a known exploit and then wreak havoc on the website with edd info they could learn about your environment and possibly detect other vulnerabilities to exploit with edd sales they could view your sales data which you may not want to be exposed to your customers with edd customer they could view the information of an arbitrary customer with edd reset they could nuke all your store data i ve thought of a few ways to potentially mitigate this only allow slack team admins to run edd slack slash commands i know that the slack username is passed along with the slash command request however i m not sure if their permission level is if not we d need to obtain this too either via a locally stored transient that holds an array of admins or by running a secondary api call to verify their admin status only allow specific team members to run edd slack slash commands an additional option could be added to the settings screen that populates with all the users in a slack team if no users are selected it could default to letting all slack users run slash commands disable all slash commands if user invites are enabled this is fairly heavy handed and i personally do not like this approach but it would be effective it could potentially be a toggle on the settings screen as well
| 1
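Editor's note: of the three mitigations, option 2 is the cheapest to sketch, because Slack's slash-command payload does include `user_id`/`user_name` (admin status, by contrast, requires a follow-up `users.info` call, which answers the open question under option 1). The plugin itself is WordPress/PHP; the sketch below is a language-neutral Python outline with hypothetical names.
```python
# Hedged sketch of mitigation 2 (hypothetical names; the real plugin is
# WordPress/PHP): dispatch a slash command only if the requesting Slack
# user is on the configured allowlist. An empty allowlist falls back to
# allowing everyone, matching the proposed default.
ALLOWED_USER_IDS: set[str] = {"U024BE7LH"}  # would come from the settings screen

def dispatch(command: str, text: str) -> str:
    return f"ran {command} {text}".strip()  # placeholder for the real command router

def handle_slash_command(payload: dict) -> str:
    user_id = payload.get("user_id", "")
    if ALLOWED_USER_IDS and user_id not in ALLOWED_USER_IDS:
        return "Sorry, you are not permitted to run EDD Slack commands."
    return dispatch(payload.get("command", ""), payload.get("text", ""))
```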