Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3 values | title stringlengths 1 1.02k | labels stringlengths 4 1.54k | body stringlengths 1 262k | index stringclasses 17 values | text_combine stringlengths 95 262k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
155,842 | 12,279,731,903 | IssuesEvent | 2020-05-08 12:49:42 | DiSSCo/ELViS | https://api.github.com/repos/DiSSCo/ELViS | closed | Change institute logo for Luomus | MVP ELViS - Hotfix 2 bug enhancement resolved to test | Instead of University of Helsinki logo, use Luomus logo for the Finnish Museum of Natural History Luomus institution page: https://elvis-accept.pictura-hosting.nl/institutions/grid.507626.0

| 1.0 | Change institute logo for Luomus - Instead of University of Helsinki logo, use Luomus logo for the Finnish Museum of Natural History Luomus institution page: https://elvis-accept.pictura-hosting.nl/institutions/grid.507626.0

| test | change institute logo for luomus instead of university of helsinki logo use luomus logo for the finnish museum of natural history luomus institution page | 1 |
110,546 | 4,428,593,592 | IssuesEvent | 2016-08-17 03:22:02 | empirical-org/Empirical-Core | https://api.github.com/repos/empirical-org/Empirical-Core | opened | Users cannot save new passwords | Priority: ★ | When a user edits a password in my account, it does not save. Test with username: Teacher, password: Demo. It'd be nice if there was a better interaction once it saves as well. For example, the activity planner text switches from "Save" to "Saved". | 1.0 | Users cannot save new passwords - When a user edits a password in my account, it does not save. Test with username: Teacher, password: Demo. It'd be nice if there was a better interaction once it saves as well. For example, the activity planner text switches from "Save" to "Saved". | non_test | users cannot save new passwords when a user edits a password in my account it does not save test with username teacher password demo it d be nice if there was a better interaction once it saves as well for example the activity planner text switches from save to saved | 0 |
219,982 | 7,348,714,979 | IssuesEvent | 2018-03-08 07:55:41 | pmem/issues | https://api.github.com/repos/pmem/issues | closed | Test: util_file_create/TEST0W: SETUP (all\pmem\nondebug) fails | Exposure: Low OS: Windows Priority: 4 low Type: Bug | Found on 0067d81c59f6fa7c7088aecfd630a7d95a444c3a
Output in attached file:
[log_file.log](https://github.com/pmem/issues/files/1770774/log_file.log)
| 1.0 | Test: util_file_create/TEST0W: SETUP (all\pmem\nondebug) fails - Found on 0067d81c59f6fa7c7088aecfd630a7d95a444c3a
Output in attached file:
[log_file.log](https://github.com/pmem/issues/files/1770774/log_file.log)
| non_test | test util file create setup all pmem nondebug fails found on output in attached file | 0 |
55,516 | 6,480,978,727 | IssuesEvent | 2017-08-18 14:37:19 | Transkribus/TWI-mc | https://api.github.com/repos/Transkribus/TWI-mc | closed | A bread crumb | enhancement ready to test | To improve communication of context within the collection structure and also navigation. | 1.0 | A bread crumb - To improve communication of context within the collection structure and also navigation. | test | a bread crumb to improve communication of context within the collection structure and also navigation | 1 |
113,286 | 9,635,016,155 | IssuesEvent | 2019-05-15 23:11:01 | MichaIng/DietPi | https://api.github.com/repos/MichaIng/DietPi | closed | APT | Error while reinstalling SABnzbd pre-reqs | Bug :beetle: Solution available :clinking_glasses: Testing/testers required :arrow_down_small: | #### Details:
- Date | Tue 14 May 14:16:29 AEST 2019
- Bug report | N/A
- DietPi version | v6.23.3 (MichaIng/master)
- Img creator | DietPi Core Team
- Pre-image | Meveric
- SBC device | Odroid XU3/XU4/HC1/HC2 (armv7l) (index=11)
- Kernel version | #1 SMP PREEMPT Thu Apr 5 12:46:33 UTC 2018
- Distro | stretch (index=4)
- Command | G_AGI par2 p7zip-full libffi-dev libssl-dev
- Exit code | 100
- Software title | DietPi-Software
#### Steps to reproduce:
<!-- Explain how to reproduce the issue -->
1. Error found when updating to 6.23.3; and also when running: dietpi-software reinstall 139
2. ...
#### Expected behaviour:
<!-- What SHOULD be happening? -->
- Updating SABnzbd to latest version
#### Actual behaviour:
<!-- What IS happening? -->
- Exits with error
#### Extra details:
<!-- Please post any extra details that might help solve the issue -->
- ...
#### Additional logs:
```
Log file contents:
E: Unable to correct problems, you have held broken packages.
```
| 2.0 | APT | Error while reinstalling SABnzbd pre-reqs - #### Details:
- Date | Tue 14 May 14:16:29 AEST 2019
- Bug report | N/A
- DietPi version | v6.23.3 (MichaIng/master)
- Img creator | DietPi Core Team
- Pre-image | Meveric
- SBC device | Odroid XU3/XU4/HC1/HC2 (armv7l) (index=11)
- Kernel version | #1 SMP PREEMPT Thu Apr 5 12:46:33 UTC 2018
- Distro | stretch (index=4)
- Command | G_AGI par2 p7zip-full libffi-dev libssl-dev
- Exit code | 100
- Software title | DietPi-Software
#### Steps to reproduce:
<!-- Explain how to reproduce the issue -->
1. Error found when updating to 6.23.3; and also when running: dietpi-software reinstall 139
2. ...
#### Expected behaviour:
<!-- What SHOULD be happening? -->
- Updating SABnzbd to latest version
#### Actual behaviour:
<!-- What IS happening? -->
- Exits with error
#### Extra details:
<!-- Please post any extra details that might help solve the issue -->
- ...
#### Additional logs:
```
Log file contents:
E: Unable to correct problems, you have held broken packages.
```
| test | apt error while reinstalling sabnzbd pre reqs details date tue may aest bug report n a dietpi version michaing master img creator dietpi core team pre image meveric sbc device odroid index kernel version smp preempt thu apr utc distro stretch index command g agi full libffi dev libssl dev exit code software title dietpi software steps to reproduce error found when updating to and also when running dietpi software reinstall expected behaviour updating sabnzbd to latest version actual behaviour exits with error extra details additional logs log file contents e unable to correct problems you have held broken packages | 1 |
143,253 | 5,512,563,185 | IssuesEvent | 2017-03-17 09:48:43 | CS2103JAN2017-T11-B2/main | https://api.github.com/repos/CS2103JAN2017-T11-B2/main | closed | Add 'undo' command to undo most recent modifying action | priority.medium status.complete type.task | Give user ability to run 'undo', which undoes the effects of the last command that modified the todo list. This includes add, delete, and edit commands. | 1.0 | Add 'undo' command to undo most recent modifying action - Give user ability to run 'undo', which undoes the effects of the last command that modified the todo list. This includes add, delete, and edit commands. | non_test | add undo command to undo most recent modifying action give user ability to run undo which undoes the effects of the last command that modified the todo list this includes add delete and edit commands | 0 |
249,586 | 21,178,723,218 | IssuesEvent | 2022-04-08 05:01:34 | stores-cedcommerce/Internal--Shaka-Store-Built-Redesign---12-April22 | https://api.github.com/repos/stores-cedcommerce/Internal--Shaka-Store-Built-Redesign---12-April22 | closed | product page, the quantity input field is coming blank. | Product page Ready to test fixed Desktop | **Actual result:**
1: product page, the quantity input field is coming blank.
2: The border is coming when we click on the input field then the border is coming it .( suggestion )


**Expected result:**
The empty input field is coming when we deleting the quantity. | 1.0 | product page, the quantity input field is coming blank. - **Actual result:**
1: product page, the quantity input field is coming blank.
2: The border is coming when we click on the input field then the border is coming it .( suggestion )


**Expected result:**
The empty input field is coming when we deleting the quantity. | test | product page the quantity input field is coming blank actual result product page the quantity input field is coming blank the border is coming when we click on the input field then the border is coming it suggestion expected result the empty input field is coming when we deleting the quantity | 1 |
21,236 | 3,875,683,488 | IssuesEvent | 2016-04-12 02:45:35 | rancher/os | https://api.github.com/repos/rancher/os | closed | ros os upgrade does not count on local image | status/to-test | I would like to test upgrade. So I build an image and which tagged as rancher/os:v0.4.4-dev. I load this image into system-docker as below
<pre>
[rancher@rancher ~]$ sudo system-docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
rancher/os v0.4.4-dev df84018a6caa 30 minutes ago 193.6 MB
rancher/os v0.4.3 eed7c8ab50fd 3 days ago 193.1 MB
rancher/os-preload v0.4.3 983c005fe53f 3 days ago 25.65 MB
rancher/os-console v0.4.3 d9b2845438df 3 days ago 25.66 MB
rancher/os-udev v0.4.3 8dc9eee7501f 2 weeks ago 25.65 MB
rancher/os-syslog v0.4.3 987960440665 2 weeks ago 25.65 MB
rancher/os-statescript v0.4.3 24355446800e 2 weeks ago 25.65 MB
rancher/os-state v0.4.3 0fe21afc3049 2 weeks ago 25.65 MB
rancher/os-ntp v0.4.3 3e2d57d4ae21 2 weeks ago 25.65 MB
rancher/os-network v0.4.3 5288f2eb944e 2 weeks ago 25.65 MB
rancher/os-docker v0.4.3 6a4e2f959df2 2 weeks ago 25.65 MB
rancher/os-cloudinit v0.4.3 5c47e775e016 2 weeks ago 25.65 MB
rancher/os-autoformat v0.4.3 00182a66713c 2 weeks ago 25.65 MB
rancher/os-acpid v0.4.3 0fa901101944 2 weeks ago 25.65 MB
</pre>
I then run upgrade:
<pre>
[rancher@rancher ~]$ sudo ros os upgrade -i rancher/os:v0.4.4-dev
INFO[0000] Project [once]: Starting project
INFO[0000] [0/1] [os-upgrade]: Starting
INFO[0000] Rebuilding os-upgrade
INFO[0000] [1/1] [os-upgrade]: Started
INFO[0000] Project [once]: Project started
Pulling repository docker.io/rancher/os
ERRO[0008] Failed to pull image rancher/os:v0.4.4-dev: Tag v0.4.4-dev not found in repository docker.io/rancher/os
FATA[0008] Tag v0.4.4-dev not found in repository docker.io/rancher/os
</pre>
I could not say this is a bug. But I really want to see upgrade command respect local images, which should great convenient for testing/developing purpose, what do you think?
| 1.0 | ros os upgrade does not count on local image - I would like to test upgrade. So I build an image and which tagged as rancher/os:v0.4.4-dev. I load this image into system-docker as below
<pre>
[rancher@rancher ~]$ sudo system-docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
rancher/os v0.4.4-dev df84018a6caa 30 minutes ago 193.6 MB
rancher/os v0.4.3 eed7c8ab50fd 3 days ago 193.1 MB
rancher/os-preload v0.4.3 983c005fe53f 3 days ago 25.65 MB
rancher/os-console v0.4.3 d9b2845438df 3 days ago 25.66 MB
rancher/os-udev v0.4.3 8dc9eee7501f 2 weeks ago 25.65 MB
rancher/os-syslog v0.4.3 987960440665 2 weeks ago 25.65 MB
rancher/os-statescript v0.4.3 24355446800e 2 weeks ago 25.65 MB
rancher/os-state v0.4.3 0fe21afc3049 2 weeks ago 25.65 MB
rancher/os-ntp v0.4.3 3e2d57d4ae21 2 weeks ago 25.65 MB
rancher/os-network v0.4.3 5288f2eb944e 2 weeks ago 25.65 MB
rancher/os-docker v0.4.3 6a4e2f959df2 2 weeks ago 25.65 MB
rancher/os-cloudinit v0.4.3 5c47e775e016 2 weeks ago 25.65 MB
rancher/os-autoformat v0.4.3 00182a66713c 2 weeks ago 25.65 MB
rancher/os-acpid v0.4.3 0fa901101944 2 weeks ago 25.65 MB
</pre>
I then run upgrade:
<pre>
[rancher@rancher ~]$ sudo ros os upgrade -i rancher/os:v0.4.4-dev
INFO[0000] Project [once]: Starting project
INFO[0000] [0/1] [os-upgrade]: Starting
INFO[0000] Rebuilding os-upgrade
INFO[0000] [1/1] [os-upgrade]: Started
INFO[0000] Project [once]: Project started
Pulling repository docker.io/rancher/os
ERRO[0008] Failed to pull image rancher/os:v0.4.4-dev: Tag v0.4.4-dev not found in repository docker.io/rancher/os
FATA[0008] Tag v0.4.4-dev not found in repository docker.io/rancher/os
</pre>
I could not say this is a bug. But I really want to see upgrade command respect local images, which should great convenient for testing/developing purpose, what do you think?
| test | ros os upgrade does not count on local image i would like to test upgrade so i build an image and which tagged as rancher os dev i load this image into system docker as below sudo system docker images repository tag image id created size rancher os dev minutes ago mb rancher os days ago mb rancher os preload days ago mb rancher os console days ago mb rancher os udev weeks ago mb rancher os syslog weeks ago mb rancher os statescript weeks ago mb rancher os state weeks ago mb rancher os ntp weeks ago mb rancher os network weeks ago mb rancher os docker weeks ago mb rancher os cloudinit weeks ago mb rancher os autoformat weeks ago mb rancher os acpid weeks ago mb i then run upgrade sudo ros os upgrade i rancher os dev info project starting project info starting info rebuilding os upgrade info started info project project started pulling repository docker io rancher os erro failed to pull image rancher os dev tag dev not found in repository docker io rancher os fata tag dev not found in repository docker io rancher os i could not say this is a bug but i really want to see upgrade command respect local images which should great convenient for testing developing purpose what do you think | 1 |
244,809 | 20,718,138,474 | IssuesEvent | 2022-03-13 00:26:55 | fortran-lang/minpack | https://api.github.com/repos/fortran-lang/minpack | closed | Modernize examples | tests refactoring | The current tests are not actually testing anything other than running the examples. They provide no way to check the results other than by manual inspection, which makes them not reliable as regression tests.
Required steps:
- split the objective functions in separate callbacks rather than one big select case via a common block variable
- have a resource module to hold the objective functions and initializers to avoid code duplication
- actually test the outcome of the examples is correct up to a given tolerance | 1.0 | Modernize examples - The current tests are not actually testing anything other than running the examples. They provide no way to check the results other than by manual inspection, which makes them not reliable as regression tests.
Required steps:
- split the objective functions in separate callbacks rather than one big select case via a common block variable
- have a resource module to hold the objective functions and initializers to avoid code duplication
- actually test the outcome of the examples is correct up to a given tolerance | test | modernize examples the current tests are not actually testing anything other than running the examples they provide no way to check the results other than by manual inspection which makes them not reliable as regression tests required steps split the objective functions in separate callbacks rather than one big select case via a common block variable have a resource module to hold the objective functions and initializers to avoid code duplication actually test the outcome of the examples is correct up to a given tolerance | 1 |
171,629 | 20,984,377,547 | IssuesEvent | 2022-03-29 00:22:11 | senditagile/yodub.com | https://api.github.com/repos/senditagile/yodub.com | opened | secret discovered - "data/template/config.json - f438d1c5d8aaa822fbe180a5c5d3a7ac0e175938a75c139aa97bc384b70ddd71" | security security-risk: low trufflehog | New Finding Alert
To Forever Suppress This Finding From Alerting add the SHA256 to suppressions-trufflehog3 file, to suppress all findings for this commit, add the commit hash instead. See https://github.com/netlify/security-netlify-trufflehog3#suppression_file_path
--Repo: senditagile/yodub.com
--Date: "2022-03-28T20:19:36-04:00"
--Path: "data/template/config.json"
--Branch: "main"
--Commit: "cbbba2c6fb589031f1fb22de292d2d1637e5d75b"
--Commit Message: "Removes babelrc"
--Line Number: "25"
--Severity: "LOW"
--Reason: "Generic Secret" - "generic.secret"
--String Discovered: "Secret\": \"2ff6331da9e53f9a91bcc991d38d550c85026714\""
--SHA256: f438d1c5d8aaa822fbe180a5c5d3a7ac0e175938a75c139aa97bc384b70ddd71
| True | secret discovered - "data/template/config.json - f438d1c5d8aaa822fbe180a5c5d3a7ac0e175938a75c139aa97bc384b70ddd71" - New Finding Alert
To Forever Suppress This Finding From Alerting add the SHA256 to suppressions-trufflehog3 file, to suppress all findings for this commit, add the commit hash instead. See https://github.com/netlify/security-netlify-trufflehog3#suppression_file_path
--Repo: senditagile/yodub.com
--Date: "2022-03-28T20:19:36-04:00"
--Path: "data/template/config.json"
--Branch: "main"
--Commit: "cbbba2c6fb589031f1fb22de292d2d1637e5d75b"
--Commit Message: "Removes babelrc"
--Line Number: "25"
--Severity: "LOW"
--Reason: "Generic Secret" - "generic.secret"
--String Discovered: "Secret\": \"2ff6331da9e53f9a91bcc991d38d550c85026714\""
--SHA256: f438d1c5d8aaa822fbe180a5c5d3a7ac0e175938a75c139aa97bc384b70ddd71
| non_test | secret discovered data template config json new finding alert to forever suppress this finding from alerting add the to suppressions file to suppress all findings for this commit add the commit hash instead see repo senditagile yodub com date path data template config json branch main commit commit message removes babelrc line number severity low reason generic secret generic secret string discovered secret | 0 |
68,509 | 21,675,997,916 | IssuesEvent | 2022-05-08 18:18:59 | SeleniumHQ/selenium | https://api.github.com/repos/SeleniumHQ/selenium | opened | [🐛 Bug]: | I-defect needs-triaging | ### What happened?
I'm noticing flaky session not created issue in my personal test framework. I have a Selenium Grid running with the help of docker-compose. I run my tests on gitlab CI.
Mostly it works fine but suddenly starts to fail. Sometime even a small change NOT related to selenium Gird in my code creates this issue.
To demostrate this - I'm sharing my Gitlab Repo link and under CI - you can see that I tried running the same codebase (without any changes) and it failed first and then successfully ran. If you check the logs you will notice the below error ->
I have tried using selenium grid 4.1.3 and 4.1.4
```
org.openqa.selenium.SessionNotCreatedException: Could not start a new session. Possible causes are invalid address of the remote server or browser start-up failure.
Build info: version: '4.1.3', revision: '7b1ebf28ef'
System info: host: '5a85d554d8f3', ip: '172.19.0.5', os.name: 'Linux', os.arch: 'amd64', os.version: '5.4.109+', java.version: '1.8.0_212'
Driver info: org.openqa.selenium.remote.RemoteWebDriver
Command: [null, newSession {capabilities=[Capabilities {browserName: chrome, goog:chromeOptions: {args: [], extensions: []}}], desiredCapabilities=Capabilities {browserName: chrome, goog:chromeOptions: {args: [], extensions: []}, name: UI Regression}}]
Capabilities {}
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:585)
at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:248)
at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:164)
at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:146)
at functional.BaseTest.getRemoteDriver(BaseTest.java:113)
```

### How can we reproduce the issue?
```shell
Below is my project link -
https://gitlab.com/suryajit7/my-blog-topics
```
### Relevant log output
```shell
org.openqa.selenium.SessionNotCreatedException: Could not start a new session. Possible causes are invalid address of the remote server or browser start-up failure.
Build info: version: '4.1.3', revision: '7b1ebf28ef'
System info: host: '5a85d554d8f3', ip: '172.19.0.5', os.name: 'Linux', os.arch: 'amd64', os.version: '5.4.109+', java.version: '1.8.0_212'
Driver info: org.openqa.selenium.remote.RemoteWebDriver
Command: [null, newSession {capabilities=[Capabilities {browserName: chrome, goog:chromeOptions: {args: [], extensions: []}}], desiredCapabilities=Capabilities {browserName: chrome, goog:chromeOptions: {args: [], extensions: []}, name: UI Regression}}]
Capabilities {}
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:585)
at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:248)
at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:164)
at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:146)
at functional.BaseTest.getRemoteDriver(BaseTest.java:113)
at functional.BaseTest.beforeClassSetup(BaseTest.java:88)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.testng.internal.invokers.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:135)
at org.testng.internal.invokers.MethodInvocationHelper.invokeMethodConsideringTimeout(MethodInvocationHelper.java:65)
at org.testng.internal.invokers.ConfigInvoker.invokeConfigurationMethod(ConfigInvoker.java:381)
at org.testng.internal.invokers.ConfigInvoker.invokeConfigurations(ConfigInvoker.java:319)
at org.testng.internal.invokers.TestMethodWorker.invokeBeforeClassMethods(TestMethodWorker.java:178)
at org.testng.internal.invokers.TestMethodWorker.run(TestMethodWorker.java:122)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.UncheckedIOException: java.net.ConnectException: Connection refused: hub/172.19.0.2:4444
at org.openqa.selenium.remote.http.netty.NettyHttpHandler.makeCall(NettyHttpHandler.java:80)
at org.openqa.selenium.remote.http.AddSeleniumUserAgent.lambda$apply$0(AddSeleniumUserAgent.java:42)
at org.openqa.selenium.remote.http.Filter.lambda$andFinally$1(Filter.java:56)
at org.openqa.selenium.remote.http.netty.NettyHttpHandler.execute(NettyHttpHandler.java:51)
at org.openqa.selenium.remote.http.AddSeleniumUserAgent.lambda$apply$0(AddSeleniumUserAgent.java:42)
at org.openqa.selenium.remote.http.Filter.lambda$andFinally$1(Filter.java:56)
at org.openqa.selenium.remote.http.netty.NettyClient.execute(NettyClient.java:124)
at org.openqa.selenium.remote.tracing.TracedHttpClient.execute(TracedHttpClient.java:55)
at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:102)
at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:84)
at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:62)
at org.openqa.selenium.remote.HttpCommandExecutor.execute(HttpCommandExecutor.java:156)
at org.openqa.selenium.remote.TracedCommandExecutor.execute(TracedCommandExecutor.java:51)
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:567)
... 18 more
Caused by: java.net.ConnectException: Connection refused: hub/172.19.0.2:4444
at org.asynchttpclient.netty.channel.NettyConnectListener.onFailure(NettyConnectListener.java:179)
at org.asynchttpclient.netty.channel.NettyChannelConnector$1.onFailure(NettyChannelConnector.java:108)
at org.asynchttpclient.netty.SimpleChannelFutureListener.operationComplete(SimpleChannelFutureListener.java:28)
at org.asynchttpclient.netty.SimpleChannelFutureListener.operationComplete(SimpleChannelFutureListener.java:20)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:321)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:337)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:710)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: hub/172.19.0.2:4444
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:710)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
```
### Operating System
Windows 10
### Selenium version
Selenium Grid with nodechrome = 4.1.3
### What are the browser(s) and version(s) where you see this issue?
Chrome latest version 100
### What are the browser driver(s) and version(s) where you see this issue?
Selenium Grid with nodechrome = 4.1.3
### Are you using Selenium Grid?
Yes - Selenium Grid 4.1.3 with docker-compose | 1.0 | [🐛 Bug]: - ### What happened?
I'm noticing flaky session not created issue in my personal test framework. I have a Selenium Grid running with the help of docker-compose. I run my tests on gitlab CI.
Mostly it works fine but suddenly starts to fail. Sometime even a small change NOT related to selenium Gird in my code creates this issue.
To demostrate this - I'm sharing my Gitlab Repo link and under CI - you can see that I tried running the same codebase (without any changes) and it failed first and then successfully ran. If you check the logs you will notice the below error ->
I have tried using selenium grid 4.1.3 and 4.1.4
```
org.openqa.selenium.SessionNotCreatedException: Could not start a new session. Possible causes are invalid address of the remote server or browser start-up failure.
Build info: version: '4.1.3', revision: '7b1ebf28ef'
System info: host: '5a85d554d8f3', ip: '172.19.0.5', os.name: 'Linux', os.arch: 'amd64', os.version: '5.4.109+', java.version: '1.8.0_212'
Driver info: org.openqa.selenium.remote.RemoteWebDriver
Command: [null, newSession {capabilities=[Capabilities {browserName: chrome, goog:chromeOptions: {args: [], extensions: []}}], desiredCapabilities=Capabilities {browserName: chrome, goog:chromeOptions: {args: [], extensions: []}, name: UI Regression}}]
Capabilities {}
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:585)
at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:248)
at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:164)
at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:146)
at functional.BaseTest.getRemoteDriver(BaseTest.java:113)
```

### How can we reproduce the issue?
```shell
Below is my project link -
https://gitlab.com/suryajit7/my-blog-topics
```
### Relevant log output
```shell
org.openqa.selenium.SessionNotCreatedException: Could not start a new session. Possible causes are invalid address of the remote server or browser start-up failure.
Build info: version: '4.1.3', revision: '7b1ebf28ef'
System info: host: '5a85d554d8f3', ip: '172.19.0.5', os.name: 'Linux', os.arch: 'amd64', os.version: '5.4.109+', java.version: '1.8.0_212'
Driver info: org.openqa.selenium.remote.RemoteWebDriver
Command: [null, newSession {capabilities=[Capabilities {browserName: chrome, goog:chromeOptions: {args: [], extensions: []}}], desiredCapabilities=Capabilities {browserName: chrome, goog:chromeOptions: {args: [], extensions: []}, name: UI Regression}}]
Capabilities {}
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:585)
at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:248)
at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:164)
at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:146)
at functional.BaseTest.getRemoteDriver(BaseTest.java:113)
at functional.BaseTest.beforeClassSetup(BaseTest.java:88)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.testng.internal.invokers.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:135)
at org.testng.internal.invokers.MethodInvocationHelper.invokeMethodConsideringTimeout(MethodInvocationHelper.java:65)
at org.testng.internal.invokers.ConfigInvoker.invokeConfigurationMethod(ConfigInvoker.java:381)
at org.testng.internal.invokers.ConfigInvoker.invokeConfigurations(ConfigInvoker.java:319)
at org.testng.internal.invokers.TestMethodWorker.invokeBeforeClassMethods(TestMethodWorker.java:178)
at org.testng.internal.invokers.TestMethodWorker.run(TestMethodWorker.java:122)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.UncheckedIOException: java.net.ConnectException: Connection refused: hub/172.19.0.2:4444
at org.openqa.selenium.remote.http.netty.NettyHttpHandler.makeCall(NettyHttpHandler.java:80)
at org.openqa.selenium.remote.http.AddSeleniumUserAgent.lambda$apply$0(AddSeleniumUserAgent.java:42)
at org.openqa.selenium.remote.http.Filter.lambda$andFinally$1(Filter.java:56)
at org.openqa.selenium.remote.http.netty.NettyHttpHandler.execute(NettyHttpHandler.java:51)
at org.openqa.selenium.remote.http.AddSeleniumUserAgent.lambda$apply$0(AddSeleniumUserAgent.java:42)
at org.openqa.selenium.remote.http.Filter.lambda$andFinally$1(Filter.java:56)
at org.openqa.selenium.remote.http.netty.NettyClient.execute(NettyClient.java:124)
at org.openqa.selenium.remote.tracing.TracedHttpClient.execute(TracedHttpClient.java:55)
at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:102)
at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:84)
at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:62)
at org.openqa.selenium.remote.HttpCommandExecutor.execute(HttpCommandExecutor.java:156)
at org.openqa.selenium.remote.TracedCommandExecutor.execute(TracedCommandExecutor.java:51)
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:567)
... 18 more
Caused by: java.net.ConnectException: Connection refused: hub/172.19.0.2:4444
at org.asynchttpclient.netty.channel.NettyConnectListener.onFailure(NettyConnectListener.java:179)
at org.asynchttpclient.netty.channel.NettyChannelConnector$1.onFailure(NettyChannelConnector.java:108)
at org.asynchttpclient.netty.SimpleChannelFutureListener.operationComplete(SimpleChannelFutureListener.java:28)
at org.asynchttpclient.netty.SimpleChannelFutureListener.operationComplete(SimpleChannelFutureListener.java:20)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:321)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:337)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:710)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: hub/172.19.0.2:4444
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:710)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
```
### Operating System
Windows 10
### Selenium version
Selenium Grid with nodechrome = 4.1.3
### What are the browser(s) and version(s) where you see this issue?
Chrome latest version 100
### What are the browser driver(s) and version(s) where you see this issue?
Selenium Grid with nodechrome = 4.1.3
### Are you using Selenium Grid?
Yes - Selenium Grid 4.1.3 with docker-compose | non_test | what happened i m noticing flaky session not created issue in my personal test framework i have a selenium grid running with the help of docker compose i run my tests on gitlab ci mostly it works fine but suddenly starts to fail sometime even a small change not related to selenium gird in my code creates this issue to demostrate this i m sharing my gitlab repo link and under ci you can see that i tried running the same codebase without any changes and it failed first and then successfully ran if you check the logs you will notice the below error i have tried using selenium grid and org openqa selenium sessionnotcreatedexception could not start a new session possible causes are invalid address of the remote server or browser start up failure build info version revision system info host ip os name linux os arch os version java version driver info org openqa selenium remote remotewebdriver command extensions desiredcapabilities capabilities browsername chrome goog chromeoptions args extensions name ui regression capabilities at org openqa selenium remote remotewebdriver execute remotewebdriver java at org openqa selenium remote remotewebdriver startsession remotewebdriver java at org openqa selenium remote remotewebdriver remotewebdriver java at org openqa selenium remote remotewebdriver remotewebdriver java at functional basetest getremotedriver basetest java how can we reproduce the issue shell below is my project link relevant log output shell org openqa selenium sessionnotcreatedexception could not start a new session possible causes are invalid address of the remote server or browser start up failure build info version revision system info host ip os name linux os arch os version java version driver info org openqa selenium remote remotewebdriver command extensions desiredcapabilities capabilities browsername chrome goog chromeoptions args extensions name ui regression capabilities at org openqa selenium remote remotewebdriver execute remotewebdriver java at org openqa selenium remote remotewebdriver startsession remotewebdriver java at org openqa selenium remote remotewebdriver remotewebdriver java at org openqa selenium remote remotewebdriver remotewebdriver java at functional basetest getremotedriver basetest java at functional basetest beforeclasssetup basetest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org testng internal invokers methodinvocationhelper invokemethod methodinvocationhelper java at org testng internal invokers methodinvocationhelper invokemethodconsideringtimeout methodinvocationhelper java at org testng internal invokers configinvoker invokeconfigurationmethod configinvoker java at org testng internal invokers configinvoker invokeconfigurations configinvoker java at org testng internal invokers testmethodworker invokebeforeclassmethods testmethodworker java at org testng internal invokers testmethodworker run testmethodworker java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java io uncheckedioexception java net connectexception connection refused hub at org openqa selenium remote http netty nettyhttphandler makecall nettyhttphandler java at org openqa selenium remote http addseleniumuseragent lambda apply addseleniumuseragent java at org openqa selenium remote http filter lambda andfinally filter java at org openqa selenium remote http netty nettyhttphandler execute nettyhttphandler java at org openqa selenium remote http addseleniumuseragent lambda apply addseleniumuseragent java at org openqa selenium remote http filter lambda andfinally filter java at org openqa selenium remote http netty nettyclient execute nettyclient java at org openqa selenium remote tracing tracedhttpclient execute tracedhttpclient java at org openqa selenium remote protocolhandshake createsession protocolhandshake java at org openqa selenium remote protocolhandshake createsession protocolhandshake java at org openqa selenium remote protocolhandshake createsession protocolhandshake java at org openqa selenium remote httpcommandexecutor execute httpcommandexecutor java at org openqa selenium remote tracedcommandexecutor execute tracedcommandexecutor java at org openqa selenium remote remotewebdriver execute remotewebdriver java more caused by java net connectexception connection refused hub at org asynchttpclient netty channel nettyconnectlistener onfailure nettyconnectlistener java at org asynchttpclient netty channel nettychannelconnector onfailure nettychannelconnector java at org asynchttpclient netty simplechannelfuturelistener operationcomplete simplechannelfuturelistener java at org asynchttpclient netty simplechannelfuturelistener operationcomplete simplechannelfuturelistener java at io netty util concurrent defaultpromise defaultpromise java at io netty util concurrent defaultpromise defaultpromise java at io netty util concurrent defaultpromise notifylistenersnow defaultpromise java at io netty util concurrent defaultpromise notifylisteners defaultpromise java at io netty util concurrent defaultpromise defaultpromise java at io netty util concurrent defaultpromise defaultpromise java at io netty util concurrent defaultpromise tryfailure defaultpromise java at io netty channel nio abstractniochannel abstractniounsafe fulfillconnectpromise abstractniochannel java at io netty channel nio abstractniochannel abstractniounsafe finishconnect abstractniochannel java at io netty channel nio nioeventloop processselectedkey nioeventloop java at io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java at io netty channel nio nioeventloop processselectedkeys nioeventloop java at io netty channel nio nioeventloop run nioeventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at io netty util internal threadexecutormap run threadexecutormap java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java more caused by io netty channel abstractchannel annotatedconnectexception connection refused hub caused by java net connectexception connection refused at sun nio ch socketchannelimpl checkconnect native method at sun nio ch socketchannelimpl finishconnect socketchannelimpl java at io netty channel socket nio niosocketchannel dofinishconnect niosocketchannel java at io netty channel nio abstractniochannel abstractniounsafe finishconnect abstractniochannel java at io netty channel nio nioeventloop processselectedkey nioeventloop java at io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java at io netty channel nio nioeventloop processselectedkeys nioeventloop java at io netty channel nio nioeventloop run nioeventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at io netty util internal threadexecutormap run threadexecutormap java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java at java lang thread run thread java operating system windows selenium version selenium grid with nodechrome what are the browser s and version s where you see this issue chrome latest version what are the browser driver s and version s where you see this issue selenium grid with nodechrome are you using selenium grid yes selenium grid with docker compose | 0 |
254,618 | 21,800,780,603 | IssuesEvent | 2022-05-16 04:48:55 | stores-cedcommerce/Deluxe-Hotel-LCC-Store-Redesign | https://api.github.com/repos/stores-cedcommerce/Deluxe-Hotel-LCC-Store-Redesign | closed | The banner text is cropping. | Tab Mobile Ready to test fixed homepage | **Actual result:**
The banner text is cropping.



**Expected result:**
The text of the banner should not be cropped.
| 1.0 | The banner text is cropping. - **Actual result:**
The banner text is cropping.



**Expected result:**
The text of the banner should not be cropped.
| test | the banner text is cropping actual result the banner text is cropping expected result the text of the banner should not be cropped | 1 |
465,159 | 13,357,861,078 | IssuesEvent | 2020-08-31 10:35:22 | input-output-hk/ouroboros-network | https://api.github.com/repos/input-output-hk/ouroboros-network | closed | Incorrect trimming of the Shelley Ledger View History | bug consensus priority high shelley ledger integration | The `LedgerViewHistory` maintained by the Shelley `LedgerState` is trimmed after a snapshot of the old ledger view history is made.
Call stack:
https://github.com/input-output-hk/ouroboros-network/blob/0934a3cb1e24ecbfcbab4e40522d99ffcd60feaf/ouroboros-consensus-shelley/src/Ouroboros/Consensus/Shelley/Ledger/History.hs#L70
https://github.com/input-output-hk/ouroboros-network/blob/0934a3cb1e24ecbfcbab4e40522d99ffcd60feaf/ouroboros-consensus/src/Ouroboros/Consensus/Ledger/History.hs#L103
This last module, `Ouroboros.Consensus.Ledger.History` is reused by the Byron ledger to maintain a history of the delegation state (= the Byron ledger view).
In `Ouroboros.Consensus.Ledger.History.trim`, snapshots older than "the earliest slot we might roll back to" are trimmed. This "earliest slot ..." is defined as "now - `2k`". This is correct for Byron, but not for Shelley! For Shelley, it should be "now - `3k/f`". This means that the effective rollback in the Shelley era is much shorter than it should be. | 1.0 | Incorrect trimming of the Shelley Ledger View History - The `LedgerViewHistory` maintained by the Shelley `LedgerState` is trimmed after a snapshot of the old ledger view history is made.
Call stack:
https://github.com/input-output-hk/ouroboros-network/blob/0934a3cb1e24ecbfcbab4e40522d99ffcd60feaf/ouroboros-consensus-shelley/src/Ouroboros/Consensus/Shelley/Ledger/History.hs#L70
https://github.com/input-output-hk/ouroboros-network/blob/0934a3cb1e24ecbfcbab4e40522d99ffcd60feaf/ouroboros-consensus/src/Ouroboros/Consensus/Ledger/History.hs#L103
This last module, `Ouroboros.Consensus.Ledger.History` is reused by the Byron ledger to maintain a history of the delegation state (= the Byron ledger view).
In `Ouroboros.Consensus.Ledger.History.trim`, snapshots older than "the earliest slot we might roll back to" are trimmed. This "earliest slot ..." is defined as "now - `2k`". This is correct for Byron, but not for Shelley! For Shelley, it should be "now - `3k/f`". This means that the effective rollback in the Shelley era is much shorter than it should be. | non_test | incorrect trimming of the shelley ledger view history the ledgerviewhistory maintained by the shelley ledgerstate is trimmed after a snapshot of the old ledger view history is made call stack this last module ouroboros consensus ledger history is reused by the byron ledger to maintain a history of the delegation state the byron ledger view in ouroboros consensus ledger history trim snapshots older than the earliest slot we might roll back to are trimmed this earliest slot is defined as now this is correct for byron but not for shelley for shelley it should be now f this means that the effective rollback in the shelley era is much shorter than it should be | 0 |
221,232 | 17,314,524,094 | IssuesEvent | 2021-07-27 02:58:03 | microsoft/AzureStorageExplorer | https://api.github.com/repos/microsoft/AzureStorageExplorer | opened | Only download one folder's blobs when trying to download all blobs under flat list mode | :gear: blobs 🧪 testing | **Storage Explorer Version**: 1.21.0-dev
**Build Number**: 20210727.2
**Branch**: main
**Platform/OS**: Windows 10/ Linux Ubuntu 20.04/ MacOS Big Sur 11.4
**Architecture**: ia32/x64
**How Found**: Exploratory testing
**Regression From**: Not a regression
## Steps to Reproduce ##
1. Expand one Non-ADLS Gen2 storage account -> Blob Containers.
2. Create a blob container -> Create a new folder then upload blobs to it.
3. Back to the root -> Create another folder -> Upload blobs to it.
4. Click 'Show View Options' -> Select 'Flat'.
5. Hide view options panel -> Select all items -> Click 'Download'.
6. Check whether all the blobs are downloaded.
## Expected Experience ##
All blobs are downloaded.
## Actual Experience ##
Only the blobs under one folder are downloaded. | 1.0 | Only download one folder's blobs when trying to download all blobs under flat list mode - **Storage Explorer Version**: 1.21.0-dev
**Build Number**: 20210727.2
**Branch**: main
**Platform/OS**: Windows 10/ Linux Ubuntu 20.04/ MacOS Big Sur 11.4
**Architecture**: ia32/x64
**How Found**: Exploratory testing
**Regression From**: Not a regression
## Steps to Reproduce ##
1. Expand one Non-ADLS Gen2 storage account -> Blob Containers.
2. Create a blob container -> Create a new folder then upload blobs to it.
3. Back to the root -> Create another folder -> Upload blobs to it.
4. Click 'Show View Options' -> Select 'Flat'.
5. Hide view options panel -> Select all items -> Click 'Download'.
6. Check whether all the blobs are downloaded.
## Expected Experience ##
All blobs are downloaded.
## Actual Experience ##
Only the blobs under one folder are downloaded. | test | only download one folder s blobs when trying to download all blobs under flat list mode storage explorer version dev build number branch main platform os windows linux ubuntu macos big sur architecture how found exploratory testing regression from not a regression steps to reproduce expand one non adls storage account blob containers create a blob container create a new folder then upload blobs to it back to the root create another folder upload blobs to it click show view options select flat hide view options panel select all items click download check whether all the blobs are downloaded expected experience all blobs are downloaded actual experience only the blobs under one folder are downloaded | 1 |
351,163 | 31,986,894,019 | IssuesEvent | 2023-09-21 00:35:39 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | opened | Fix tensor.test_torch___lt__ | PyTorch Frontend Sub Task Failing Test | | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6254778160/job/16982944534"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/6254778160/job/16982944534"><img src=https://img.shields.io/badge/-failure-red></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6254778160/job/16982944534"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/6254778160/job/16982944534"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6254778160/job/16982944534"><img src=https://img.shields.io/badge/-success-success></a>
| 1.0 | Fix tensor.test_torch___lt__ - | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6254778160/job/16982944534"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/6254778160/job/16982944534"><img src=https://img.shields.io/badge/-failure-red></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6254778160/job/16982944534"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/6254778160/job/16982944534"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6254778160/job/16982944534"><img src=https://img.shields.io/badge/-success-success></a>
| test | fix tensor test torch lt numpy a href src jax a href src tensorflow a href src torch a href src paddle a href src | 1 |
49,301 | 26,090,480,455 | IssuesEvent | 2022-12-26 10:37:20 | appsmithorg/appsmith | https://api.github.com/repos/appsmithorg/appsmith | closed | [Task]: POC dataTree split | Performance Task Evaluated Value | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### SubTasks
main issue https://github.com/appsmithorg/appsmith/issues/11351 | True | [Task]: POC dataTree split - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### SubTasks
main issue https://github.com/appsmithorg/appsmith/issues/11351 | non_test | poc datatree split is there an existing issue for this i have searched the existing issues subtasks main issue | 0 |
613,431 | 19,090,136,495 | IssuesEvent | 2021-11-29 11:08:35 | SAP/xsk | https://api.github.com/repos/SAP/xsk | closed | [Migration] Enable selection of multiple Delivery Units | core priority-medium usability tooling | ### Background
Complete XS classic applications might consist of multiple delivery units (DUs) so the migration wizard should allow for the selection of multiple DUs at once.
### Target
In the migration wizard, when the list of DUs is show, change the view so that the user can select multiple DUs at the same time - ie dropdown with multiselect.
All DUs must be migrated in the same workspace.
**Failure behavior**
If migration of 1 DU fails it should be removed and the process should continue to the next one | 1.0 | [Migration] Enable selection of multiple Delivery Units - ### Background
Complete XS classic applications might consist of multiple delivery units (DUs) so the migration wizard should allow for the selection of multiple DUs at once.
### Target
In the migration wizard, when the list of DUs is show, change the view so that the user can select multiple DUs at the same time - ie dropdown with multiselect.
All DUs must be migrated in the same workspace.
**Failure behavior**
If migration of 1 DU fails it should be removed and the process should continue to the next one | non_test | enable selection of multiple delivery units background complete xs classic applications might consist of multiple delivery units dus so the migration wizard should allow for the selection of multiple dus at once target in the migration wizard when the list of dus is show change the view so that the user can select multiple dus at the same time ie dropdown with multiselect all dus must be migrated in the same workspace failure behavior if migration of du fails it should be removed and the process should continue to the next one | 0 |
29,732 | 4,535,329,469 | IssuesEvent | 2016-09-08 17:00:04 | mozilla/fxa-content-server | https://api.github.com/repos/mozilla/fxa-content-server | opened | force_auth error hidden as soon as it's displayed | tests ❤❤❤ | 
This is causing test bustage on latest. | 1.0 | force_auth error hidden as soon as it's displayed - 
This is causing test bustage on latest. | test | force auth error hidden as soon as it s displayed this is causing test bustage on latest | 1 |
97,170 | 8,650,556,531 | IssuesEvent | 2018-11-26 22:59:16 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | Not able to launch kubectl shell from UI , gets stuck in "connecting.." | kind/bug-qa status/resolved status/to-test version/2.0 | Rancher server version - v2.1.2-rc13
Steps to reproduce the problem:
Create a 1 node DO cluster.
Try to launch kubectl shell from UI using the "launch kubectl" option.
Kubectl shell gets stuck in "connecting.."
Note - This issue is not seen when testing with v2.1.2-rc12 | 1.0 | Not able to launch kubectl shell from UI , gets stuck in "connecting.." - Rancher server version - v2.1.2-rc13
Steps to reproduce the problem:
Create a 1 node DO cluster.
Try to launch kubectl shell from UI using the "launch kubectl" option.
Kubectl shell gets stuck in "connecting.."
Note - This issue is not seen when testing with v2.1.2-rc12 | test | not able to launch kubectl shell from ui gets stuck in connecting rancher server version steps to reproduce the problem create a node do cluster try to launch kubectl shell from ui using the launch kubectl option kubectl shell gets stuck in connecting note this issue is not seen when testing with | 1 |
38,043 | 5,164,904,938 | IssuesEvent | 2017-01-17 12:01:58 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | CardinalityEstimatorAdvancedTest.testCardinalityEstimatorSpawnNodeInParallel | Team: Core Type: Test-Failure | ```
java.lang.AssertionError: CountDownLatch failed to complete within 120 seconds , count left: 1
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at com.hazelcast.test.HazelcastTestSupport.assertOpenEventually(HazelcastTestSupport.java:812)
at com.hazelcast.test.HazelcastTestSupport.assertOpenEventually(HazelcastTestSupport.java:805)
at com.hazelcast.test.HazelcastTestSupport.assertOpenEventually(HazelcastTestSupport.java:797)
at com.hazelcast.cardinality.CardinalityEstimatorAdvancedTest.testCardinalityEstimatorSpawnNodeInParallel(CardinalityEstimatorAdvancedTest.java:104)
```
https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-OracleJDK1.6/com.hazelcast$hazelcast/1069/testReport/junit/com.hazelcast.cardinality/CardinalityEstimatorAdvancedTest/testCardinalityEstimatorSpawnNodeInParallel/
https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-OracleJDK8/com.hazelcast$hazelcast/963/testReport/junit/com.hazelcast.cardinality/CardinalityEstimatorAdvancedTest/testCardinalityEstimatorSpawnNodeInParallel/
| 1.0 | CardinalityEstimatorAdvancedTest.testCardinalityEstimatorSpawnNodeInParallel - ```
java.lang.AssertionError: CountDownLatch failed to complete within 120 seconds , count left: 1
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at com.hazelcast.test.HazelcastTestSupport.assertOpenEventually(HazelcastTestSupport.java:812)
at com.hazelcast.test.HazelcastTestSupport.assertOpenEventually(HazelcastTestSupport.java:805)
at com.hazelcast.test.HazelcastTestSupport.assertOpenEventually(HazelcastTestSupport.java:797)
at com.hazelcast.cardinality.CardinalityEstimatorAdvancedTest.testCardinalityEstimatorSpawnNodeInParallel(CardinalityEstimatorAdvancedTest.java:104)
```
https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-OracleJDK1.6/com.hazelcast$hazelcast/1069/testReport/junit/com.hazelcast.cardinality/CardinalityEstimatorAdvancedTest/testCardinalityEstimatorSpawnNodeInParallel/
https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-OracleJDK8/com.hazelcast$hazelcast/963/testReport/junit/com.hazelcast.cardinality/CardinalityEstimatorAdvancedTest/testCardinalityEstimatorSpawnNodeInParallel/
| test | cardinalityestimatoradvancedtest testcardinalityestimatorspawnnodeinparallel java lang assertionerror countdownlatch failed to complete within seconds count left at org junit assert fail assert java at org junit assert asserttrue assert java at com hazelcast test hazelcasttestsupport assertopeneventually hazelcasttestsupport java at com hazelcast test hazelcasttestsupport assertopeneventually hazelcasttestsupport java at com hazelcast test hazelcasttestsupport assertopeneventually hazelcasttestsupport java at com hazelcast cardinality cardinalityestimatoradvancedtest testcardinalityestimatorspawnnodeinparallel cardinalityestimatoradvancedtest java | 1 |
193,051 | 15,365,877,652 | IssuesEvent | 2021-03-02 00:25:44 | MicTott/FrozenPy | https://api.github.com/repos/MicTott/FrozenPy | closed | Need to add extinction example | documentation | Title. Need to complete documentation and examples so lab can transition easily | 1.0 | Need to add extinction example - Title. Need to complete documentation and examples so lab can transition easily | non_test | need to add extinction example title need to complete documentation and examples so lab can transition easily | 0 |
742,703 | 25,866,786,406 | IssuesEvent | 2022-12-13 21:39:59 | IBMa/equal-access | https://api.github.com/repos/IBMa/equal-access | closed | Clean up mdx template | Bug engine priority-3 (low) | The mdx template (accessibility-checker-engine/help/a_rule_help_template.mdx) has a number of spaces on empty lines, which are causing odd formatting glitches.
 | 1.0 | Clean up mdx template - The mdx template (accessibility-checker-engine/help/a_rule_help_template.mdx) has a number of spaces on empty lines, which are causing odd formatting glitches.
 | non_test | clean up mdx template the mdx template accessibility checker engine help a rule help template mdx has a number of spaces on empty lines which are causing odd formatting glitches | 0 |
344,111 | 10,340,036,500 | IssuesEvent | 2019-09-03 20:50:43 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | k_uptime_get_32() does not behave as documented | bug has-pr priority: high | It was recently pointed out on slack that `k_uptime_get_32()` does not return `The lower 32-bits of the elapsed time since the system booted, in milliseconds.`
It returns instead the number of milliseconds corresponding to the low 32 bits of the tick counter.
```
u32_t z_impl_k_uptime_get_32(void)
{
return __ticks_to_ms(z_tick_get_32());
}
```
The values are significantly different when you look at a 32-bit rollover of the tick clock:
```
At 10000 Hz ticks and 2^32 +/- 50:
t0 = 0x0000ffffffce = 4294967246 ticks => t0m = 429496724 ms
t1 = 0x000100000032 = 4294967346 ticks => t1m = 429496734 ms
t1-t0 = 100 ticks = 10 ms
and t1m-t0m = 10 ms
tt0 = 0x0000ffffffce = 4294967246 ticks => tt0m = 429496724 ms
tt1 = 0x000000000032 = 50 ticks => tt1m = 5 ms
tt1-tt0 = 100 ticks = 10 ms
but tt1m-tt1m = 3865470577 ms
```
This explains https://github.com/zephyrproject-rtos/zephyr/pull/17155#discussion_r309013463 which had claimed that the current implementation followed an algorithm that I noted was incorrect. Apparently it does. | 1.0 | k_uptime_get_32() does not behave as documented - It was recently pointed out on slack that `k_uptime_get_32()` does not return `The lower 32-bits of the elapsed time since the system booted, in milliseconds.`
It returns instead the number of milliseconds corresponding to the low 32 bits of the tick counter.
```
u32_t z_impl_k_uptime_get_32(void)
{
return __ticks_to_ms(z_tick_get_32());
}
```
The values are significantly different when you look at a 32-bit rollover of the tick clock:
```
At 10000 Hz ticks and 2^32 +/- 50:
t0 = 0x0000ffffffce = 4294967246 ticks => t0m = 429496724 ms
t1 = 0x000100000032 = 4294967346 ticks => t1m = 429496734 ms
t1-t0 = 100 ticks = 10 ms
and t1m-t0m = 10 ms
tt0 = 0x0000ffffffce = 4294967246 ticks => tt0m = 429496724 ms
tt1 = 0x000000000032 = 50 ticks => tt1m = 5 ms
tt1-tt0 = 100 ticks = 10 ms
but tt1m-tt1m = 3865470577 ms
```
This explains https://github.com/zephyrproject-rtos/zephyr/pull/17155#discussion_r309013463 which had claimed that the current implementation followed an algorithm that I noted was incorrect. Apparently it does. | non_test | k uptime get does not behave as documented it was recently pointed out on slack that k uptime get does not return the lower bits of the elapsed time since the system booted in milliseconds it returns instead the number of milliseconds corresponding to the low bits of the tick counter t z impl k uptime get void return ticks to ms z tick get the values are significantly different when you look at a bit rollover of the tick clock at hz ticks and ticks ms ticks ms ticks ms and ms ticks ms ticks ms ticks ms but ms this explains which had claimed that the current implementation followed an algorithm that i noted was incorrect apparently it does | 0 |
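The arithmetic in the Zephyr report above can be checked in isolation. Below is a minimal Python sketch (not Zephyr source; the 10000 Hz tick rate and the sample tick counts are taken from the issue text) contrasting the documented behaviour, the low 32 bits of the elapsed milliseconds, with the reported behaviour, the milliseconds derived from the low 32 bits of the tick counter:
```python
# Illustrative sketch only (not Zephyr source). It contrasts the documented
# behaviour of k_uptime_get_32() -- "the lower 32 bits of the elapsed time in
# milliseconds" -- with the reported behaviour -- "the milliseconds computed
# from the lower 32 bits of the tick counter" -- around a 32-bit tick rollover.
TICKS_PER_SEC = 10_000   # 10000 Hz tick rate, as in the issue
MASK32 = 0xFFFFFFFF

def ticks_to_ms(ticks: int) -> int:
    return ticks * 1000 // TICKS_PER_SEC

def documented(ticks: int) -> int:
    # low 32 bits of the full elapsed-milliseconds value
    return ticks_to_ms(ticks) & MASK32

def reported(ticks: int) -> int:
    # milliseconds derived from the low 32 bits of the tick counter
    return ticks_to_ms(ticks & MASK32)

t0, t1 = 2**32 - 50, 2**32 + 50                       # 100 ticks = 10 ms apart
print((documented(t1) - documented(t0)) & MASK32)     # 10
print((reported(t1) - reported(t0)) & MASK32)         # 3865470577
```
Across the 2^32-tick rollover the documented reading still advances by 10 ms, while the reported one jumps by 3865470577 ms, matching the numbers quoted in the issue.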
52,624 | 10,885,288,185 | IssuesEvent | 2019-11-18 10:06:01 | rapid-eth/rapid-adventures | https://api.github.com/repos/rapid-eth/rapid-adventures | opened | [Component/Page] QuestPagePrimary | code | The QuestPagePrimary component will manage several of the Quest Views: QuestSearch, QuestCreateModal, etc...
- [ ] <QuestSearch />
- [ ] <QuestCreateModal />
## Component(s)
### QuestSearch
- [ ] Display Quest Search
- [ ] Provide View Selection (Card/List)
### QuestCreateModal
- [ ] Use `react-portal-system`
- [ ] Render `<FormQuestCreate />` | 1.0 | [Component/Page] QuestPagePrimary - The QuestPagePrimary component will manage several of the Quest Views: QuestSearch, QuestCreateModal, etc...
- [ ] <QuestSearch />
- [ ] <QuestCreateModal />
## Component(s)
### QuestSearch
- [ ] Display Quest Search
- [ ] Provide View Selection (Card/List)
### QuestCreateModal
- [ ] Use `react-portal-system`
- [ ] Render `<FormQuestCreate />` | non_test | questpageprimary the questpageprimary component will manage several of the quest views questsearch questcreatemodal etc component s questsearch display quest search provide view selection card list questcreatemodal use react portal system render | 0 |
151,175 | 12,016,494,964 | IssuesEvent | 2020-04-10 16:13:30 | mathjax/MathJax | https://api.github.com/repos/mathjax/MathJax | closed | using "tex-...-full.js" still needs extensions being loaded and added ... | Feature Request Fixed Merged Test Needed v3 | The following minimal test code doesn't typeset the equations:
```
<!DOCTYPE html>
<head>
<meta charset="utf-8">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<meta name="viewport" content="width=device-width">
<title>MathJax v3 with TeX input and SVG output</title>
<script>
MathJax = {
tex: {
inlineMath: [['$', '$']],
tagFormat: {
number: function(n){
return String(n).replace(/0/g,"00");
}
}
},
svg: {fontCache: 'global'}
};
</script>
<script id="MathJax-script" async src="tex-svg-full.js"></script>
</head>
<body>
When $a \ne 0$, there are two solutions to $ax^2 + bx + c = 0$ and they are
$$x = {-b \pm \sqrt{b^2-4ac} \over 2a}.$$
</body>
</html>
```
The problem is resolved if one adds the extension, loads it inside the loader, and adds it as a package in the MathJax config setup; but isn't the "**...-full.js**" package expected to be a single-file solution? | 1.0 | using "tex-...-full.js" still needs extensions being loaded and added ... - The following minimal test code doesn't typeset the equations:
```
<!DOCTYPE html>
<head>
<meta charset="utf-8">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<meta name="viewport" content="width=device-width">
<title>MathJax v3 with TeX input and SVG output</title>
<script>
MathJax = {
tex: {
inlineMath: [['$', '$']],
tagFormat: {
number: function(n){
return String(n).replace(/0/g,"00");
}
}
},
svg: {fontCache: 'global'}
};
</script>
<script id="MathJax-script" async src="tex-svg-full.js"></script>
</head>
<body>
When $a \ne 0$, there are two solutions to $ax^2 + bx + c = 0$ and they are
$$x = {-b \pm \sqrt{b^2-4ac} \over 2a}.$$
</body>
</html>
```
The problem is resolved if one adds the extension, loads it inside the loader, and adds it as a package to the config setup of Mathjax; but isn't "**...-full.js**" package expected to be single-file solutions? | test | using tex full js still needs extensions being loaded and added the following minimal test code doesn t typeset the equations mathjax with tex input and svg output mathjax tex inlinemath tagformat number function n return string n replace g svg fontcache global when a ne there are two solutions to ax bx c and they are x b pm sqrt b over the problem is resolved if one adds the extension loads it inside the loader and adds it as a package to the config setup of mathjax but isn t full js package expected to be single file solutions | 1 |
138,518 | 20,601,782,161 | IssuesEvent | 2022-03-06 11:28:39 | xieyuschen/grillen | https://api.github.com/repos/xieyuschen/grillen | opened | How to choose format of config file? | TechDesign | As a code generation project, grillen wants users to provide their setting files that define the action of their business. Here
# Which format of config file should we use?
There are many formats of config files, such as `JSON`, `YAML`, and so on. Since `YAML` is a superset of `JSON`, I think we can use YAML here as the format of the config file.
## How to represent API in the config file?
We can use the YAML format to present API docs. As OpenAPI suggests, a YAML file can represent an API; see https://swagger.io/docs/specification/basic-structure/
| 1.0 | How to choose format of config file? - As a code generation project, grillen wants users to provide their setting files that define the action of their business. Here
# Which format of config file should we use?
There are many formats of config files, such as `JSON`, `YAML`, and so on. Since `YAML` is a superset of `JSON`, I think we can use YAML here as the format of the config file.
## How to represent API in the config file?
We can use the YAML format to present API docs. As OpenAPI suggests, a YAML file can represent an API; see https://swagger.io/docs/specification/basic-structure/
| non_test | how to choose format of config file as a code generation project grillen wants users to provide their setting files that define the action of their business here which format of config file should we use there are many formats of config files such as json yaml and so on as yaml can support json so i think i can use yaml here as the format of the config file how to represent api in the config file can use yaml format to present api docs as openapi suggests to us we can use yaml file to represent api link is here | 0 |
132,488 | 10,757,036,798 | IssuesEvent | 2019-10-31 12:30:40 | elastic/beats | https://api.github.com/repos/elastic/beats | opened | Add integration test in test_base to cover the limits for default fields | :Testing libbeat | Elasticsearch `default_fields` has a default limit of 1024 fields, the limit is not enforced when we created the template but it will be returned when we query Elasticsearch through Kibana.
Recently master was broken because new fields were added and we didn't detect it right away.
@Andrewkroh has fixed the issues and removed unnecessary fields and added a test to make sure we don't go over.
But we are on borrowed time here until we change our template strategy, because we are at 939 fields now. The unit test will allow us to be notified up front, but we should add a new integration test to make sure that the default value in Elasticsearch has not been changed to something else.
Scenario:
- Add a test to test_based.py so all the existing beats can run it.
- Start the beats
- Install the template
- Do an Elasticsearch query to see if we still have the problem.
See https://github.com/elastic/beats/issues/14262 for a description of the behavior when it fails. | 1.0 | Add integration test in test_base to cover the limits for default fields - Elasticsearch `default_fields` has a default limit of 1024 fields, the limit is not enforced when we created the template but it will be returned when we query Elasticsearch through Kibana.
Recently master was broken because new fields were added and we didn't detect it right away.
@Andrewkroh has fixed the issues and removed unnecessary fields and added a test to make sure we don't go over.
But we are on borrowed time here until we change our template strategy, because we are at 939 fields now. The unit test will allow us to be notified up front, but we should add a new integration test to make sure that the default value in Elasticsearch has not been changed to something else.
Scenario:
- Add a test to test_based.py so all the existing beats can run it.
- Start the beats
- Install the template
- Do an Elasticsearch query to see if we still have the problem.
See https://github.com/elastic/beats/issues/14262 for a description of the behavior when it fails. | test | add integration test in test base to cover the limits for default fields elasticsearch default fields has a default limit of fields the limit is not enforced when we created the template but it will be returned when we query elasticsearch through kibana recently master was broken because new fields were added and we didn t detect it right away andrewkroh has fixed the issues and removed unnecessary fields and added a test to make sure we don t go over but we are on borrowed time here until we change our template strategy because we are at field now the unit test will allow to be notified up front but we should add a new integration test to make sure that the default value of elasticsearch is not changed to something else scenario add a test to test based py so all the existing beats can run it start the beats install the template do an elasticsearch query to see if we still have the problem see for a description of the behavior when it fails | 1 |
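A hedged sketch of the check described in that scenario follows; it is not the actual test_base addition, and the Elasticsearch address, the template name pattern, and the exact settings layout are assumptions. It asks Elasticsearch for the installed template and asserts that the default_field list stays under the 1024-field limit:
```python
# Hedged sketch of the integration check described above; this is not the
# actual test_base addition. The Elasticsearch address, the template name
# pattern, and the exact settings layout are assumptions.
import requests

ES = "http://localhost:9200"
LIMIT = 1024  # Elasticsearch default limit hit by default_field at query time

def default_fields(template: dict) -> list:
    """Find the default_field list wherever it sits inside the settings."""
    def walk(node):
        if isinstance(node, dict):
            for key, value in node.items():
                if key.endswith("default_field"):
                    return value
                found = walk(value)
                if found is not None:
                    return found
        return None
    return walk(template.get("settings", {})) or []

resp = requests.get(f"{ES}/_template/metricbeat-*")
resp.raise_for_status()
for name, template in resp.json().items():
    fields = default_fields(template)
    assert len(fields) < LIMIT, f"{name} has {len(fields)} default fields"
```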
194,132 | 22,261,863,142 | IssuesEvent | 2022-06-10 01:46:13 | kapseliboi/WeiPay | https://api.github.com/repos/kapseliboi/WeiPay | closed | WS-2018-0625 (High) detected in xmlbuilder-4.0.0.tgz, xmlbuilder-8.2.2.tgz - autoclosed | security vulnerability | ## WS-2018-0625 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>xmlbuilder-4.0.0.tgz</b>, <b>xmlbuilder-8.2.2.tgz</b></p></summary>
<p>
<details><summary><b>xmlbuilder-4.0.0.tgz</b></p></summary>
<p>An XML builder for node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlbuilder/-/xmlbuilder-4.0.0.tgz">https://registry.npmjs.org/xmlbuilder/-/xmlbuilder-4.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/xmlbuilder/package.json</p>
<p>
Dependency Hierarchy:
- react-native-0.55.4.tgz (Root Library)
- plist-1.2.0.tgz
- :x: **xmlbuilder-4.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>xmlbuilder-8.2.2.tgz</b></p></summary>
<p>An XML builder for node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlbuilder/-/xmlbuilder-8.2.2.tgz">https://registry.npmjs.org/xmlbuilder/-/xmlbuilder-8.2.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/simple-plist/node_modules/xmlbuilder/package.json</p>
<p>
Dependency Hierarchy:
- react-native-0.55.4.tgz (Root Library)
- xcode-0.9.3.tgz
- simple-plist-0.2.1.tgz
- plist-2.0.1.tgz
- :x: **xmlbuilder-8.2.2.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>stable</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package xmlbuilder-js before 9.0.5 is vulnerable to denial of service due to a regular expression issue.
<p>Publish Date: 2018-02-08
<p>URL: <a href=https://github.com/oozcitak/xmlbuilder-js/commit/bbf929a8a54f0d012bdc44cbe622fdeda2509230>WS-2018-0625</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/oozcitak/xmlbuilder-js/commit/bbf929a8a54f0d012bdc44cbe622fdeda2509230">https://github.com/oozcitak/xmlbuilder-js/commit/bbf929a8a54f0d012bdc44cbe622fdeda2509230</a></p>
<p>Release Date: 2018-02-08</p>
<p>Fix Resolution (xmlbuilder): 9.0.5</p>
<p>Direct dependency fix Resolution (react-native): 0.59.0-rc.0</p><p>Fix Resolution (xmlbuilder): 9.0.5</p>
<p>Direct dependency fix Resolution (react-native): 0.59.0-rc.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2018-0625 (High) detected in xmlbuilder-4.0.0.tgz, xmlbuilder-8.2.2.tgz - autoclosed - ## WS-2018-0625 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>xmlbuilder-4.0.0.tgz</b>, <b>xmlbuilder-8.2.2.tgz</b></p></summary>
<p>
<details><summary><b>xmlbuilder-4.0.0.tgz</b></p></summary>
<p>An XML builder for node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlbuilder/-/xmlbuilder-4.0.0.tgz">https://registry.npmjs.org/xmlbuilder/-/xmlbuilder-4.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/xmlbuilder/package.json</p>
<p>
Dependency Hierarchy:
- react-native-0.55.4.tgz (Root Library)
- plist-1.2.0.tgz
- :x: **xmlbuilder-4.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>xmlbuilder-8.2.2.tgz</b></p></summary>
<p>An XML builder for node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlbuilder/-/xmlbuilder-8.2.2.tgz">https://registry.npmjs.org/xmlbuilder/-/xmlbuilder-8.2.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/simple-plist/node_modules/xmlbuilder/package.json</p>
<p>
Dependency Hierarchy:
- react-native-0.55.4.tgz (Root Library)
- xcode-0.9.3.tgz
- simple-plist-0.2.1.tgz
- plist-2.0.1.tgz
- :x: **xmlbuilder-8.2.2.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>stable</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package xmlbuilder-js before 9.0.5 is vulnerable to denial of service due to a regular expression issue.
<p>Publish Date: 2018-02-08
<p>URL: <a href=https://github.com/oozcitak/xmlbuilder-js/commit/bbf929a8a54f0d012bdc44cbe622fdeda2509230>WS-2018-0625</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/oozcitak/xmlbuilder-js/commit/bbf929a8a54f0d012bdc44cbe622fdeda2509230">https://github.com/oozcitak/xmlbuilder-js/commit/bbf929a8a54f0d012bdc44cbe622fdeda2509230</a></p>
<p>Release Date: 2018-02-08</p>
<p>Fix Resolution (xmlbuilder): 9.0.5</p>
<p>Direct dependency fix Resolution (react-native): 0.59.0-rc.0</p><p>Fix Resolution (xmlbuilder): 9.0.5</p>
<p>Direct dependency fix Resolution (react-native): 0.59.0-rc.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | ws high detected in xmlbuilder tgz xmlbuilder tgz autoclosed ws high severity vulnerability vulnerable libraries xmlbuilder tgz xmlbuilder tgz xmlbuilder tgz an xml builder for node js library home page a href path to dependency file package json path to vulnerable library node modules xmlbuilder package json dependency hierarchy react native tgz root library plist tgz x xmlbuilder tgz vulnerable library xmlbuilder tgz an xml builder for node js library home page a href path to dependency file package json path to vulnerable library node modules simple plist node modules xmlbuilder package json dependency hierarchy react native tgz root library xcode tgz simple plist tgz plist tgz x xmlbuilder tgz vulnerable library found in base branch stable vulnerability details the package xmlbuilder js before is vulnerable to denial of service due to a regular expression issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution xmlbuilder direct dependency fix resolution react native rc fix resolution xmlbuilder direct dependency fix resolution react native rc step up your open source security game with whitesource | 0 |
168,795 | 13,103,230,152 | IssuesEvent | 2020-08-04 08:12:33 | WoWManiaUK/Redemption | https://api.github.com/repos/WoWManiaUK/Redemption | closed | [Spell/Warlock] Demonic Pact Icd | Fix - Ready to Test | **What is Happening:**
Demonic Pact (Demon Warlock) **should have** 20 sec ICD.

There are 2 Demonic Pact spells: 54909 and 53646, one for each faction?! Anyway, both need to have a 20 sec ICD.
| 1.0 | [Spell/Warlock] Demonic Pact Icd - **What is Happening:**
Demonic Pact (Demon Warlock) **should have** 20 sec ICD.

There are 2 Demonic Pact spells: 54909 and 53646, one for each faction?! Anyway, both need to have a 20 sec ICD.
| test | demonic pact icd what is happening demonic pact demon warlock should have sec icd has demonic pact spells and one for each faction anyway both need have sec icd | 1 |
65,840 | 14,761,951,279 | IssuesEvent | 2021-01-09 01:07:34 | jgeraigery/nimbus-deployment | https://api.github.com/repos/jgeraigery/nimbus-deployment | opened | CVE-2020-25649 (High) detected in jackson-databind-2.6.7.2.jar | security vulnerability | ## CVE-2020-25649 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.6.7.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nimbus-deployment/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.7.2/jackson-databind-2.6.7.2.jar</p>
<p>
Dependency Hierarchy:
- aws-java-sdk-cloudformation-1.11.475.jar (Root Library)
- aws-java-sdk-core-1.11.475.jar
- :x: **jackson-databind-2.6.7.2.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in FasterXML Jackson Databind, where it did not have entity expansion secured properly. This flaw allows vulnerability to XML external entity (XXE) attacks. The highest threat from this vulnerability is data integrity.
<p>Publish Date: 2020-12-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25649>CVE-2020-25649</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2589">https://github.com/FasterXML/jackson-databind/issues/2589</a></p>
<p>Release Date: 2020-12-03</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.11.0.rc1,2.10.5,2.9.10.7,2.6.7.4</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.6.7.2","isTransitiveDependency":true,"dependencyTree":"com.amazonaws:aws-java-sdk-cloudformation:1.11.475;com.amazonaws:aws-java-sdk-core:1.11.475;com.fasterxml.jackson.core:jackson-databind:2.6.7.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.11.0.rc1,2.10.5,2.9.10.7,2.6.7.4"}],"vulnerabilityIdentifier":"CVE-2020-25649","vulnerabilityDetails":"A flaw was found in FasterXML Jackson Databind, where it did not have entity expansion secured properly. This flaw allows vulnerability to XML external entity (XXE) attacks. The highest threat from this vulnerability is data integrity.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25649","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-25649 (High) detected in jackson-databind-2.6.7.2.jar - ## CVE-2020-25649 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.6.7.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nimbus-deployment/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.7.2/jackson-databind-2.6.7.2.jar</p>
<p>
Dependency Hierarchy:
- aws-java-sdk-cloudformation-1.11.475.jar (Root Library)
- aws-java-sdk-core-1.11.475.jar
- :x: **jackson-databind-2.6.7.2.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in FasterXML Jackson Databind, where it did not have entity expansion secured properly. This flaw allows vulnerability to XML external entity (XXE) attacks. The highest threat from this vulnerability is data integrity.
<p>Publish Date: 2020-12-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25649>CVE-2020-25649</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2589">https://github.com/FasterXML/jackson-databind/issues/2589</a></p>
<p>Release Date: 2020-12-03</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.11.0.rc1,2.10.5,2.9.10.7,2.6.7.4</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.6.7.2","isTransitiveDependency":true,"dependencyTree":"com.amazonaws:aws-java-sdk-cloudformation:1.11.475;com.amazonaws:aws-java-sdk-core:1.11.475;com.fasterxml.jackson.core:jackson-databind:2.6.7.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.11.0.rc1,2.10.5,2.9.10.7,2.6.7.4"}],"vulnerabilityIdentifier":"CVE-2020-25649","vulnerabilityDetails":"A flaw was found in FasterXML Jackson Databind, where it did not have entity expansion secured properly. This flaw allows vulnerability to XML external entity (XXE) attacks. The highest threat from this vulnerability is data integrity.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25649","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_test | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nimbus deployment pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy aws java sdk cloudformation jar root library aws java sdk core jar x jackson databind jar vulnerable library vulnerability details a flaw was found in fasterxml jackson databind where it did not have entity expansion secured properly this flaw allows vulnerability to xml external entity xxe attacks the highest threat from this vulnerability is data integrity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails a flaw was found in fasterxml jackson databind where it did not have entity expansion secured properly this flaw allows vulnerability to xml external entity xxe attacks the highest threat from this vulnerability is data integrity vulnerabilityurl | 0 |
16,668 | 4,075,703,986 | IssuesEvent | 2016-05-29 11:55:24 | openucx/ucx | https://api.github.com/repos/openucx/ucx | opened | UCP wakeup fixes | documentation enhancement | - update doc that WAKEUP feature is required
- don't need to arm before doing worker_wait | 1.0 | UCP wakeup fixes - - update doc that WAKEUP feature is required
- don't need to arm before doing worker_wait | non_test | ucp wakeup fixes update doc that wakeup feature is required don t need to arm before doing worker wait | 0 |
248,917 | 21,089,120,400 | IssuesEvent | 2022-04-04 01:22:18 | Uuvana-Studios/longvinter-windows-client | https://api.github.com/repos/Uuvana-Studios/longvinter-windows-client | closed | James General Store screen, "x" leads to Home screen and won't close store screen | Bug Not Tested | **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Jame's General Store
2. Click on Vending Machine buy/sell and click "x" to exit
3. Scroll down to '....'
4. See an error
**Expected behavior**
Clicking on red X should close General Store screen
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Desktop (please complete the following information):**
- OS: [e.g. Windows]
- Game Version [e.g. 1.0]
- Steam Version [e.g. 1.0]
**Additional context**
The general store screen won't close and you have to leave the game, it doesn't happen every time, but it has happened 3 times during an hour of game play. When clicking the x it goes to the "esc" screen
| 1.0 | James General Store screen, "x" leads to Home screen and won't close store screen - **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Jame's General Store
2. Click on Vending Machine buy/sell and click "x" to exit
3. Scroll down to '....'
4. See an error
**Expected behavior**
Clicking on red X should close General Store screen
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Desktop (please complete the following information):**
- OS: [e.g. Windows]
- Game Version [e.g. 1.0]
- Steam Version [e.g. 1.0]
**Additional context**
The general store screen won't close and you have to leave the game, it doesn't happen every time, but it has happened 3 times during an hour of game play. When clicking the x it goes to the "esc" screen
| test | james general store screen x leads to home screen and won t close store screen describe the bug a clear and concise description of what the bug is to reproduce steps to reproduce the behavior go to jame s general store click on vending machine buy sell and click x to exit scroll down to see an error expected behavior clicking on red x should close general store screen screenshots if applicable add screenshots to help explain your problem desktop please complete the following information os game version steam version additional context the general store screen won t close and you have to leave the game it doesn t happen every time but it has happened times during an hour of game play when clicking the x it goes to the esc screen | 1 |
346,877 | 10,421,180,781 | IssuesEvent | 2019-09-16 04:55:11 | msep2019/MSEP_2019_3 | https://api.github.com/repos/msep2019/MSEP_2019_3 | closed | Extract the description in the CVE, CWE, CAPEC databases | Medium Priority functionality | Jamal wants to extract the description field in the CVE, CWE, CAPEC databases so that it can be used in the keyword extraction and text-mining to find the weakness and attack pattern.
- [ ] Extract CVE description
- [ ] Extract CWE description
- [ ] Extract CAPEC description | 1.0 | Extract the description in the CVE, CWE, CAPEC databases - Jamal wants to extract the description field in the CVE, CWE, CAPEC databases so that it can be used in the keyword extraction and text-mining to find the weakness and attack pattern.
- [ ] Extract CVE description
- [ ] Extract CWE description
- [ ] Extract CAPEC description | non_test | extract the description in the cve cwe capec databases jamal wants to extract the description field in the cve cwe capec databases so that it can be used in the keywords extraction and text ming to find the weakness and attack pattern extract cve description extract cwe description extract capec description | 0 |
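For the first checkbox above, one possible starting point is the NVD JSON feed. The sketch below assumes the NVD JSON 1.1 layout and a placeholder file name; the CWE and CAPEC dumps are XML and would need an XML parser instead:
```python
# Hedged sketch for the first checkbox (CVE descriptions). It assumes the NVD
# JSON 1.1 feed layout (cve -> description -> description_data); the file name
# is a placeholder, and the CWE/CAPEC dumps (XML) would need an XML parser.
import json

with open("nvdcve-1.1-2019.json", encoding="utf-8") as fh:
    feed = json.load(fh)

descriptions = {}
for item in feed.get("CVE_Items", []):
    cve_id = item["cve"]["CVE_data_meta"]["ID"]
    texts = item["cve"]["description"]["description_data"]
    english = [d["value"] for d in texts if d.get("lang") == "en"]
    descriptions[cve_id] = " ".join(english)

print(len(descriptions), "CVE descriptions extracted")
```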
223,828 | 17,633,872,724 | IssuesEvent | 2021-08-19 11:26:41 | ClickHouse/ClickHouse | https://api.github.com/repos/ClickHouse/ClickHouse | opened | insertPostgreSQLValue.cpp: Bad cast from type DB::ColumnDecimal<DB::DateTime64> to DB::ColumnDecimal<DB::Decimal<long> > | testing comp-postgresql | test_postgresql_replica_database_engine/test.py::test_single_transaction
https://clickhouse-test-reports.s3.yandex.net/0/c6bcd48bee8e82da27794cd2f7c0c5f836c9cac8/integration_tests_(debug).html
```
2021.08.19 04:24:48.588864 [ 502 ] {} <Fatal> : Logical error: 'Bad cast from type DB::ColumnDecimal<DB::DateTime64> to DB::ColumnDecimal<DB::Decimal<long> >'.
2021.08.19 04:24:48.589960 [ 600 ] {} <Fatal> BaseDaemon: ########################################
2021.08.19 04:24:48.590076 [ 600 ] {} <Fatal> BaseDaemon: (version 21.9.1.7816 (official build), build id: 55EA93E79D418D815C8CD3E05DD9D857EC61A209) (from thread 502) (no query) Received signal Aborted (6)
2021.08.19 04:24:48.590163 [ 600 ] {} <Fatal> BaseDaemon:
2021.08.19 04:24:48.590323 [ 600 ] {} <Fatal> BaseDaemon: Stack trace: 0x7fe48cfef18b 0x7fe48cfce859 0x13696a66 0x13696b75 0x13a2c046 0x1e0c375f 0x1e5161e8 0x1e516e27 0x1e51678f 0x1e517422 0x1e51b3f7 0x1e51d380 0x1e4f927a 0x1e500b98 0x1e500b3d 0x1e500afd 0x1e500ad5 0x1e500a9d 0x136e58a9 0x136e49d5 0x1e43d44a 0x1e441a46 0x1e43f380 0x1e440578 0x1e44053d 0x1e4404e1 0x1e4403f2 0x1e4402e7 0x1e4401fd 0x1e4401bd 0x1e440195 0x1e440160 0x136e58a9 0x136e49d5 0x1370ba4e 0x13713104 0x1371305d 0x13712f85 0x137128a2 0x7fe48d1b5609 0x7fe48d0cb293
2021.08.19 04:24:48.590555 [ 600 ] {} <Fatal> BaseDaemon: 4. gsignal @ 0x4618b in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.08.19 04:24:48.590668 [ 600 ] {} <Fatal> BaseDaemon: 5. abort @ 0x25859 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.08.19 04:24:48.877367 [ 600 ] {} <Fatal> BaseDaemon: 6. ./obj-x86_64-linux-gnu/../src/Common/Exception.cpp:53: DB::handle_error_code(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool, std::__1::vector<void*, std::__1::allocator<void*> > const&) @ 0x13696a66 in /usr/bin/clickhouse
2021.08.19 04:24:49.145859 [ 600 ] {} <Fatal> BaseDaemon: 7. ./obj-x86_64-linux-gnu/../src/Common/Exception.cpp:60: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x13696b75 in /usr/bin/clickhouse
2021.08.19 04:24:49.718089 [ 600 ] {} <Fatal> BaseDaemon: 8. ./obj-x86_64-linux-gnu/../src/Common/assert_cast.h:47: DB::ColumnDecimal<DB::Decimal<long> >& assert_cast<DB::ColumnDecimal<DB::Decimal<long> >&, DB::IColumn&>(DB::IColumn&) @ 0x13a2c046 in /usr/bin/clickhouse
2021.08.19 04:24:49.912074 [ 600 ] {} <Fatal> BaseDaemon: 9. ./obj-x86_64-linux-gnu/../src/Core/PostgreSQL/insertPostgreSQLValue.cpp:113: DB::insertPostgreSQLValue(DB::IColumn&, std::__1::basic_string_view<char, std::__1::char_traits<char> >, DB::ExternalResultDescription::ValueType, std::__1::shared_ptr<DB::IDataType const>, std::__1::unordered_map<unsigned long, DB::PostgreSQLArrayInfo, std::__1::hash<unsigned long>, std::__1::equal_to<unsigned long>, std::__1::allocator<std::__1::pair<unsigned long const, DB::PostgreSQLArrayInfo> > >&, unsigned long) @ 0x1e0c375f in /usr/bin/clickhouse
2021.08.19 04:24:50.535456 [ 600 ] {} <Fatal> BaseDaemon: 10. ./obj-x86_64-linux-gnu/../src/Storages/PostgreSQL/MaterializedPostgreSQLConsumer.cpp:101: DB::MaterializedPostgreSQLConsumer::insertValue(DB::MaterializedPostgreSQLConsumer::Buffer&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long) @ 0x1e5161e8 in /usr/bin/clickhouse
2021.08.19 04:24:51.155633 [ 600 ] {} <Fatal> BaseDaemon: 11. ./obj-x86_64-linux-gnu/../src/Storages/PostgreSQL/MaterializedPostgreSQLConsumer.cpp:205: DB::MaterializedPostgreSQLConsumer::readTupleData(DB::MaterializedPostgreSQLConsumer::Buffer&, char const*, unsigned long&, unsigned long, DB::MaterializedPostgreSQLConsumer::PostgreSQLQuery, bool)::$_0::operator()(signed char, short) const @ 0x1e516e27 in /usr/bin/clickhouse
2021.08.19 04:24:51.778837 [ 600 ] {} <Fatal> BaseDaemon: 12. ./obj-x86_64-linux-gnu/../src/Storages/PostgreSQL/MaterializedPostgreSQLConsumer.cpp:219: DB::MaterializedPostgreSQLConsumer::readTupleData(DB::MaterializedPostgreSQLConsumer::Buffer&, char const*, unsigned long&, unsigned long, DB::MaterializedPostgreSQLConsumer::PostgreSQLQuery, bool) @ 0x1e51678f in /usr/bin/clickhouse
2021.08.19 04:24:52.404012 [ 600 ] {} <Fatal> BaseDaemon: 13. ./obj-x86_64-linux-gnu/../src/Storages/PostgreSQL/MaterializedPostgreSQLConsumer.cpp:285: DB::MaterializedPostgreSQLConsumer::processReplicationMessage(char const*, unsigned long) @ 0x1e517422 in /usr/bin/clickhouse
2021.08.19 04:24:53.023838 [ 600 ] {} <Fatal> BaseDaemon: 14. ./obj-x86_64-linux-gnu/../src/Storages/PostgreSQL/MaterializedPostgreSQLConsumer.cpp:620: DB::MaterializedPostgreSQLConsumer::readFromReplicationSlot() @ 0x1e51b3f7 in /usr/bin/clickhouse
2021.08.19 04:24:53.647608 [ 600 ] {} <Fatal> BaseDaemon: 15. ./obj-x86_64-linux-gnu/../src/Storages/PostgreSQL/MaterializedPostgreSQLConsumer.cpp:703: DB::MaterializedPostgreSQLConsumer::consume(std::__1::vector<std::__1::pair<int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > >&) @ 0x1e51d380 in /usr/bin/clickhouse
2021.08.19 04:24:53.703185 [ 593 ] {b38f128b-e606-49ee-a799-6bf86f73e7cd} <Fatal> : Logical error: 'Bad cast from type DB::ColumnDecimal<DB::DateTime64> to DB::ColumnDecimal<DB::Decimal<long> >'.
2021.08.19 04:24:53.704485 [ 601 ] {} <Fatal> BaseDaemon: ########################################
2021.08.19 04:24:53.704890 [ 601 ] {} <Fatal> BaseDaemon: (version 21.9.1.7816 (official build), build id: 55EA93E79D418D815C8CD3E05DD9D857EC61A209) (from thread 593) (query_id: b38f128b-e606-49ee-a799-6bf86f73e7cd) Received signal Aborted (6)
2021.08.19 04:24:53.705224 [ 601 ] {} <Fatal> BaseDaemon:
2021.08.19 04:24:53.705586 [ 601 ] {} <Fatal> BaseDaemon: Stack trace: 0x7fe48cfef18b 0x7fe48cfce859 0x13696a66 0x13696b75 0x13a2c046 0x1e0c375f 0x1e0c03b9 0x1fc46036 0x1fc45da7 0x20051dd2 0x1fcaff9c 0x1fcafeff 0x1fcafe9d 0x1fcafe5d 0x1fcafe35 0x1fcafdfd 0x136e58a9 0x136e49d5 0x1fcae7ed 0x1fcaf1d9 0x1fcad084 0x1fcac373 0x1fcce2b7 0x1fcce1e6 0x1fcce15d 0x1fcce101 0x1fcce012 0x1fccdf0c 0x1fccde1d 0x1fccdddd 0x1fccddb5 0x1fccdd80 0x136e58a9 0x136e49d5 0x1370ba4e 0x13713104 0x1371305d 0x13712f85 0x137128a2 0x7fe48d1b5609 0x7fe48d0cb293
2021.08.19 04:24:53.706295 [ 601 ] {} <Fatal> BaseDaemon: 4. gsignal @ 0x4618b in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.08.19 04:24:53.706498 [ 601 ] {} <Fatal> BaseDaemon: 5. abort @ 0x25859 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.08.19 04:24:53.989293 [ 601 ] {} <Fatal> BaseDaemon: 6. ./obj-x86_64-linux-gnu/../src/Common/Exception.cpp:53: DB::handle_error_code(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool, std::__1::vector<void*, std::__1::allocator<void*> > const&) @ 0x13696a66 in /usr/bin/clickhouse
2021.08.19 04:24:54.253767 [ 601 ] {} <Fatal> BaseDaemon: 7. ./obj-x86_64-linux-gnu/../src/Common/Exception.cpp:60: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x13696b75 in /usr/bin/clickhouse
2021.08.19 04:24:54.322628 [ 600 ] {} <Fatal> BaseDaemon: 16. ./obj-x86_64-linux-gnu/../src/Storages/PostgreSQL/PostgreSQLReplicationHandler.cpp:255: DB::PostgreSQLReplicationHandler::consumerFunc() @ 0x1e4f927a in /usr/bin/clickhouse
2021.08.19 04:24:54.822493 [ 601 ] {} <Fatal> BaseDaemon: 8. ./obj-x86_64-linux-gnu/../src/Common/assert_cast.h:47: DB::ColumnDecimal<DB::Decimal<long> >& assert_cast<DB::ColumnDecimal<DB::Decimal<long> >&, DB::IColumn&>(DB::IColumn&) @ 0x13a2c046 in /usr/bin/clickhouse
2021.08.19 04:24:55.012185 [ 601 ] {} <Fatal> BaseDaemon: 9. ./obj-x86_64-linux-gnu/../src/Core/PostgreSQL/insertPostgreSQLValue.cpp:113: DB::insertPostgreSQLValue(DB::IColumn&, std::__1::basic_string_view<char, std::__1::char_traits<char> >, DB::ExternalResultDescription::ValueType, std::__1::shared_ptr<DB::IDataType const>, std::__1::unordered_map<unsigned long, DB::PostgreSQLArrayInfo, std::__1::hash<unsigned long>, std::__1::equal_to<unsigned long>, std::__1::allocator<std::__1::pair<unsigned long const, DB::PostgreSQLArrayInfo> > >&, unsigned long) @ 0x1e0c375f in /usr/bin/clickhouse
2021.08.19 04:24:55.019037 [ 600 ] {} <Fatal> BaseDaemon: 17. ./obj-x86_64-linux-gnu/../src/Storages/PostgreSQL/PostgreSQLReplicationHandler.cpp:56: DB::PostgreSQLReplicationHandler::PostgreSQLReplicationHandler(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, std::__1::shared_ptr<DB::Context const>, bool, unsigned long, bool, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)::$_1::operator()() const @ 0x1e500b98 in /usr/bin/clickhouse
2021.08.19 04:24:55.213672 [ 601 ] {} <Fatal> BaseDaemon: 10. ./obj-x86_64-linux-gnu/../src/DataStreams/PostgreSQLSource.cpp:128: DB::PostgreSQLSource<pqxx::transaction<(pqxx::isolation_level)0, (pqxx::write_policy)0> >::generate() @ 0x1e0c03b9 in /usr/bin/clickhouse
2021.08.19 04:24:55.345049 [ 601 ] {} <Fatal> BaseDaemon: 11. ./obj-x86_64-linux-gnu/../src/Processors/ISource.cpp:79: DB::ISource::tryGenerate() @ 0x1fc46036 in /usr/bin/clickhouse
2021.08.19 04:24:55.474311 [ 601 ] {} <Fatal> BaseDaemon: 12. ./obj-x86_64-linux-gnu/../src/Processors/ISource.cpp:53: DB::ISource::work() @ 0x1fc45da7 in /usr/bin/clickhouse
2021.08.19 04:24:55.663605 [ 601 ] {} <Fatal> BaseDaemon: 13. ./obj-x86_64-linux-gnu/../src/Processors/Sources/SourceWithProgress.cpp:60: DB::SourceWithProgress::work() @ 0x20051dd2 in /usr/bin/clickhouse
2021.08.19 04:24:55.720086 [ 600 ] {} <Fatal> BaseDaemon: 18. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<DB::PostgreSQLReplicationHandler::PostgreSQLReplicationHandler(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, std::__1::shared_ptr<DB::Context const>, bool, unsigned long, bool, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)::$_1&>(fp)()) std::__1::__invoke<DB::PostgreSQLReplicationHandler::PostgreSQLReplicationHandler(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, std::__1::shared_ptr<DB::Context const>, bool, unsigned long, bool, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)::$_1&>(DB::PostgreSQLReplicationHandler::PostgreSQLReplicationHandler(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, std::__1::shared_ptr<DB::Context const>, bool, unsigned long, bool, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)::$_1&) @ 0x1e500b3d in /usr/bin/clickhouse
2021.08.19 04:24:56.170576 [ 601 ] {} <Fatal> BaseDaemon: 14. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:88: DB::executeJob(DB::IProcessor*) @ 0x1fcaff9c in /usr/bin/clickhouse
2021.08.19 04:24:56.401792 [ 600 ] {} <Fatal> BaseDaemon: 19. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:349: void std::__1::__invoke_void_return_wrapper<void>::__call<DB::PostgreSQLReplicationHandler::PostgreSQLReplicationHandler(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, std::__1::shared_ptr<DB::Context const>, bool, unsigned long, bool, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)::$_1&>(DB::PostgreSQLReplicationHandler::PostgreSQLReplicationHandler(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, std::__1::shared_ptr<DB::Context const>, bool, unsigned long, bool, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)::$_1&) @ 0x1e500afd in /usr/bin/clickhouse
2021.08.19 04:24:56.662261 [ 601 ] {} <Fatal> BaseDaemon: 15. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:105: DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0::operator()() const @ 0x1fcafeff in /usr/bin/clickhouse
2021.08.19 04:24:57.089834 [ 600 ] {} <Fatal> BaseDaemon: 20. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608: std::__1::__function::__default_alloc_func<DB::PostgreSQLReplicationHandler::PostgreSQLReplicationHandler(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, std::__1::shared_ptr<DB::Context const>, bool, unsigned long, bool, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)::$_1, void ()>::operator()() @ 0x1e500ad5 in /usr/bin/clickhouse
2021.08.19 04:24:57.162106 [ 601 ] {} <Fatal> BaseDaemon: 16. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(fp)()) std::__1::__invoke<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&) @ 0x1fcafe9d in /usr/bin/clickhouse
2021.08.19 04:24:57.654538 [ 601 ] {} <Fatal> BaseDaemon: 17. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:349: void std::__1::__invoke_void_return_wrapper<void>::__call<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&) @ 0x1fcafe5d in /usr/bin/clickhouse
2021.08.19 04:24:57.777370 [ 600 ] {} <Fatal> BaseDaemon: 21. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::PostgreSQLReplicationHandler::PostgreSQLReplicationHandler(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, std::__1::shared_ptr<DB::Context const>, bool, unsigned long, bool, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >)::$_1, void ()> >(std::__1::__function::__policy_storage const*) @ 0x1e500a9d in /usr/bin/clickhouse
2021.08.19 04:24:58.023569 [ 600 ] {} <Fatal> BaseDaemon: 22. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221: std::__1::__function::__policy_func<void ()>::operator()() const @ 0x136e58a9 in /usr/bin/clickhouse
2021.08.19 04:24:58.149174 [ 601 ] {} <Fatal> BaseDaemon: 18. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608: std::__1::__function::__default_alloc_func<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0, void ()>::operator()() @ 0x1fcafe35 in /usr/bin/clickhouse
2021.08.19 04:24:58.266393 [ 600 ] {} <Fatal> BaseDaemon: 23. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560: std::__1::function<void ()>::operator()() const @ 0x136e49d5 in /usr/bin/clickhouse
2021.08.19 04:24:58.388193 [ 600 ] {} <Fatal> BaseDaemon: 24. ./obj-x86_64-linux-gnu/../src/Core/BackgroundSchedulePool.cpp:106: DB::BackgroundSchedulePoolTaskInfo::execute() @ 0x1e43d44a in /usr/bin/clickhouse
2021.08.19 04:24:58.522311 [ 600 ] {} <Fatal> BaseDaemon: 25. ./obj-x86_64-linux-gnu/../src/Core/BackgroundSchedulePool.cpp:19: DB::TaskNotification::execute() @ 0x1e441a46 in /usr/bin/clickhouse
2021.08.19 04:24:58.640778 [ 601 ] {} <Fatal> BaseDaemon: 19. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0, void ()> >(std::__1::__function::__policy_storage const*) @ 0x1fcafdfd in /usr/bin/clickhouse
2021.08.19 04:24:58.656289 [ 600 ] {} <Fatal> BaseDaemon: 26. ./obj-x86_64-linux-gnu/../src/Core/BackgroundSchedulePool.cpp:265: DB::BackgroundSchedulePool::threadFunction() @ 0x1e43f380 in /usr/bin/clickhouse
2021.08.19 04:24:58.797004 [ 600 ] {} <Fatal> BaseDaemon: 27. ./obj-x86_64-linux-gnu/../src/Core/BackgroundSchedulePool.cpp:161: DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1::operator()() const @ 0x1e440578 in /usr/bin/clickhouse
2021.08.19 04:24:58.887709 [ 601 ] {} <Fatal> BaseDaemon: 20. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221: std::__1::__function::__policy_func<void ()>::operator()() const @ 0x136e58a9 in /usr/bin/clickhouse
2021.08.19 04:24:58.939545 [ 600 ] {} <Fatal> BaseDaemon: 28. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3682: decltype(std::__1::forward<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&>(fp)()) std::__1::__invoke_constexpr<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&) @ 0x1e44053d in /usr/bin/clickhouse
2021.08.19 04:24:59.083286 [ 600 ] {} <Fatal> BaseDaemon: 29. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1415: decltype(auto) std::__1::__apply_tuple_impl<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&, std::__1::tuple<>&>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&, std::__1::tuple<>&, std::__1::__tuple_indices<>) @ 0x1e4404e1 in /usr/bin/clickhouse
2021.08.19 04:24:59.132881 [ 601 ] {} <Fatal> BaseDaemon: 21. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560: std::__1::function<void ()>::operator()() const @ 0x136e49d5 in /usr/bin/clickhouse
2021.08.19 04:24:59.224012 [ 600 ] {} <Fatal> BaseDaemon: 30. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1424: decltype(auto) std::__1::apply<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&, std::__1::tuple<>&>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&, std::__1::tuple<>&) @ 0x1e4403f2 in /usr/bin/clickhouse
2021.08.19 04:24:59.366303 [ 600 ] {} <Fatal> BaseDaemon: 31. ./obj-x86_64-linux-gnu/../src/Common/ThreadPool.h:182: ThreadFromGlobalPool::ThreadFromGlobalPool<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&)::'lambda'()::operator()() @ 0x1e4402e7 in /usr/bin/clickhouse
2021.08.19 04:24:59.508365 [ 600 ] {} <Fatal> BaseDaemon: 32. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1>(fp)()) std::__1::__invoke<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&)::'lambda'()&>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&) @ 0x1e4401fd in /usr/bin/clickhouse
2021.08.19 04:24:59.614605 [ 601 ] {} <Fatal> BaseDaemon: 22. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:600: DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0x1fcae7ed in /usr/bin/clickhouse
2021.08.19 04:24:59.648338 [ 600 ] {} <Fatal> BaseDaemon: 33. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:349: void std::__1::__invoke_void_return_wrapper<void>::__call<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&)::'lambda'()&>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&...) @ 0x1e4401bd in /usr/bin/clickhouse
2021.08.19 04:24:59.794318 [ 600 ] {} <Fatal> BaseDaemon: 34. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608: std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&)::'lambda'(), void ()>::operator()() @ 0x1e440195 in /usr/bin/clickhouse
2021.08.19 04:24:59.938842 [ 600 ] {} <Fatal> BaseDaemon: 35. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x1e440160 in /usr/bin/clickhouse
2021.08.19 04:25:00.110190 [ 601 ] {} <Fatal> BaseDaemon: 23. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:485: DB::PipelineExecutor::executeSingleThread(unsigned long, unsigned long) @ 0x1fcaf1d9 in /usr/bin/clickhouse
2021.08.19 04:25:00.194208 [ 600 ] {} <Fatal> BaseDaemon: 36. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221: std::__1::__function::__policy_func<void ()>::operator()() const @ 0x136e58a9 in /usr/bin/clickhouse
2021.08.19 04:25:00.447228 [ 600 ] {} <Fatal> BaseDaemon: 37. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560: std::__1::function<void ()>::operator()() const @ 0x136e49d5 in /usr/bin/clickhouse
2021.08.19 04:25:00.538051 [ 600 ] {} <Fatal> BaseDaemon: 38. ./obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:269: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x1370ba4e in /usr/bin/clickhouse
2021.08.19 04:25:00.578006 [ 601 ] {} <Fatal> BaseDaemon: 24. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:824: DB::PipelineExecutor::executeImpl(unsigned long) @ 0x1fcad084 in /usr/bin/clickhouse
2021.08.19 04:25:00.636090 [ 600 ] {} <Fatal> BaseDaemon: 39. ./obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:136: void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()::operator()() const @ 0x13713104 in /usr/bin/clickhouse
2021.08.19 04:25:00.729994 [ 600 ] {} <Fatal> BaseDaemon: 40. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<void>(fp)(std::__1::forward<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(fp0)...)) std::__1::__invoke<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...) @ 0x1371305d in /usr/bin/clickhouse
2021.08.19 04:25:00.824458 [ 600 ] {} <Fatal> BaseDaemon: 41. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:281: void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>&, std::__1::__tuple_indices<>) @ 0x13712f85 in /usr/bin/clickhouse
2021.08.19 04:25:00.917444 [ 600 ] {} <Fatal> BaseDaemon: 42. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:291: void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0x137128a2 in /usr/bin/clickhouse
2021.08.19 04:25:00.917615 [ 600 ] {} <Fatal> BaseDaemon: 43. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
2021.08.19 04:25:00.917751 [ 600 ] {} <Fatal> BaseDaemon: 44. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.08.19 04:25:01.053149 [ 601 ] {} <Fatal> BaseDaemon: 25. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:407: DB::PipelineExecutor::execute(unsigned long) @ 0x1fcac373 in /usr/bin/clickhouse
2021.08.19 04:25:01.461749 [ 601 ] {} <Fatal> BaseDaemon: 26. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PullingAsyncPipelineExecutor.cpp:80: DB::threadFunction(DB::PullingAsyncPipelineExecutor::Data&, std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0x1fcce2b7 in /usr/bin/clickhouse
2021.08.19 04:25:01.858507 [ 601 ] {} <Fatal> BaseDaemon: 27. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PullingAsyncPipelineExecutor.cpp:108: DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0::operator()() const @ 0x1fcce1e6 in /usr/bin/clickhouse
2021.08.19 04:25:02.200608 [ 600 ] {} <Fatal> BaseDaemon: Checksum of the binary: DE356FEAC76EF5598A726EB9A97FF863, integrity check passed.
2021.08.19 04:25:02.254916 [ 601 ] {} <Fatal> BaseDaemon: 28. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3682: decltype(std::__1::forward<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&>(fp)()) std::__1::__invoke_constexpr<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&) @ 0x1fcce15d in /usr/bin/clickhouse
2021.08.19 04:25:02.650978 [ 601 ] {} <Fatal> BaseDaemon: 29. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1415: decltype(auto) std::__1::__apply_tuple_impl<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) @ 0x1fcce101 in /usr/bin/clickhouse
2021.08.19 04:25:03.044038 [ 601 ] {} <Fatal> BaseDaemon: 30. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1424: decltype(auto) std::__1::apply<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&) @ 0x1fcce012 in /usr/bin/clickhouse
2021.08.19 04:25:03.440833 [ 601 ] {} <Fatal> BaseDaemon: 31. ./obj-x86_64-linux-gnu/../src/Common/ThreadPool.h:182: ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'()::operator()() @ 0x1fccdf0c in /usr/bin/clickhouse
2021.08.19 04:25:03.834413 [ 601 ] {} <Fatal> BaseDaemon: 32. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(fp)()) std::__1::__invoke<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'()&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&) @ 0x1fccde1d in /usr/bin/clickhouse
2021.08.19 04:25:04.229964 [ 601 ] {} <Fatal> BaseDaemon: 33. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:349: void std::__1::__invoke_void_return_wrapper<void>::__call<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'()&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&...) @ 0x1fccdddd in /usr/bin/clickhouse
2021.08.19 04:25:04.621995 [ 601 ] {} <Fatal> BaseDaemon: 34. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608: std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()>::operator()() @ 0x1fccddb5 in /usr/bin/clickhouse
2021.08.19 04:25:05.032142 [ 601 ] {} <Fatal> BaseDaemon: 35. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x1fccdd80 in /usr/bin/clickhouse
2021.08.19 04:25:05.290146 [ 601 ] {} <Fatal> BaseDaemon: 36. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221: std::__1::__function::__policy_func<void ()>::operator()() const @ 0x136e58a9 in /usr/bin/clickhouse
2021.08.19 04:25:05.542736 [ 601 ] {} <Fatal> BaseDaemon: 37. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560: std::__1::function<void ()>::operator()() const @ 0x136e49d5 in /usr/bin/clickhouse
2021.08.19 04:25:05.631930 [ 601 ] {} <Fatal> BaseDaemon: 38. ./obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:269: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x1370ba4e in /usr/bin/clickhouse
2021.08.19 04:25:05.728543 [ 601 ] {} <Fatal> BaseDaemon: 39. ./obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:136: void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()::operator()() const @ 0x13713104 in /usr/bin/clickhouse
2021.08.19 04:25:05.820976 [ 601 ] {} <Fatal> BaseDaemon: 40. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<void>(fp)(std::__1::forward<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(fp0)...)) std::__1::__invoke<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...) @ 0x1371305d in /usr/bin/clickhouse
2021.08.19 04:25:05.914716 [ 601 ] {} <Fatal> BaseDaemon: 41. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:281: void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>&, std::__1::__tuple_indices<>) @ 0x13712f85 in /usr/bin/clickhouse
2021.08.19 04:25:06.008434 [ 601 ] {} <Fatal> BaseDaemon: 42. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:291: void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0x137128a2 in /usr/bin/clickhouse
2021.08.19 04:25:06.008745 [ 601 ] {} <Fatal> BaseDaemon: 43. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
2021.08.19 04:25:06.009020 [ 601 ] {} <Fatal> BaseDaemon: 44. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.08.19 04:25:07.271277 [ 601 ] {} <Fatal> BaseDaemon: Checksum of the binary: DE356FEAC76EF5598A726EB9A97FF863, integrity check passed.
2021.08.19 04:25:10.154889 [ 413 ] {} <Fatal> Application: Child process was terminated by signal 6.
``` | 1.0 | insertPostgreSQLValue.cpp: Bad cast from type DB::ColumnDecimal<DB::DateTime64> to DB::ColumnDecimal<DB::Decimal<long> > - test_postgresql_replica_database_engine/test.py::test_single_transaction
https://clickhouse-test-reports.s3.yandex.net/0/c6bcd48bee8e82da27794cd2f7c0c5f836c9cac8/integration_tests_(debug).html
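For context on how this check trips: frame 8 of the trace is the checked downcast in `src/Common/assert_cast.h:47`, and frame 9 (`src/Core/PostgreSQL/insertPostgreSQLValue.cpp:113`) asks it for a `ColumnDecimal<Decimal<long>>` while the replicated column is actually a `ColumnDecimal<DateTime64>`, so the cast throws the "Bad cast" logical error and the server aborts on signal 6. The snippet below is only a minimal standalone sketch of that failure mode, not ClickHouse's real `assert_cast` or column classes; `checked_cast`, `ColumnDateTime64` and `ColumnDecimal64` are simplified stand-ins for illustration.

```
// Minimal sketch (assumed, simplified types): a checked downcast in the spirit of
// assert_cast<T>, which raises a "Bad cast" logical error when the runtime type
// of a column differs from the type the caller expects.
#include <iostream>
#include <stdexcept>
#include <string>
#include <typeinfo>

struct IColumn { virtual ~IColumn() = default; };
struct ColumnDateTime64 : IColumn {};   // stand-in for ColumnDecimal<DateTime64>
struct ColumnDecimal64  : IColumn {};   // stand-in for ColumnDecimal<Decimal<long>>

template <typename To>
To & checked_cast(IColumn & col)
{
    // Like a debug-build assert_cast: verify the dynamic type before downcasting.
    if (auto * typed = dynamic_cast<To *>(&col))
        return *typed;
    throw std::logic_error(std::string("Bad cast from type ") + typeid(col).name()
                           + " to " + typeid(To).name());
}

int main()
{
    ColumnDateTime64 col;   // actual column type delivered by the replicated table
    try
    {
        // The consumer assumes a Decimal64 column here -> the type check fires,
        // analogous to the "Logical error: 'Bad cast ...'" lines in the log above.
        checked_cast<ColumnDecimal64>(col);
    }
    catch (const std::logic_error & e)
    {
        std::cerr << e.what() << '\n';
    }
}
```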
```
``` | test | insertpostgresqlvalue cpp bad cast from type db columndecimal to db columndecimal test postgresql replica database engine test py test single transaction logical error bad cast from type db columndecimal to db columndecimal basedaemon basedaemon version official build build id from thread no query received signal aborted basedaemon basedaemon stack trace basedaemon gsignal in usr lib linux gnu libc so basedaemon abort in usr lib linux gnu libc so basedaemon obj linux gnu src common exception cpp db handle error code std basic string std allocator const int bool std vector const in usr bin clickhouse basedaemon obj linux gnu src common exception cpp db exception exception std basic string std allocator const int bool in usr bin clickhouse basedaemon obj linux gnu src common assert cast h db columndecimal assert cast db icolumn db icolumn in usr bin clickhouse basedaemon obj linux gnu src core postgresql insertpostgresqlvalue cpp db insertpostgresqlvalue db icolumn std basic string view db externalresultdescription valuetype std shared ptr std unordered map std equal to std allocator unsigned long in usr bin clickhouse basedaemon obj linux gnu src storages postgresql materializedpostgresqlconsumer cpp db materializedpostgresqlconsumer insertvalue db materializedpostgresqlconsumer buffer std basic string std allocator const unsigned long in usr bin clickhouse basedaemon obj linux gnu src storages postgresql materializedpostgresqlconsumer cpp db materializedpostgresqlconsumer readtupledata db materializedpostgresqlconsumer buffer char const unsigned long unsigned long db materializedpostgresqlconsumer postgresqlquery bool operator signed char short const in usr bin clickhouse basedaemon obj linux gnu src storages postgresql materializedpostgresqlconsumer cpp db materializedpostgresqlconsumer readtupledata db materializedpostgresqlconsumer buffer char const unsigned long unsigned long db materializedpostgresqlconsumer postgresqlquery bool in usr bin clickhouse basedaemon obj linux gnu src storages postgresql materializedpostgresqlconsumer cpp db materializedpostgresqlconsumer processreplicationmessage char const unsigned long in usr bin clickhouse basedaemon obj linux gnu src storages postgresql materializedpostgresqlconsumer cpp db materializedpostgresqlconsumer readfromreplicationslot in usr bin clickhouse basedaemon obj linux gnu src storages postgresql materializedpostgresqlconsumer cpp db materializedpostgresqlconsumer consume std vector std allocator std allocator std allocator in usr bin clickhouse logical error bad cast from type db columndecimal to db columndecimal basedaemon basedaemon version official build build id from thread query id received signal aborted basedaemon basedaemon stack trace basedaemon gsignal in usr lib linux gnu libc so basedaemon abort in usr lib linux gnu libc so basedaemon obj linux gnu src common exception cpp db handle error code std basic string std allocator const int bool std vector const in usr bin clickhouse basedaemon obj linux gnu src common exception cpp db exception exception std basic string std allocator const int bool in usr bin clickhouse basedaemon obj linux gnu src storages postgresql postgresqlreplicationhandler cpp db postgresqlreplicationhandler consumerfunc in usr bin clickhouse basedaemon obj linux gnu src common assert cast h db columndecimal assert cast db icolumn db icolumn in usr bin clickhouse basedaemon obj linux gnu src core postgresql insertpostgresqlvalue cpp db insertpostgresqlvalue db icolumn std basic string view 
db externalresultdescription valuetype std shared ptr std unordered map std equal to std allocator unsigned long in usr bin clickhouse basedaemon obj linux gnu src storages postgresql postgresqlreplicationhandler cpp db postgresqlreplicationhandler postgresqlreplicationhandler std basic string std allocator const std basic string std allocator const std basic string std allocator const std pair std allocator std basic string std allocator const std shared ptr bool unsigned long bool bool std basic string std allocator operator const in usr bin clickhouse basedaemon obj linux gnu src datastreams postgresqlsource cpp db postgresqlsource generate in usr bin clickhouse basedaemon obj linux gnu src processors isource cpp db isource trygenerate in usr bin clickhouse basedaemon obj linux gnu src processors isource cpp db isource work in usr bin clickhouse basedaemon obj linux gnu src processors sources sourcewithprogress cpp db sourcewithprogress work in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include type traits decltype std forward std allocator const std basic string std allocator const std basic string std allocator const std pair std allocator std basic string std allocator const std shared ptr bool unsigned long bool bool std basic string std allocator fp std invoke std allocator const std basic string std allocator const std basic string std allocator const std pair std allocator std basic string std allocator const std shared ptr bool unsigned long bool bool std basic string std allocator db postgresqlreplicationhandler postgresqlreplicationhandler std basic string std allocator const std basic string std allocator const std basic string std allocator const std pair std allocator std basic string std allocator const std shared ptr bool unsigned long bool bool std basic string std allocator in usr bin clickhouse basedaemon obj linux gnu src processors executors pipelineexecutor cpp db executejob db iprocessor in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional base void std invoke void return wrapper call std allocator const std basic string std allocator const std basic string std allocator const std pair std allocator std basic string std allocator const std shared ptr bool unsigned long bool bool std basic string std allocator db postgresqlreplicationhandler postgresqlreplicationhandler std basic string std allocator const std basic string std allocator const std basic string std allocator const std pair std allocator std basic string std allocator const std shared ptr bool unsigned long bool bool std basic string std allocator in usr bin clickhouse basedaemon obj linux gnu src processors executors pipelineexecutor cpp db pipelineexecutor addjob db executinggraph node operator const in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional std function default alloc func std allocator const std basic string std allocator const std basic string std allocator const std pair std allocator std basic string std allocator const std shared ptr bool unsigned long bool bool std basic string std allocator void operator in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include type traits decltype std forward fp std invoke db pipelineexecutor addjob db executinggraph node in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional base void std invoke void return wrapper call db pipelineexecutor addjob db executinggraph node in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include 
functional void std function policy invoker call impl std allocator const std basic string std allocator const std basic string std allocator const std pair std allocator std basic string std allocator const std shared ptr bool unsigned long bool bool std basic string std allocator void std function policy storage const in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional std function policy func operator const in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional std function default alloc func operator in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional std function operator const in usr bin clickhouse basedaemon obj linux gnu src core backgroundschedulepool cpp db backgroundschedulepooltaskinfo execute in usr bin clickhouse basedaemon obj linux gnu src core backgroundschedulepool cpp db tasknotification execute in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional void std function policy invoker call impl std function policy storage const in usr bin clickhouse basedaemon obj linux gnu src core backgroundschedulepool cpp db backgroundschedulepool threadfunction in usr bin clickhouse basedaemon obj linux gnu src core backgroundschedulepool cpp db backgroundschedulepool backgroundschedulepool unsigned long unsigned long char const operator const in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional std function policy func operator const in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include type traits decltype std forward fp std invoke constexpr db backgroundschedulepool backgroundschedulepool unsigned long unsigned long char const in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include tuple decltype auto std apply tuple impl db backgroundschedulepool backgroundschedulepool unsigned long unsigned long char const std tuple std tuple indices in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional std function operator const in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include tuple decltype auto std apply db backgroundschedulepool backgroundschedulepool unsigned long unsigned long char const std tuple in usr bin clickhouse basedaemon obj linux gnu src common threadpool h threadfromglobalpool threadfromglobalpool db backgroundschedulepool backgroundschedulepool unsigned long unsigned long char const lambda operator in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include type traits decltype std forward fp std invoke db backgroundschedulepool backgroundschedulepool unsigned long unsigned long char const lambda db backgroundschedulepool backgroundschedulepool unsigned long unsigned long char const in usr bin clickhouse basedaemon obj linux gnu src processors executors pipelineexecutor cpp db pipelineexecutor executestepimpl unsigned long unsigned long std atomic in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional base void std invoke void return wrapper call db backgroundschedulepool backgroundschedulepool unsigned long unsigned long char const lambda db backgroundschedulepool backgroundschedulepool unsigned long unsigned long char const in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional std function default alloc func db backgroundschedulepool backgroundschedulepool unsigned long unsigned long char const lambda void operator in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional void std 
function policy invoker call impl db backgroundschedulepool backgroundschedulepool unsigned long unsigned long char const lambda void std function policy storage const in usr bin clickhouse basedaemon obj linux gnu src processors executors pipelineexecutor cpp db pipelineexecutor executesinglethread unsigned long unsigned long in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional std function policy func operator const in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional std function operator const in usr bin clickhouse basedaemon obj linux gnu src common threadpool cpp threadpoolimpl worker std list iterator in usr bin clickhouse basedaemon obj linux gnu src processors executors pipelineexecutor cpp db pipelineexecutor executeimpl unsigned long in usr bin clickhouse basedaemon obj linux gnu src common threadpool cpp void threadpoolimpl scheduleimpl std function int std optional operator const in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include type traits decltype std forward fp std forward scheduleimpl std function int std optional std invoke scheduleimpl std function int std optional void void threadpoolimpl scheduleimpl std function int std optional in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include thread void std thread execute void threadpoolimpl scheduleimpl std function int std optional std tuple scheduleimpl std function int std optional std tuple indices in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include thread void std thread proxy void threadpoolimpl scheduleimpl std function int std optional void in usr bin clickhouse basedaemon start thread in usr lib linux gnu libpthread so basedaemon clone in usr lib linux gnu libc so basedaemon obj linux gnu src processors executors pipelineexecutor cpp db pipelineexecutor execute unsigned long in usr bin clickhouse basedaemon obj linux gnu src processors executors pullingasyncpipelineexecutor cpp db threadfunction db pullingasyncpipelineexecutor data std shared ptr unsigned long in usr bin clickhouse basedaemon obj linux gnu src processors executors pullingasyncpipelineexecutor cpp db pullingasyncpipelineexecutor pull db chunk unsigned long operator const in usr bin clickhouse basedaemon checksum of the binary integrity check passed basedaemon obj linux gnu contrib libcxx include type traits decltype std forward fp std invoke constexpr db pullingasyncpipelineexecutor pull db chunk unsigned long in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include tuple decltype auto std apply tuple impl db pullingasyncpipelineexecutor pull db chunk unsigned long std tuple std tuple indices in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include tuple decltype auto std apply db pullingasyncpipelineexecutor pull db chunk unsigned long std tuple in usr bin clickhouse basedaemon obj linux gnu src common threadpool h threadfromglobalpool threadfromglobalpool db pullingasyncpipelineexecutor pull db chunk unsigned long lambda operator in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include type traits decltype std forward fp std invoke db pullingasyncpipelineexecutor pull db chunk unsigned long lambda db pullingasyncpipelineexecutor pull db chunk unsigned long in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional base void std invoke void return wrapper call db pullingasyncpipelineexecutor pull db chunk unsigned long lambda db pullingasyncpipelineexecutor pull db chunk unsigned 
long in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional std function default alloc func db pullingasyncpipelineexecutor pull db chunk unsigned long lambda void operator in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional void std function policy invoker call impl db pullingasyncpipelineexecutor pull db chunk unsigned long lambda void std function policy storage const in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional std function policy func operator const in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include functional std function operator const in usr bin clickhouse basedaemon obj linux gnu src common threadpool cpp threadpoolimpl worker std list iterator in usr bin clickhouse basedaemon obj linux gnu src common threadpool cpp void threadpoolimpl scheduleimpl std function int std optional operator const in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include type traits decltype std forward fp std forward scheduleimpl std function int std optional std invoke scheduleimpl std function int std optional void void threadpoolimpl scheduleimpl std function int std optional in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include thread void std thread execute void threadpoolimpl scheduleimpl std function int std optional std tuple scheduleimpl std function int std optional std tuple indices in usr bin clickhouse basedaemon obj linux gnu contrib libcxx include thread void std thread proxy void threadpoolimpl scheduleimpl std function int std optional void in usr bin clickhouse basedaemon start thread in usr lib linux gnu libpthread so basedaemon clone in usr lib linux gnu libc so basedaemon checksum of the binary integrity check passed application child process was terminated by signal | 1 |
167,314 | 13,019,465,872 | IssuesEvent | 2020-07-26 22:47:57 | WohlSoft/PGE-Project | https://api.github.com/repos/WohlSoft/PGE-Project | closed | [Editor] Can't redo after undoing the objects pasting | Need a test bug | A bug reported at the Discord
> **Taycamgame#5382**
weird bug:
if you copy an object
and then, if you paste it somewhere
all good so far, right?
Now, if you undo the action
the object you pasted is now gone, right?
But what happens if you redo the action? the object should reappear where it was right?
well... sort of. Except it gets offset when it reappears. | 1.0 | [Editor] Can't redo after undoing the objects pasting - A bug reported at the Discord
> **Taycamgame#5382**
weird bug:
if you copy an object
and then, if you paste it somewhere
all good so far, right?
Now, if you undo the action
the object you pasted is now gone, right?
But what happens if you redo the action? the object should reappear where it was right?
well... sort of. Except it gets offset when it reappears. | test | can t redo after undoing the objects pasting a bug reported at the discord taycamgame weird bug if you copy an object and then if you paste it somewhere all good so far right now if you undo the action the object you pasted is now gone right but what happens if you redo the action the object should reappear where it was right well sort of except it gets offset when it reappears | 1 |
178,381 | 13,776,982,254 | IssuesEvent | 2020-10-08 10:13:59 | Scholar-6/brillder | https://api.github.com/repos/Scholar-6/brillder | closed | Ticket #93: When playing through bricks, it is marking correct answers as incorrect. | Betatester Request Critical Blocker | When playing through bricks, it is marking correct answers as incorrect. This happened a few days ago when editing Harry's brick, and again today (on Version 6.2.0) when reviewing my own.------------------
Submitted from: https://brillder.scholar6.org/play-preview/brick/257/intro




| 1.0 | Ticket #93: When playing through bricks, it is marking correct answers as incorrect. - When playing through bricks, it is marking correct answers as incorrect. This happened a few days ago when editing Harry's brick, and again today (on Version 6.2.0) when reviewing my own.------------------
Submitted from: https://brillder.scholar6.org/play-preview/brick/257/intro




| test | ticket when playing through bricks it is marking correct answers as incorrect when playing through bricks it is marking correct answers as incorrect this happened a few days ago when editing harry s brick and again today on version when reviewing my own submitted from | 1 |
49,602 | 6,034,429,957 | IssuesEvent | 2017-06-09 11:05:29 | bontorhumala/untar | https://api.github.com/repos/bontorhumala/untar | closed | Test NIMD Builder on other datasets | test needed | NIMD builder is developed on REDD building 1. Need to be tested on other REDD and other datasets | 1.0 | Test NIMD Builder on other datasets - NIMD builder is developed on REDD building 1. Need to be tested on other REDD and other datasets | test | test nimd builder on other datasets nimd builder is developed on redd building need to be tested on other redd and other datasets | 1 |
2,284 | 2,590,555,386 | IssuesEvent | 2015-02-18 19:36:31 | elasticsearch/elasticsearch-mapper-attachments | https://api.github.com/repos/elasticsearch/elasticsearch-mapper-attachments | closed | [Test] Use now full qualified names for fields | 3.0.0 tests update | We were asking for fields by their short names, but Elasticsearch no longer allows short names; fully qualified field names must be used.
```java
SearchResponse response = client().prepareSearch("test")
.addField("content_type")
.addField("name")
.execute().get();
```
We need to use now:
```java
SearchResponse response = client().prepareSearch("test")
.addField("file.content_type")
.addField("file.name")
.execute().get();
```
[Test] Use now full qualified names for fields - We were asking for fields by their short names, but Elasticsearch no longer allows short names; fully qualified field names must be used.
```java
SearchResponse response = client().prepareSearch("test")
.addField("content_type")
.addField("name")
.execute().get();
```
We need to use now:
```java
SearchResponse response = client().prepareSearch("test")
.addField("file.content_type")
.addField("file.name")
.execute().get();
```
| test | use now full qualified names for fields we were asking for short name fields but elasticsearch does not allow anymore using short names but full qualified names java searchresponse response client preparesearch test addfield content type addfield name execute get we need to use now java searchresponse response client preparesearch test addfield file content type addfield file name execute get | 1 |
48,605 | 13,161,899,761 | IssuesEvent | 2020-08-10 20:28:23 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | closed | CMS sitewide 508-defect-2 [SEMANTIC MARKUP]: Heading levels SHOULD only increase by one | 508-defect-2 508-issue-semantic-markup 508/Accessibility cms sitewide vsa | # [508-defect-2](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-2)
```diff
! Team affected: VSA-BAM2; Project: Debt Letters MVP
```
Already logged: https://github.com/department-of-veterans-affairs/va.gov-team/issues/7708
**Feedback framework**
- **❗️ Must** for if the feedback must be applied
- **⚠️Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Description
Headings **should** increase by one.
When the alert box component was brought over from Formation it was brought over with an h3 as the heading. In Formation, there is a `level` prop that may be set to ensure the proper heading levels. For the CMS, another approach should be taken to design these as static components to ensure heading levels increase by one, so that screen reader users understand the semantic hierarchy of the page.
## Point of Contact
**VFS Point of Contact:** Jennifer
## Acceptance Criteria
As a screen reader user, I want to understand the content hierarchy of the page.
## Environment
* Operating System: all
* Browser: all
* Screenreading device: all
* Server destination: staging
## Steps to Recreate
1. Log in as user 1
1. Enter https://staging.va.gov/health-care/order-hearing-aid-batteries-and-accessories/?postLogin=true in browser
1. Run an axe browser scan
1. Verify the "Heading levels should only increase by one" issue appears, related to `#am-i-eligible-to-order-hearing`
## Possible Fixes (optional)
For the CMS alert boxes, the recommendation is the h3 become an h2 with the h3 utility class.
## WCAG or Vendor Guidance (optional)
* [axe-core 3.4 - Heading levels should only increase by one](https://dequeuniversity.com/rules/axe/3.4/heading-order)
* [MDN dialog element](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/dialog)
* [MDN ARIA: alert role](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/Roles/Alert_Role)
* [MDN Description List element](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/dl)
## Screenshots or Trace Logs
<!-- Drop any screenshots or error logs that might be useful for debugging -->

| 1.0 | CMS sitewide 508-defect-2 [SEMANTIC MARKUP]: Heading levels SHOULD only increase by one - # [508-defect-2](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-2)
```diff
! Team affected: VSA-BAM2; Project: Debt Letters MVP
```
Already logged: https://github.com/department-of-veterans-affairs/va.gov-team/issues/7708
**Feedback framework**
- **❗️ Must** for if the feedback must be applied
- **⚠️Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Description
Headings **should** increase by one.
When the alert box component was brought over from Formation it was brought over with an h3 as the heading. In Formation, there is a `level` prop that may be set to ensure the proper heading levels. For the CMS, another approach should be taken to design these as static components to ensure heading levels increase by one, so that screen reader users understand the semantic hierarchy of the page.
## Point of Contact
**VFS Point of Contact:** Jennifer
## Acceptance Criteria
As a screen reader user, I want to understand the content hierarchy of the page.
## Environment
* Operating System: all
* Browser: all
* Screenreading device: all
* Server destination: staging
## Steps to Recreate
1. Log in as user 1
1. Enter https://staging.va.gov/health-care/order-hearing-aid-batteries-and-accessories/?postLogin=true in browser
1. Run an axe browser scan
1. Verify the "Heading levels should only increase by one" issue appears, related to `#am-i-eligible-to-order-hearing`
## Possible Fixes (optional)
For the CMS alert boxes, the recommendation is the h3 become an h2 with the h3 utility class.
## WCAG or Vendor Guidance (optional)
* [axe-core 3.4 - Heading levels should only increase by one](https://dequeuniversity.com/rules/axe/3.4/heading-order)
* [MDN dialog element](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/dialog)
* [MDN ARIA: alert role](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/Roles/Alert_Role)
* [MDN Description List element](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/dl)
## Screenshots or Trace Logs
<!-- Drop any screenshots or error logs that might be useful for debugging -->

| non_test | cms sitewide defect heading levels should only increase by one diff team affected vsa project debt letters mvp already logged feedback framework ❗️ must for if the feedback must be applied ⚠️should if the feedback is best practice ✔️ consider for suggestions enhancements description headings should increase by one when the alert box component was brought over from formation it was brought over with an as the heading in formation there is a level prop that may be set to ensure the proper heading levels for the cms another approach should be taken to design these as static components to ensure heading levels increase by one so that screen reader users understand the semantic hierarchy of the page point of contact vfs point of contact jennifer acceptance criteria as a screen reader user i want to understand the content hierarchy of the page environment operating system all browser all screenreading device all server destination staging steps to recreate log in as user enter in browser run an axe browser scan verify the heading levels should only increase by one issue appears related to am i eligible to order hearing possible fixes optional for the cms alert boxes the recommendation is the become an with the utility class wcag or vendor guidance optional screenshots or trace logs | 0 |
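The defect above rests on the axe-core "heading-order" rule: a heading may only go one level deeper than the heading before it. As a rough, hypothetical illustration of what that rule checks (not code from the VA.gov project), the sketch below walks an HTML fragment with Python's standard-library parser and flags any jump of more than one level; the sample markup and class name are assumptions for illustration only.
```python
# Minimal sketch: flag heading-level jumps greater than one, mirroring the
# axe-core "heading-order" rule. Uses only the Python standard library.
from html.parser import HTMLParser


class HeadingOrderChecker(HTMLParser):
    """Collect h1-h6 levels in document order and report jumps > 1."""

    def __init__(self):
        super().__init__()
        self.levels = []

    def handle_starttag(self, tag, attrs):
        # Record only h1..h6 tags (ignores e.g. <hr>, <html>).
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self.levels.append(int(tag[1]))

    def violations(self):
        # A heading may go at most one level deeper than the previous one.
        return [
            (prev, cur)
            for prev, cur in zip(self.levels, self.levels[1:])
            if cur > prev + 1
        ]


if __name__ == "__main__":
    sample = "<h1>Page title</h1><h3>Am I eligible?</h3>"  # hypothetical markup: h1 -> h3 skips h2
    checker = HeadingOrderChecker()
    checker.feed(sample)
    print(checker.violations())  # [(1, 3)] -> heading levels should only increase by one
```
On markup like the alert box described above, an h1 followed directly by an h3 would be reported, which is the same condition the axe browser scan surfaces in the steps to recreate.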
66,665 | 8,037,513,096 | IssuesEvent | 2018-07-30 12:54:07 | scieloorg/opac | https://api.github.com/repos/scieloorg/opac | opened | Show the document's publication date in the table of contents | Design | In the table of contents (TOC) view, show the publication date of the document.
@paratiuid, please propose a layout. | 1.0 | Show the document's publication date in the table of contents - In the table of contents (TOC) view, show the publication date of the document.
@paratiuid, please propose a layout. | non_test | show the document s publication date in the table of contents in the table of contents toc view show the publication date of the document paratiuid please propose a layout | 0 |
32,841 | 4,791,821,702 | IssuesEvent | 2016-10-31 13:54:26 | ansible/ansible | https://api.github.com/repos/ansible/ansible | opened | validate-modules: Check executable flags | affects_2.3 feature_idea test | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
validate-modules
##### ANSIBLE VERSION
```
ansible 2.3.0 (devel 5502da3cf8) last updated 2016/10/25 11:22:53 (GMT +100)
lib/ansible/modules/core: (devel 4c020102a9) last updated 2016/10/25 11:22:57 (GMT +100)
lib/ansible/modules/extras: (devel 8f77a0e72a) last updated 2016/10/25 11:22:59 (GMT +100)
config file =
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
A number of modules (in core & extras) are executable. This is difficult to spot during new-module PRs because GitHub doesn't display file mode flags.
`validate-modules` needs updating to check for the executable flag.
Once fixed, all modules will need updating - @gundalow will fix this.
This check also needs adding to the list of checks in `test/sanity/validate-modules/README.rst`
##### STEPS TO REPRODUCE
##### EXPECTED RESULTS
##### ACTUAL RESULTS
| 1.0 | validate-modules: Check executable flags - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
validate-modules
##### ANSIBLE VERSION
```
ansible 2.3.0 (devel 5502da3cf8) last updated 2016/10/25 11:22:53 (GMT +100)
lib/ansible/modules/core: (devel 4c020102a9) last updated 2016/10/25 11:22:57 (GMT +100)
lib/ansible/modules/extras: (devel 8f77a0e72a) last updated 2016/10/25 11:22:59 (GMT +100)
config file =
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
A number of modules (in core & extras) are executable. This is difficult to spot during new-module PRs because GitHub doesn't display file mode flags.
`validate-modules` needs updating to check for the executable flag.
Once fixed, all modules will need updating - @gundalow will fix this.
This check also needs adding to the list of checks in `test/sanity/validate-modules/README.rst`
##### STEPS TO REPRODUCE
##### EXPECTED RESULTS
##### ACTUAL RESULTS
| test | validate modules check executable flags issue type feature idea component name validate modules ansible version ansible devel last updated gmt lib ansible modules core devel last updated gmt lib ansible modules extras devel last updated gmt config file configuration os environment summary a number of modules in core extras are executable this is difficult to spot during new module creation pr as github doesn t display file mode flags validate modules needs updating to check for executable once fixed all modules will need updating gundalow will fix this also this need adding to the list of checks in test sanity validate modules readme rst steps to reproduce expected results actual results | 1 |
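A minimal sketch of the check proposed above, assuming a plain directory walk is acceptable: it reports any file under a module tree whose mode has an executable bit set. This is illustrative only, not the actual validate-modules implementation; the default path and output format are assumptions.
```python
# Rough sketch: report module files that carry an executable bit.
import os
import stat
import sys


def executable_files(root):
    """Yield paths under `root` whose mode has any executable bit set."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH):
                yield path


if __name__ == "__main__":
    # Path is an assumption for illustration; pass a different one as argv[1].
    root = sys.argv[1] if len(sys.argv) > 1 else "lib/ansible/modules"
    offenders = list(executable_files(root))
    for path in offenders:
        print(f"executable flag set: {path}")
    sys.exit(1 if offenders else 0)
```
A non-zero exit status from a check like this is what would let a CI sanity run fail when a newly contributed module file is accidentally committed as executable.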
56,008 | 6,950,342,317 | IssuesEvent | 2017-12-06 10:26:10 | theia-ide/theia | https://api.github.com/repos/theia-ide/theia | closed | [navigator] directories that end with "-config" have a "config" icon | bug design | Is that intended? Or was it only meant for config files?

| 1.0 | [navigator] directories that end with "-config" have a "config" icon - Is that intended? Or was it only meant for config files?

| non_test | directories that end with config have a config icon is that intended or was it only meant for config files | 0 |
133,872 | 18,981,709,406 | IssuesEvent | 2021-11-21 01:33:55 | hayes/giraphql | https://api.github.com/repos/hayes/giraphql | closed | Depth limit support | enhancement needs-api-design | It would be great if we have depth limit support and also other security measures mentioned in https://www.apollographql.com/blog/graphql/security/securing-your-graphql-api-from-malicious-queries | 1.0 | Depth limit support - It would be great if we have depth limit support and also other security measures mentioned in https://www.apollographql.com/blog/graphql/security/securing-your-graphql-api-from-malicious-queries | non_test | depth limit support it would be great if we have depth limit support and also other security measures mentioned in | 0 |
124,269 | 12,227,899,575 | IssuesEvent | 2020-05-03 17:07:05 | UNIZAR-30226-2020-10/back-end | https://api.github.com/repos/UNIZAR-30226-2020-10/back-end | closed | Documentation for sqlalchemy classes and tests | documentation low difficult | Add docstring-style comments to the classes so that documentation can be generated automatically | 1.0 | Documentation for sqlalchemy classes and tests - Add docstring-style comments to the classes so that documentation can be generated automatically | non_test | documentation for sqlalchemy classes and tests add docstring style comments to the classes so that documentation can be generated automatically | 0 |
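The issue above asks for docstring-style comments on the SQLAlchemy classes so that documentation can be generated automatically (for example with Sphinx autodoc). A minimal sketch of that style follows; the `Song` model and its columns are hypothetical, since the project's real models are not shown in the issue.
```python
# Illustrative only: a hypothetical SQLAlchemy model (1.4+ style) showing the
# docstring conventions that tools such as Sphinx autodoc can pick up.
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Song(Base):
    """A single song stored in the catalogue.

    Attributes:
        id: Surrogate primary key.
        title: Human-readable song title, at most 120 characters.
    """

    __tablename__ = "songs"

    id = Column(Integer, primary_key=True)
    title = Column(String(120), nullable=False)

    def __repr__(self):
        """Return a concise representation used in logs and the shell."""
        return f"<Song id={self.id} title={self.title!r}>"
```
With docstrings like these on every class and method, an automatic documentation pass needs no extra annotation work.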
166,601 | 14,072,540,002 | IssuesEvent | 2020-11-04 02:08:33 | adadesions/MovieDB-TOT-1 | https://api.github.com/repos/adadesions/MovieDB-TOT-1 | closed | Need someone to write the README.md | documentation need helps | **1. Contribution term
2. The location of the JSON file** | 1.0 | Need someone to write the README.md - **1. Contribution term
2. The location of the JSON file** | non_test | need someone to write the readme md contribution term the location of the json file | 0 |
3,760 | 2,540,003,306 | IssuesEvent | 2015-01-27 18:50:47 | xbrowse/xbrowse | https://api.github.com/repos/xbrowse/xbrowse | opened | provide way to download search results from multiple families | feature request medium_priority | From Sarah's email:
"1) ability to export all data (not just one family at a time)" | 1.0 | provide way to download search results from multiple families - From Sarah's email:
"1) ability to export all data (not just one family at a time)" | non_test | provide way to download search results from multiple families from sarah s email ability to export all data not just one family at a time | 0 |
337,819 | 30,265,437,301 | IssuesEvent | 2023-07-07 11:28:29 | valeriupredoi/PyActiveStorage | https://api.github.com/repos/valeriupredoi/PyActiveStorage | closed | `test_harness.py` causes issues when running it for S3 (with USE_S3=True), and is old and needs overhaul | bug testing | The problems reported in #111 are related to this test, and the way the testfile gets created per test case, suggesting some I/O toestepping (99.9% positive on that since I've tested a lot of the workaround @markgoddard and me have put in #113 ). As such, I'd like to overhaul the test module but by all means, if @bnlawrence wants to do it, he be me guest :grin: | 1.0 | `test_harness.py` causes issues when running it for S3 (with USE_S3=True), and is old and needs overhaul - The problems reported in #111 are related to this test, and the way the testfile gets created per test case, suggesting some I/O toestepping (99.9% positive on that since I've tested a lot of the workaround @markgoddard and me have put in #113 ). As such, I'd like to overhaul the test module but by all means, if @bnlawrence wants to do it, he be me guest :grin: | test | test harness py causes issues when running it for with use true and is old and needs overhaul the problems reported in are related to this test and the way the testfile gets created per test case suggesting some i o toestepping positive on that since i ve tested a lot of the workaround markgoddard and me have put in as such i d like to overhaul the test module but by all means if bnlawrence wants to do it he be me guest grin | 1 |
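The toe-stepping described above comes from test cases sharing one on-disk test file. A common remedy, sketched here as a hypothetical example rather than PyActiveStorage code, is to give every test its own file through pytest's per-test `tmp_path` fixture so runs cannot collide.
```python
# Hypothetical example: each test writes to a file under its own
# pytest-provided temporary directory, so repeated or concurrent test cases
# never share an on-disk path.
import numpy as np


def make_test_file(path, shape=(8, 8)):
    """Write a small array to `path` and return the data for later checks."""
    data = np.arange(np.prod(shape), dtype="float64").reshape(shape)
    np.save(path, data)
    return data


def test_roundtrip(tmp_path):
    # tmp_path is a per-test pathlib.Path supplied by pytest.
    target = tmp_path / "testfile.npy"
    written = make_test_file(target)
    read_back = np.load(target)
    assert np.array_equal(written, read_back)
```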
211,152 | 16,177,226,531 | IssuesEvent | 2021-05-03 08:54:59 | davidstraka2/wootom | https://api.github.com/repos/davidstraka2/wootom | closed | Test on minimum supported Atom version (along with latest stable version) in CI | ci test | *As per my [request](https://github.com/UziTech/action-setup-atom/issues/171), [action-setup-atom](https://github.com/marketplace/actions/setup-atom) has added the options to select the exact version of Atom to use for testing. The feature has been added in [version 2](https://github.com/UziTech/action-setup-atom/blob/v2.0.3/CHANGELOG.md#200-2021-04-12), which is now available. This allows us to add the minimum supported Atom version to the CI testing matrix. As long as Atom stays on the same major version (and these don't seem to change frequently, as Atom is still on v1.x), this should ensure everything works also on all minor versions inbetween the tested ones (assuming semantic versioning rules are followed properly).*
Update the version of action-setup-atom used in our Github Actions setup and add the minimum supported Atom version (1.54) to the testing matrix. | 1.0 | Test on minimum supported Atom version (along with latest stable version) in CI - *As per my [request](https://github.com/UziTech/action-setup-atom/issues/171), [action-setup-atom](https://github.com/marketplace/actions/setup-atom) has added the options to select the exact version of Atom to use for testing. The feature has been added in [version 2](https://github.com/UziTech/action-setup-atom/blob/v2.0.3/CHANGELOG.md#200-2021-04-12), which is now available. This allows us to add the minimum supported Atom version to the CI testing matrix. As long as Atom stays on the same major version (and these don't seem to change frequently, as Atom is still on v1.x), this should ensure everything works also on all minor versions inbetween the tested ones (assuming semantic versioning rules are followed properly).*
Update the version of action-setup-atom used in our Github Actions setup and add the minimum supported Atom version (1.54) to the testing matrix. | test | test on minimum supported atom version along with latest stable version in ci as per my has added the options to select the exact version of atom to use for testing the feature has been added in which is now available this allows us to add the minimum supported atom version to the ci testing matrix as long as atom stays on the same major version and these don t seem to change frequently as atom is still on x this should ensure everything works also on all minor versions inbetween the tested ones assuming semantic versioning rules are followed properly update the version of action setup atom used in our github actions setup and add the minimum supported atom version to the testing matrix | 1 |
284,773 | 30,913,686,344 | IssuesEvent | 2023-08-05 02:36:44 | Nivaskumark/kernel_v4.19.72_old | https://api.github.com/repos/Nivaskumark/kernel_v4.19.72_old | reopened | CVE-2023-1611 (Medium) detected in linux-yoctov5.4.51, linux-yoctov5.4.51 | Mend: dependency security vulnerability | ## CVE-2023-1611 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A use-after-free flaw was found in btrfs_search_slot in fs/btrfs/ctree.c in btrfs in the Linux Kernel. This flaw allows an attacker to crash the system and possibly cause a kernel information leak.
<p>Publish Date: 2023-04-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1611>CVE-2023-1611</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-1611">https://www.linuxkernelcves.com/cves/CVE-2023-1611</a></p>
<p>Release Date: 2023-04-03</p>
<p>Fix Resolution: v5.10.177,v5.15.106,v6.1.23,v6.2.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2023-1611 (Medium) detected in linux-yoctov5.4.51, linux-yoctov5.4.51 - ## CVE-2023-1611 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A use-after-free flaw was found in btrfs_search_slot in fs/btrfs/ctree.c in btrfs in the Linux Kernel. This flaw allows an attacker to crash the system and possibly cause a kernel information leak.
<p>Publish Date: 2023-04-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1611>CVE-2023-1611</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-1611">https://www.linuxkernelcves.com/cves/CVE-2023-1611</a></p>
<p>Release Date: 2023-04-03</p>
<p>Fix Resolution: v5.10.177,v5.15.106,v6.1.23,v6.2.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in linux linux cve medium severity vulnerability vulnerable libraries linux linux vulnerability details a use after free flaw was found in btrfs search slot in fs btrfs ctree c in btrfs in the linux kernel this flaw allows an attacker to crash the system and possibly cause a kernel information lea publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
296,052 | 25,524,527,458 | IssuesEvent | 2022-11-29 00:18:51 | WiIIiam278/HuskHomes2 | https://api.github.com/repos/WiIIiam278/HuskHomes2 | closed | Restricted warp not available despite permission | type: bug status: needs testing | The player has permission for warping and to the restricted warp. At /warps it is said that no warps are set and if the player tries to warp there anyway, it is said that he has no permission.
Granted permissions:
huskhomes.command.warp
huskhomes.command.warp.spawn
I'm using Paper 1.19.2, Java 17, HuskHomes v3.2.1-ed58779
Granted permissions:
huskhomes.command.warp
huskhomes.command.warp.spawn
I'm using Paper 1.19.2, Java 17, HuskHomes v3.2.1-ed58779
322,693 | 27,625,235,363 | IssuesEvent | 2023-03-10 05:57:55 | Etesam913/Custoplayer | https://api.github.com/repos/Etesam913/Custoplayer | closed | Setup Testing | enhancement backlog testing | Write unit tests using `jest` and `react-testing-library`.
~~Use `playwright` for integration tests.~~
Switched to `cypress` instead of `playwright` as it is difficult to test video properties in `playwright` | 1.0 | Setup Testing - Write unit tests using `jest` and `react-testing-library`.
~~Use `playwright` for integration tests.~~
Switched to `cypress` instead of `playwright` as it is difficult to test video properties in `playwright` | test | setup testing write unit tests using jest and react testing library use playwright for integration tests switched to cypress instead of playwright as it is difficult to test video properties in playwright | 1 |
28,673 | 4,426,095,973 | IssuesEvent | 2016-08-16 17:17:46 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | stress: failed test in cockroach/gossip/gossip.test: TestClientGossipMetrics | Robot test-failure | Binary: cockroach/static-tests.tar.gz sha: https://github.com/cockroachdb/cockroach/commits/750f5d01f06ea79dde964fb5d87c2f933569ba29
Stress build found a failed test:
```
=== RUN TestClientGossipMetrics
W160816 05:16:27.118071 gossip/gossip.go:1022 not connected to cluster; use --join to specify a connected node
W160816 05:16:27.119926 gossip/gossip.go:1022 not connected to cluster; use --join to specify a connected node
I160816 05:16:27.121383 gossip/client.go:75 node 2: starting client to 127.0.0.1:44492
I160816 05:16:27.121489 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.126785 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.126824 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.126974 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.126996 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.127040 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.127054 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.127130 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.127146 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.127165 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.127179 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.127230 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.127250 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.127300 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.127326 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.127551 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.127564 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.127588 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.127608 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.127636 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.127814 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.127841 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.127879 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.128803 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.128862 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.129334 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.129377 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.129444 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.129484 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.129552 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.129614 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.129704 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.129769 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.130079 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.130110 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.130159 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.130177 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.130222 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.130242 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.130290 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.130293 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.130317 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.130384 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.130476 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.130508 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.130522 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.130561 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.130598 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.130625 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.130650 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.130687 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.130691 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.130709 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.130747 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.130766 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.130778 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.130817 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.130835 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.130890 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.130912 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.131026 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.131405 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.131432 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.131468 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.131490 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.131526 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.131548 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.131645 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.131681 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.131695 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.131758 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.131777 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.133256 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.134074 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.134099 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.134164 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.134182 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.134236 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.134259 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.134325 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.134350 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.134400 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.134417 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.134538 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.134558 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.134714 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.137600 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.138172 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.138217 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.138323 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.138351 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.138421 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.138445 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.138537 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.138557 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.138609 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.138627 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.138675 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.138698 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.138863 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.146184 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.146366 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.146396 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.146655 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.146685 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.146783 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.146811 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.146822 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.146875 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.146896 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.163266 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.163574 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.163610 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.163684 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.163708 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.163765 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.163802 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.163832 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.163907 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.163930 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.163985 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.164003 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.164172 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.164208 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.197121 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.197337 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.197398 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.197491 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.197521 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.197612 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.197657 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.197729 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.197756 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.197775 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.197808 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.197894 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.265189 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.265396 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.265437 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.265612 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.265641 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.265742 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.265764 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.265779 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.265818 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.265837 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.399581 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.399854 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.399904 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.400013 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.400044 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.400076 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.400112 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.400250 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.400405 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.400435 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.668212 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.669339 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.669385 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.669442 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.669452 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.669472 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.669550 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.669565 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.669571 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.669630 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.669649 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.669703 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.669732 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.669754 gossip/server.go:288 node 2: replying to 1
I160816 05:16:28.205710 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:28.205967 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:28.206011 gossip/server.go:288 node 1: replying to 2
I160816 05:16:28.206081 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:28.206106 gossip/server.go:288 node 1: replying to 2
I160816 05:16:28.206115 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:28.206141 gossip/server.go:288 node 2: replying to 1
I160816 05:16:28.206182 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:28.206218 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:28.206251 gossip/server.go:288 node 2: replying to 1
I160816 05:16:29.205944 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:29.206133 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:29.206177 gossip/server.go:288 node 1: replying to 2
I160816 05:16:29.206402 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:29.206450 gossip/server.go:288 node 2: replying to 1
I160816 05:16:29.206573 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:29.206604 gossip/server.go:288 node 2: replying to 1
I160816 05:16:29.206637 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:30.206136 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:30.206310 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:30.206355 gossip/server.go:288 node 1: replying to 2
I160816 05:16:30.206515 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:30.206551 gossip/server.go:288 node 2: replying to 1
I160816 05:16:30.206587 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:30.206617 gossip/server.go:288 node 1: replying to 2
I160816 05:16:30.206647 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:30.206673 gossip/server.go:288 node 2: replying to 1
I160816 05:16:30.206694 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:30.206726 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:30.206751 gossip/server.go:288 node 2: replying to 1
I160816 05:16:31.206743 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:31.206944 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:31.206991 gossip/server.go:288 node 1: replying to 2
I160816 05:16:31.207157 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:31.207188 gossip/server.go:288 node 2: replying to 1
I160816 05:16:31.207199 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:31.207230 gossip/server.go:288 node 1: replying to 2
I160816 05:16:31.207288 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:31.207314 gossip/server.go:288 node 2: replying to 1
I160816 05:16:31.207345 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:32.206969 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:32.207225 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:32.207263 gossip/server.go:288 node 1: replying to 2
I160816 05:16:32.207377 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:32.207383 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:32.207404 gossip/server.go:288 node 2: replying to 1
I160816 05:16:32.207409 gossip/server.go:288 node 1: replying to 2
I160816 05:16:32.207610 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:32.207634 gossip/server.go:288 node 2: replying to 1
I160816 05:16:32.207647 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:32.207698 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:32.207716 gossip/server.go:288 node 2: replying to 1
I160816 05:16:33.208061 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:33.208266 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:33.208315 gossip/server.go:288 node 1: replying to 2
I160816 05:16:33.208421 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:33.208447 gossip/server.go:288 node 1: replying to 2
I160816 05:16:33.208484 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:33.208510 gossip/server.go:288 node 2: replying to 1
I160816 05:16:33.208634 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:33.208653 gossip/server.go:288 node 2: replying to 1
I160816 05:16:33.208689 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:33.208710 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:33.208739 gossip/server.go:288 node 2: replying to 1
I160816 05:16:34.208205 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:34.208500 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:34.208544 gossip/server.go:288 node 1: replying to 2
I160816 05:16:34.208649 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:34.208664 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:34.208680 gossip/server.go:288 node 2: replying to 1
I160816 05:16:34.208683 gossip/server.go:288 node 1: replying to 2
I160816 05:16:34.208743 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:34.208753 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:34.208771 gossip/server.go:288 node 2: replying to 1
I160816 05:16:35.208408 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:35.208716 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:35.208756 gossip/server.go:288 node 1: replying to 2
I160816 05:16:35.208841 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:35.208862 gossip/server.go:288 node 1: replying to 2
I160816 05:16:35.208901 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:35.208933 gossip/server.go:288 node 2: replying to 1
I160816 05:16:35.209025 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:35.209054 gossip/server.go:288 node 2: replying to 1
I160816 05:16:35.209130 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:35.209155 gossip/server.go:288 node 2: replying to 1
I160816 05:16:35.209180 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:36.208729 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:36.209006 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:36.209046 gossip/server.go:288 node 1: replying to 2
I160816 05:16:36.209099 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:36.209133 gossip/server.go:288 node 2: replying to 1
I160816 05:16:36.209153 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:36.209173 gossip/server.go:288 node 1: replying to 2
I160816 05:16:36.209207 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:36.209216 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:36.209258 gossip/server.go:288 node 2: replying to 1
I160816 05:16:37.208908 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:37.209179 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:37.209222 gossip/server.go:288 node 1: replying to 2
I160816 05:16:37.209347 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:37.209388 gossip/server.go:288 node 1: replying to 2
I160816 05:16:37.209426 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:37.209461 gossip/server.go:288 node 2: replying to 1
I160816 05:16:37.209516 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:38.209178 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:38.209366 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:38.209414 gossip/server.go:288 node 1: replying to 2
I160816 05:16:38.209563 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:38.209599 gossip/server.go:288 node 2: replying to 1
I160816 05:16:38.209706 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:38.209736 gossip/server.go:288 node 2: replying to 1
I160816 05:16:38.209765 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:38.209800 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:38.209834 gossip/server.go:288 node 2: replying to 1
I160816 05:16:39.209455 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:39.209769 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:39.209819 gossip/server.go:288 node 1: replying to 2
I160816 05:16:39.209839 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:39.209864 gossip/server.go:288 node 2: replying to 1
I160816 05:16:39.209886 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:39.209907 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:39.209944 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:39.209972 gossip/server.go:288 node 2: replying to 1
I160816 05:16:39.209998 gossip/server.go:288 node 1: replying to 2
I160816 05:16:40.209686 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:40.209871 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:40.209907 gossip/server.go:288 node 1: replying to 2
I160816 05:16:40.210075 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:40.210112 gossip/server.go:288 node 2: replying to 1
I160816 05:16:40.210214 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:40.210243 gossip/server.go:288 node 2: replying to 1
I160816 05:16:40.210285 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:41.209869 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:41.210069 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:41.210120 gossip/server.go:288 node 1: replying to 2
I160816 05:16:41.210281 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:41.210311 gossip/server.go:288 node 2: replying to 1
I160816 05:16:41.210405 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:41.210433 gossip/server.go:288 node 2: replying to 1
I160816 05:16:41.210446 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:41.210485 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:41.210503 gossip/server.go:288 node 2: replying to 1
I160816 05:16:42.210182 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:42.210401 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:42.210430 gossip/server.go:288 node 1: replying to 2
I160816 05:16:42.210596 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:42.210619 gossip/server.go:288 node 1: replying to 2
I160816 05:16:42.210625 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:42.210660 gossip/server.go:288 node 2: replying to 1
I160816 05:16:42.210749 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:42.210781 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:42.210811 gossip/server.go:288 node 2: replying to 1
I160816 05:16:42.210891 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:42.210916 gossip/server.go:288 node 2: replying to 1
I160816 05:16:43.210403 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:43.210688 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:43.210730 gossip/server.go:288 node 1: replying to 2
I160816 05:16:43.210772 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:43.210798 gossip/server.go:288 node 2: replying to 1
I160816 05:16:43.210834 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:43.210851 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:43.210860 gossip/server.go:288 node 1: replying to 2
I160816 05:16:43.210867 gossip/server.go:288 node 2: replying to 1
I160816 05:16:43.210952 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:43.210966 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:43.210979 gossip/server.go:288 node 2: replying to 1
I160816 05:16:44.210602 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:44.210821 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:44.210868 gossip/server.go:288 node 1: replying to 2
I160816 05:16:44.210951 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:44.210986 gossip/server.go:288 node 2: replying to 1
I160816 05:16:44.211022 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:44.211061 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:44.211079 gossip/server.go:288 node 2: replying to 1
I160816 05:16:44.211104 gossip/server.go:288 node 1: replying to 2
I160816 05:16:44.211163 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:44.211191 gossip/server.go:288 node 2: replying to 1
I160816 05:16:44.211225 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:45.210810 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:45.211057 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:45.211110 gossip/server.go:288 node 1: replying to 2
I160816 05:16:45.211217 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:45.211243 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:45.211255 gossip/server.go:288 node 2: replying to 1
I160816 05:16:45.211275 gossip/server.go:288 node 1: replying to 2
I160816 05:16:45.211380 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:45.211397 gossip/server.go:288 node 2: replying to 1
I160816 05:16:45.211449 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:45.211471 gossip/server.go:288 node 2: replying to 1
I160816 05:16:45.211489 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:46.211050 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:46.211292 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:46.211327 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:46.211344 gossip/server.go:288 node 1: replying to 2
I160816 05:16:46.211412 gossip/server.go:288 node 2: replying to 1
I160816 05:16:46.211462 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:46.211490 gossip/server.go:288 node 1: replying to 2
I160816 05:16:46.211564 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:46.211598 gossip/server.go:288 node 2: replying to 1
I160816 05:16:46.211645 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:46.211676 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:46.211704 gossip/server.go:288 node 2: replying to 1
I160816 05:16:47.211263 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:47.211541 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:47.211604 gossip/server.go:288 node 1: replying to 2
I160816 05:16:47.211664 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:47.211702 gossip/server.go:288 node 2: replying to 1
I160816 05:16:47.211792 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:47.211831 gossip/server.go:288 node 1: replying to 2
I160816 05:16:47.211845 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:47.211870 gossip/server.go:288 node 2: replying to 1
I160816 05:16:47.211914 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:47.211942 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:47.211981 gossip/server.go:288 node 2: replying to 1
I160816 05:16:48.211551 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:48.211792 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:48.211838 gossip/server.go:288 node 1: replying to 2
I160816 05:16:48.211944 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:48.211966 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:48.211982 gossip/server.go:288 node 2: replying to 1
I160816 05:16:48.211994 gossip/server.go:288 node 1: replying to 2
I160816 05:16:48.212073 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:48.212103 gossip/server.go:288 node 2: replying to 1
I160816 05:16:48.212161 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:48.212176 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:48.212201 gossip/server.go:288 node 2: replying to 1
I160816 05:16:48.212268 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:48.212295 gossip/server.go:288 node 2: replying to 1
I160816 05:16:49.211862 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:49.212137 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:49.212184 gossip/server.go:288 node 1: replying to 2
I160816 05:16:49.212260 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:49.212297 gossip/server.go:288 node 2: replying to 1
I160816 05:16:49.212321 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:49.212339 gossip/server.go:288 node 1: replying to 2
I160816 05:16:49.212380 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:49.212419 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:49.212447 gossip/server.go:288 node 2: replying to 1
I160816 05:16:50.212133 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:50.212367 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:50.212413 gossip/server.go:288 node 1: replying to 2
I160816 05:16:50.212562 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:50.212599 gossip/server.go:288 node 1: replying to 2
I160816 05:16:50.212607 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:50.212646 gossip/server.go:288 node 2: replying to 1
I160816 05:16:50.212765 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:50.212787 gossip/server.go:288 node 2: replying to 1
I160816 05:16:50.212835 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:50.212849 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:50.212873 gossip/server.go:288 node 2: replying to 1
I160816 05:16:51.212400 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:51.212592 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:51.212630 gossip/server.go:288 node 1: replying to 2
I160816 05:16:51.212705 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:51.212724 gossip/server.go:288 node 1: replying to 2
I160816 05:16:51.212833 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:51.212864 gossip/server.go:288 node 2: replying to 1
I160816 05:16:51.212964 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:51.212993 gossip/server.go:288 node 2: replying to 1
I160816 05:16:51.213002 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:51.213057 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:51.213083 gossip/server.go:288 node 2: replying to 1
I160816 05:16:52.212783 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:52.213159 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:52.213190 gossip/server.go:288 node 1: replying to 2
I160816 05:16:52.213207 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:52.213244 gossip/server.go:288 node 2: replying to 1
I160816 05:16:52.213258 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:52.213278 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:52.213299 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:52.213317 gossip/server.go:288 node 2: replying to 1
I160816 05:16:52.213342 gossip/server.go:288 node 1: replying to 2
I160816 05:16:52.213368 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:52.213384 gossip/server.go:288 node 2: replying to 1
I160816 05:16:53.213016 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:53.213309 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:53.213340 gossip/server.go:288 node 1: replying to 2
I160816 05:16:53.213379 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:53.213397 gossip/server.go:288 node 1: replying to 2
I160816 05:16:53.213425 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:53.213450 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:53.213462 gossip/server.go:288 node 2: replying to 1
I160816 05:16:53.213548 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:53.213566 gossip/server.go:288 node 2: replying to 1
I160816 05:16:54.213358 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:54.213639 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:54.213675 gossip/server.go:288 node 1: replying to 2
I160816 05:16:54.213713 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:54.213753 gossip/server.go:288 node 2: replying to 1
I160816 05:16:54.213779 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:54.213807 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:54.213819 gossip/server.go:288 node 1: replying to 2
I160816 05:16:54.213871 gossip/server.go:288 node 2: replying to 1
I160816 05:16:54.213947 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:54.213966 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:54.213991 gossip/server.go:288 node 2: replying to 1
I160816 05:16:54.214085 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:54.214113 gossip/server.go:288 node 2: replying to 1
I160816 05:16:55.213730 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:55.214046 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:55.214086 gossip/server.go:288 node 1: replying to 2
I160816 05:16:55.214137 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:55.214155 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:55.214182 gossip/server.go:288 node 2: replying to 1
I160816 05:16:55.214239 gossip/server.go:288 node 1: replying to 2
I160816 05:16:55.214253 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:55.214268 gossip/server.go:288 node 2: replying to 1
I160816 05:16:55.214357 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:55.214398 gossip/server.go:288 node 2: replying to 1
I160816 05:16:55.214417 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:56.214011 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:56.214247 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:56.214289 gossip/server.go:288 node 1: replying to 2
I160816 05:16:56.214351 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:56.214369 gossip/server.go:288 node 1: replying to 2
I160816 05:16:56.214445 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:56.214472 gossip/server.go:288 node 2: replying to 1
I160816 05:16:56.214530 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:56.214538 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:56.214547 gossip/server.go:288 node 2: replying to 1
I160816 05:16:56.214585 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:56.214601 gossip/server.go:288 node 2: replying to 1
I160816 05:16:57.214379 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:57.214589 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:57.214624 gossip/server.go:288 node 1: replying to 2
I160816 05:16:57.214728 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:57.214742 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:57.214752 gossip/server.go:288 node 1: replying to 2
I160816 05:16:57.214761 gossip/server.go:288 node 2: replying to 1
I160816 05:16:57.214851 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:57.214877 gossip/server.go:288 node 2: replying to 1
I160816 05:16:57.214891 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:58.214618 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:58.214862 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:58.214897 gossip/server.go:288 node 1: replying to 2
I160816 05:16:58.214962 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:58.214988 gossip/server.go:288 node 2: replying to 1
I160816 05:16:58.214993 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:58.215009 gossip/server.go:288 node 1: replying to 2
I160816 05:16:58.215041 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:58.215059 gossip/server.go:288 node 2: replying to 1
I160816 05:16:58.215067 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:59.214848 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:59.215049 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:59.215084 gossip/server.go:288 node 1: replying to 2
I160816 05:16:59.215157 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:59.215177 gossip/server.go:288 node 1: replying to 2
I160816 05:16:59.215268 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:59.215296 gossip/server.go:288 node 2: replying to 1
I160816 05:16:59.215368 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:59.215387 gossip/server.go:288 node 2: replying to 1
I160816 05:16:59.215392 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:59.215449 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:59.215466 gossip/server.go:288 node 2: replying to 1
I160816 05:17:00.215145 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:00.215394 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:00.215425 gossip/server.go:288 node 1: replying to 2
I160816 05:17:00.215460 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:00.215475 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:00.215487 gossip/server.go:288 node 2: replying to 1
I160816 05:17:00.215504 gossip/server.go:288 node 1: replying to 2
I160816 05:17:00.215571 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:00.215590 gossip/server.go:288 node 2: replying to 1
I160816 05:17:00.215621 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:00.215634 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:00.215649 gossip/server.go:288 node 2: replying to 1
I160816 05:17:01.215383 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:01.215671 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:01.215701 gossip/server.go:288 node 1: replying to 2
I160816 05:17:01.215810 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:01.215839 gossip/server.go:288 node 2: replying to 1
I160816 05:17:01.215854 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:01.215877 gossip/server.go:288 node 1: replying to 2
I160816 05:17:01.215902 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:01.215919 gossip/server.go:288 node 2: replying to 1
I160816 05:17:01.215928 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:02.215753 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:02.216056 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:02.216270 gossip/server.go:288 node 1: replying to 2
I160816 05:17:02.216602 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:02.216638 gossip/server.go:288 node 2: replying to 1
I160816 05:17:02.216680 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:02.216717 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:02.216747 gossip/server.go:288 node 2: replying to 1
I160816 05:17:02.216802 gossip/server.go:288 node 1: replying to 2
I160816 05:17:02.216821 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:02.216993 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:02.217375 gossip/server.go:288 node 2: replying to 1
I160816 05:17:03.216136 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:03.216359 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:03.216395 gossip/server.go:288 node 1: replying to 2
I160816 05:17:03.216471 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:03.216492 gossip/server.go:288 node 1: replying to 2
I160816 05:17:03.216529 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:03.216575 gossip/server.go:288 node 2: replying to 1
I160816 05:17:03.216675 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:03.216704 gossip/server.go:288 node 2: replying to 1
I160816 05:17:03.216767 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:04.216428 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:04.216766 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:04.216823 gossip/server.go:288 node 1: replying to 2
I160816 05:17:04.216972 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:04.216996 gossip/server.go:288 node 1: replying to 2
I160816 05:17:04.217086 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:04.217115 gossip/server.go:288 node 2: replying to 1
I160816 05:17:04.217196 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:04.217216 gossip/server.go:288 node 2: replying to 1
I160816 05:17:04.217242 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:04.217286 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:04.217303 gossip/server.go:288 node 2: replying to 1
I160816 05:17:05.216697 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:05.217061 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:05.217095 gossip/server.go:288 node 1: replying to 2
I160816 05:17:05.217160 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:05.217172 gossip/server.go:288 node 1: replying to 2
I160816 05:17:05.217226 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:05.217262 gossip/server.go:288 node 2: replying to 1
I160816 05:17:05.217284 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:05.217337 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:05.217357 gossip/server.go:288 node 2: replying to 1
I160816 05:17:06.217173 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:06.217396 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:06.217433 gossip/server.go:288 node 1: replying to 2
I160816 05:17:06.217495 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:06.217528 gossip/server.go:288 node 2: replying to 1
I160816 05:17:06.217580 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:06.217606 gossip/server.go:288 node 1: replying to 2
I160816 05:17:06.217725 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:06.217765 gossip/server.go:288 node 2: replying to 1
I160816 05:17:06.217811 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:07.217551 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:07.217840 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:07.217883 gossip/server.go:288 node 1: replying to 2
I160816 05:17:07.217983 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:07.218008 gossip/server.go:288 node 1: replying to 2
I160816 05:17:07.218051 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:07.218086 gossip/server.go:288 node 2: replying to 1
I160816 05:17:07.218172 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:07.218190 gossip/server.go:288 node 2: replying to 1
I160816 05:17:07.218239 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:08.217846 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:08.218168 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:08.218203 gossip/server.go:288 node 1: replying to 2
I160816 05:17:08.218280 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:08.218308 gossip/server.go:288 node 2: replying to 1
I160816 05:17:08.218316 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:08.218348 gossip/server.go:288 node 1: replying to 2
I160816 05:17:08.218504 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:08.218524 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:08.218542 gossip/server.go:288 node 2: replying to 1
I160816 05:17:08.218598 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:08.218616 gossip/server.go:288 node 2: replying to 1
I160816 05:17:09.218138 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:09.218376 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:09.218410 gossip/server.go:288 node 1: replying to 2
I160816 05:17:09.218543 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:09.218566 gossip/server.go:288 node 1: replying to 2
I160816 05:17:09.218611 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:09.218640 gossip/server.go:288 node 2: replying to 1
I160816 05:17:09.218726 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:09.218745 gossip/server.go:288 node 2: replying to 1
I160816 05:17:09.218768 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:09.218801 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:09.218817 gossip/server.go:288 node 2: replying to 1
I160816 05:17:10.218413 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:10.218701 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:10.218738 gossip/server.go:288 node 1: replying to 2
I160816 05:17:10.218874 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:10.218894 gossip/server.go:288 node 1: replying to 2
I160816 05:17:10.218963 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:10.218995 gossip/server.go:288 node 2: replying to 1
I160816 05:17:10.219067 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:10.219088 gossip/server.go:288 node 2: replying to 1
I160816 05:17:10.219114 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:11.218692 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:11.218956 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:11.218994 gossip/server.go:288 node 1: replying to 2
I160816 05:17:11.219064 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:11.219086 gossip/server.go:288 node 1: replying to 2
I160816 05:17:11.219183 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:11.219222 gossip/server.go:288 node 2: replying to 1
I160816 05:17:11.219358 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:11.219383 gossip/server.go:288 node 2: replying to 1
I160816 05:17:11.219399 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:11.219430 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:11.219447 gossip/server.go:288 node 2: replying to 1
I160816 05:17:12.219128 http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:44492->127.0.0.1:53786: use of closed network connection
--- FAIL: TestClientGossipMetrics (45.11s)
testing.go:117: gossip/client_test.go:143, condition failed to evaluate within 45s: 1: expected metrics gauge "gossip.connections.incoming" > 0; = 0
```
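For context on the failure line: the condition at gossip/client_test.go:143 is a polled assertion — the test repeatedly samples the "gossip.connections.incoming" gauge (the "1:" prefix in the message suggests node 1) and gives up once 45s pass without the value rising above 0. Below is a minimal Go sketch of that style of polled metrics check; the helper and type names are hypothetical stand-ins, not the actual CockroachDB test utilities.
```
package main

import (
	"fmt"
	"time"
)

// gauge is a stand-in for a node's "gossip.connections.incoming" metric.
// The real metric type in CockroachDB differs; this is illustrative only.
type gauge struct{ value int64 }

func (g *gauge) Value() int64 { return g.value }

// succeedsWithin polls fn until it returns nil or the timeout elapses,
// mirroring the "condition failed to evaluate within 45s" message above.
func succeedsWithin(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("condition failed to evaluate within %s: %v", timeout, err)
		}
		time.Sleep(10 * time.Millisecond)
	}
}

func main() {
	incoming := &gauge{} // stays at 0, as in the failing run above
	// A short timeout keeps the sketch quick to run; the real test waits 45s.
	err := succeedsWithin(500*time.Millisecond, func() error {
		if v := incoming.Value(); v <= 0 {
			return fmt.Errorf(`expected metrics gauge "gossip.connections.incoming" > 0; = %d`, v)
		}
		return nil
	})
	fmt.Println(err)
}
```
The sketch only shows the shape of the check (sample a gauge, retry until a deadline); it makes no claim about why the gauge stayed at 0 in this particular run.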
Run Details:
```
73 runs so far, 0 failures, over 5s
131 runs so far, 0 failures, over 10s
168 runs so far, 0 failures, over 15s
195 runs so far, 0 failures, over 20s
197 runs so far, 0 failures, over 25s
197 runs so far, 0 failures, over 30s
197 runs so far, 0 failures, over 35s
197 runs so far, 0 failures, over 40s
197 runs so far, 0 failures, over 45s
198 runs completed, 1 failures, over 45s
FAIL
```
Please assign, take a look and update the issue accordingly. | 1.0 | stress: failed test in cockroach/gossip/gossip.test: TestClientGossipMetrics - Binary: cockroach/static-tests.tar.gz sha: https://github.com/cockroachdb/cockroach/commits/750f5d01f06ea79dde964fb5d87c2f933569ba29
Stress build found a failed test:
```
=== RUN TestClientGossipMetrics
W160816 05:16:27.118071 gossip/gossip.go:1022 not connected to cluster; use --join to specify a connected node
W160816 05:16:27.119926 gossip/gossip.go:1022 not connected to cluster; use --join to specify a connected node
I160816 05:16:27.121383 gossip/client.go:75 node 2: starting client to 127.0.0.1:44492
I160816 05:16:27.121489 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.126785 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.126824 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.126974 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.126996 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.127040 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.127054 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.127130 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.127146 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.127165 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.127179 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.127230 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.127250 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.127300 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.127326 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.127551 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.127564 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.127588 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.127608 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.127636 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.127814 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.127841 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.127879 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.128803 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.128862 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.129334 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.129377 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.129444 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.129484 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.129552 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.129614 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.129704 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.129769 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.130079 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.130110 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.130159 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.130177 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.130222 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.130242 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.130290 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.130293 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.130317 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.130384 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.130476 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.130508 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.130522 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.130561 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.130598 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.130625 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.130650 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.130687 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.130691 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.130709 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.130747 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.130766 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.130778 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.130817 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.130835 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.130890 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.130912 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.131026 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.131405 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.131432 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.131468 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.131490 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.131526 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.131548 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.131645 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.131681 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.131695 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.131758 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.131777 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.133256 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.134074 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.134099 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.134164 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.134182 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.134236 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.134259 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.134325 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.134350 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.134400 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.134417 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.134538 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.134558 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.134714 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.137600 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.138172 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.138217 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.138323 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.138351 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.138421 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.138445 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.138537 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.138557 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.138609 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.138627 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.138675 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.138698 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.138863 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.146184 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.146366 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.146396 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.146655 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.146685 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.146783 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.146811 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.146822 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.146875 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.146896 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.163266 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.163574 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.163610 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.163684 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.163708 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.163765 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.163802 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.163832 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.163907 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.163930 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.163985 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.164003 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.164172 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.164208 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.197121 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.197337 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.197398 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.197491 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.197521 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.197612 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.197657 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.197729 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.197756 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.197775 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.197808 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.197894 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.265189 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.265396 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.265437 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.265612 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.265641 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.265742 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.265764 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.265779 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.265818 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.265837 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.399581 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.399854 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.399904 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.400013 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.400044 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.400076 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.400112 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.400250 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.400405 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.400435 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.668212 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:27.669339 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.669385 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.669442 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.669452 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:27.669472 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.669550 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.669565 gossip/server.go:288 node 1: replying to 2
I160816 05:16:27.669571 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.669630 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.669649 gossip/server.go:288 node 2: replying to 1
I160816 05:16:27.669703 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:27.669732 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:27.669754 gossip/server.go:288 node 2: replying to 1
I160816 05:16:28.205710 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:28.205967 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:28.206011 gossip/server.go:288 node 1: replying to 2
I160816 05:16:28.206081 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:28.206106 gossip/server.go:288 node 1: replying to 2
I160816 05:16:28.206115 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:28.206141 gossip/server.go:288 node 2: replying to 1
I160816 05:16:28.206182 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:28.206218 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:28.206251 gossip/server.go:288 node 2: replying to 1
I160816 05:16:29.205944 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:29.206133 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:29.206177 gossip/server.go:288 node 1: replying to 2
I160816 05:16:29.206402 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:29.206450 gossip/server.go:288 node 2: replying to 1
I160816 05:16:29.206573 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:29.206604 gossip/server.go:288 node 2: replying to 1
I160816 05:16:29.206637 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:30.206136 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:30.206310 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:30.206355 gossip/server.go:288 node 1: replying to 2
I160816 05:16:30.206515 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:30.206551 gossip/server.go:288 node 2: replying to 1
I160816 05:16:30.206587 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:30.206617 gossip/server.go:288 node 1: replying to 2
I160816 05:16:30.206647 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:30.206673 gossip/server.go:288 node 2: replying to 1
I160816 05:16:30.206694 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:30.206726 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:30.206751 gossip/server.go:288 node 2: replying to 1
I160816 05:16:31.206743 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:31.206944 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:31.206991 gossip/server.go:288 node 1: replying to 2
I160816 05:16:31.207157 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:31.207188 gossip/server.go:288 node 2: replying to 1
I160816 05:16:31.207199 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:31.207230 gossip/server.go:288 node 1: replying to 2
I160816 05:16:31.207288 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:31.207314 gossip/server.go:288 node 2: replying to 1
I160816 05:16:31.207345 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:32.206969 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:32.207225 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:32.207263 gossip/server.go:288 node 1: replying to 2
I160816 05:16:32.207377 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:32.207383 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:32.207404 gossip/server.go:288 node 2: replying to 1
I160816 05:16:32.207409 gossip/server.go:288 node 1: replying to 2
I160816 05:16:32.207610 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:32.207634 gossip/server.go:288 node 2: replying to 1
I160816 05:16:32.207647 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:32.207698 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:32.207716 gossip/server.go:288 node 2: replying to 1
I160816 05:16:33.208061 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:33.208266 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:33.208315 gossip/server.go:288 node 1: replying to 2
I160816 05:16:33.208421 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:33.208447 gossip/server.go:288 node 1: replying to 2
I160816 05:16:33.208484 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:33.208510 gossip/server.go:288 node 2: replying to 1
I160816 05:16:33.208634 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:33.208653 gossip/server.go:288 node 2: replying to 1
I160816 05:16:33.208689 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:33.208710 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:33.208739 gossip/server.go:288 node 2: replying to 1
I160816 05:16:34.208205 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:34.208500 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:34.208544 gossip/server.go:288 node 1: replying to 2
I160816 05:16:34.208649 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:34.208664 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:34.208680 gossip/server.go:288 node 2: replying to 1
I160816 05:16:34.208683 gossip/server.go:288 node 1: replying to 2
I160816 05:16:34.208743 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:34.208753 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:34.208771 gossip/server.go:288 node 2: replying to 1
I160816 05:16:35.208408 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:35.208716 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:35.208756 gossip/server.go:288 node 1: replying to 2
I160816 05:16:35.208841 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:35.208862 gossip/server.go:288 node 1: replying to 2
I160816 05:16:35.208901 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:35.208933 gossip/server.go:288 node 2: replying to 1
I160816 05:16:35.209025 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:35.209054 gossip/server.go:288 node 2: replying to 1
I160816 05:16:35.209130 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:35.209155 gossip/server.go:288 node 2: replying to 1
I160816 05:16:35.209180 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:36.208729 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:36.209006 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:36.209046 gossip/server.go:288 node 1: replying to 2
I160816 05:16:36.209099 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:36.209133 gossip/server.go:288 node 2: replying to 1
I160816 05:16:36.209153 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:36.209173 gossip/server.go:288 node 1: replying to 2
I160816 05:16:36.209207 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:36.209216 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:36.209258 gossip/server.go:288 node 2: replying to 1
I160816 05:16:37.208908 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:37.209179 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:37.209222 gossip/server.go:288 node 1: replying to 2
I160816 05:16:37.209347 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:37.209388 gossip/server.go:288 node 1: replying to 2
I160816 05:16:37.209426 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:37.209461 gossip/server.go:288 node 2: replying to 1
I160816 05:16:37.209516 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:38.209178 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:38.209366 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:38.209414 gossip/server.go:288 node 1: replying to 2
I160816 05:16:38.209563 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:38.209599 gossip/server.go:288 node 2: replying to 1
I160816 05:16:38.209706 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:38.209736 gossip/server.go:288 node 2: replying to 1
I160816 05:16:38.209765 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:38.209800 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:38.209834 gossip/server.go:288 node 2: replying to 1
I160816 05:16:39.209455 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:39.209769 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:39.209819 gossip/server.go:288 node 1: replying to 2
I160816 05:16:39.209839 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:39.209864 gossip/server.go:288 node 2: replying to 1
I160816 05:16:39.209886 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:39.209907 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:39.209944 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:39.209972 gossip/server.go:288 node 2: replying to 1
I160816 05:16:39.209998 gossip/server.go:288 node 1: replying to 2
I160816 05:16:40.209686 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:40.209871 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:40.209907 gossip/server.go:288 node 1: replying to 2
I160816 05:16:40.210075 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:40.210112 gossip/server.go:288 node 2: replying to 1
I160816 05:16:40.210214 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:40.210243 gossip/server.go:288 node 2: replying to 1
I160816 05:16:40.210285 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:41.209869 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:41.210069 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:41.210120 gossip/server.go:288 node 1: replying to 2
I160816 05:16:41.210281 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:41.210311 gossip/server.go:288 node 2: replying to 1
I160816 05:16:41.210405 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:41.210433 gossip/server.go:288 node 2: replying to 1
I160816 05:16:41.210446 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:41.210485 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:41.210503 gossip/server.go:288 node 2: replying to 1
I160816 05:16:42.210182 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:42.210401 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:42.210430 gossip/server.go:288 node 1: replying to 2
I160816 05:16:42.210596 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:42.210619 gossip/server.go:288 node 1: replying to 2
I160816 05:16:42.210625 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:42.210660 gossip/server.go:288 node 2: replying to 1
I160816 05:16:42.210749 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:42.210781 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:42.210811 gossip/server.go:288 node 2: replying to 1
I160816 05:16:42.210891 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:42.210916 gossip/server.go:288 node 2: replying to 1
I160816 05:16:43.210403 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:43.210688 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:43.210730 gossip/server.go:288 node 1: replying to 2
I160816 05:16:43.210772 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:43.210798 gossip/server.go:288 node 2: replying to 1
I160816 05:16:43.210834 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:43.210851 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:43.210860 gossip/server.go:288 node 1: replying to 2
I160816 05:16:43.210867 gossip/server.go:288 node 2: replying to 1
I160816 05:16:43.210952 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:43.210966 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:43.210979 gossip/server.go:288 node 2: replying to 1
I160816 05:16:44.210602 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:44.210821 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:44.210868 gossip/server.go:288 node 1: replying to 2
I160816 05:16:44.210951 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:44.210986 gossip/server.go:288 node 2: replying to 1
I160816 05:16:44.211022 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:44.211061 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:44.211079 gossip/server.go:288 node 2: replying to 1
I160816 05:16:44.211104 gossip/server.go:288 node 1: replying to 2
I160816 05:16:44.211163 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:44.211191 gossip/server.go:288 node 2: replying to 1
I160816 05:16:44.211225 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:45.210810 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:45.211057 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:45.211110 gossip/server.go:288 node 1: replying to 2
I160816 05:16:45.211217 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:45.211243 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:45.211255 gossip/server.go:288 node 2: replying to 1
I160816 05:16:45.211275 gossip/server.go:288 node 1: replying to 2
I160816 05:16:45.211380 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:45.211397 gossip/server.go:288 node 2: replying to 1
I160816 05:16:45.211449 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:45.211471 gossip/server.go:288 node 2: replying to 1
I160816 05:16:45.211489 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:46.211050 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:46.211292 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:46.211327 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:46.211344 gossip/server.go:288 node 1: replying to 2
I160816 05:16:46.211412 gossip/server.go:288 node 2: replying to 1
I160816 05:16:46.211462 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:46.211490 gossip/server.go:288 node 1: replying to 2
I160816 05:16:46.211564 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:46.211598 gossip/server.go:288 node 2: replying to 1
I160816 05:16:46.211645 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:46.211676 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:46.211704 gossip/server.go:288 node 2: replying to 1
I160816 05:16:47.211263 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:47.211541 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:47.211604 gossip/server.go:288 node 1: replying to 2
I160816 05:16:47.211664 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:47.211702 gossip/server.go:288 node 2: replying to 1
I160816 05:16:47.211792 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:47.211831 gossip/server.go:288 node 1: replying to 2
I160816 05:16:47.211845 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:47.211870 gossip/server.go:288 node 2: replying to 1
I160816 05:16:47.211914 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:47.211942 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:47.211981 gossip/server.go:288 node 2: replying to 1
I160816 05:16:48.211551 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:48.211792 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:48.211838 gossip/server.go:288 node 1: replying to 2
I160816 05:16:48.211944 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:48.211966 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:48.211982 gossip/server.go:288 node 2: replying to 1
I160816 05:16:48.211994 gossip/server.go:288 node 1: replying to 2
I160816 05:16:48.212073 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:48.212103 gossip/server.go:288 node 2: replying to 1
I160816 05:16:48.212161 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:48.212176 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:48.212201 gossip/server.go:288 node 2: replying to 1
I160816 05:16:48.212268 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:48.212295 gossip/server.go:288 node 2: replying to 1
I160816 05:16:49.211862 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:49.212137 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:49.212184 gossip/server.go:288 node 1: replying to 2
I160816 05:16:49.212260 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:49.212297 gossip/server.go:288 node 2: replying to 1
I160816 05:16:49.212321 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:49.212339 gossip/server.go:288 node 1: replying to 2
I160816 05:16:49.212380 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:49.212419 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:49.212447 gossip/server.go:288 node 2: replying to 1
I160816 05:16:50.212133 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:50.212367 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:50.212413 gossip/server.go:288 node 1: replying to 2
I160816 05:16:50.212562 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:50.212599 gossip/server.go:288 node 1: replying to 2
I160816 05:16:50.212607 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:50.212646 gossip/server.go:288 node 2: replying to 1
I160816 05:16:50.212765 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:50.212787 gossip/server.go:288 node 2: replying to 1
I160816 05:16:50.212835 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:50.212849 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:50.212873 gossip/server.go:288 node 2: replying to 1
I160816 05:16:51.212400 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:51.212592 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:51.212630 gossip/server.go:288 node 1: replying to 2
I160816 05:16:51.212705 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:51.212724 gossip/server.go:288 node 1: replying to 2
I160816 05:16:51.212833 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:51.212864 gossip/server.go:288 node 2: replying to 1
I160816 05:16:51.212964 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:51.212993 gossip/server.go:288 node 2: replying to 1
I160816 05:16:51.213002 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:51.213057 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:51.213083 gossip/server.go:288 node 2: replying to 1
I160816 05:16:52.212783 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:52.213159 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:52.213190 gossip/server.go:288 node 1: replying to 2
I160816 05:16:52.213207 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:52.213244 gossip/server.go:288 node 2: replying to 1
I160816 05:16:52.213258 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:52.213278 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:52.213299 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:52.213317 gossip/server.go:288 node 2: replying to 1
I160816 05:16:52.213342 gossip/server.go:288 node 1: replying to 2
I160816 05:16:52.213368 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:52.213384 gossip/server.go:288 node 2: replying to 1
I160816 05:16:53.213016 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:53.213309 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:53.213340 gossip/server.go:288 node 1: replying to 2
I160816 05:16:53.213379 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:53.213397 gossip/server.go:288 node 1: replying to 2
I160816 05:16:53.213425 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:53.213450 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:53.213462 gossip/server.go:288 node 2: replying to 1
I160816 05:16:53.213548 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:53.213566 gossip/server.go:288 node 2: replying to 1
I160816 05:16:54.213358 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:54.213639 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:54.213675 gossip/server.go:288 node 1: replying to 2
I160816 05:16:54.213713 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:54.213753 gossip/server.go:288 node 2: replying to 1
I160816 05:16:54.213779 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:54.213807 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:54.213819 gossip/server.go:288 node 1: replying to 2
I160816 05:16:54.213871 gossip/server.go:288 node 2: replying to 1
I160816 05:16:54.213947 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:54.213966 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:54.213991 gossip/server.go:288 node 2: replying to 1
I160816 05:16:54.214085 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:54.214113 gossip/server.go:288 node 2: replying to 1
I160816 05:16:55.213730 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:55.214046 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:55.214086 gossip/server.go:288 node 1: replying to 2
I160816 05:16:55.214137 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:55.214155 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:55.214182 gossip/server.go:288 node 2: replying to 1
I160816 05:16:55.214239 gossip/server.go:288 node 1: replying to 2
I160816 05:16:55.214253 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:55.214268 gossip/server.go:288 node 2: replying to 1
I160816 05:16:55.214357 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:55.214398 gossip/server.go:288 node 2: replying to 1
I160816 05:16:55.214417 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:56.214011 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:56.214247 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:56.214289 gossip/server.go:288 node 1: replying to 2
I160816 05:16:56.214351 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:56.214369 gossip/server.go:288 node 1: replying to 2
I160816 05:16:56.214445 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:56.214472 gossip/server.go:288 node 2: replying to 1
I160816 05:16:56.214530 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:56.214538 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:56.214547 gossip/server.go:288 node 2: replying to 1
I160816 05:16:56.214585 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:56.214601 gossip/server.go:288 node 2: replying to 1
I160816 05:16:57.214379 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:57.214589 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:57.214624 gossip/server.go:288 node 1: replying to 2
I160816 05:16:57.214728 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:57.214742 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:57.214752 gossip/server.go:288 node 1: replying to 2
I160816 05:16:57.214761 gossip/server.go:288 node 2: replying to 1
I160816 05:16:57.214851 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:57.214877 gossip/server.go:288 node 2: replying to 1
I160816 05:16:57.214891 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:58.214618 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:58.214862 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:58.214897 gossip/server.go:288 node 1: replying to 2
I160816 05:16:58.214962 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:58.214988 gossip/server.go:288 node 2: replying to 1
I160816 05:16:58.214993 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:58.215009 gossip/server.go:288 node 1: replying to 2
I160816 05:16:58.215041 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:58.215059 gossip/server.go:288 node 2: replying to 1
I160816 05:16:58.215067 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:59.214848 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:16:59.215049 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:59.215084 gossip/server.go:288 node 1: replying to 2
I160816 05:16:59.215157 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:16:59.215177 gossip/server.go:288 node 1: replying to 2
I160816 05:16:59.215268 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:59.215296 gossip/server.go:288 node 2: replying to 1
I160816 05:16:59.215368 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:59.215387 gossip/server.go:288 node 2: replying to 1
I160816 05:16:59.215392 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:16:59.215449 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:16:59.215466 gossip/server.go:288 node 2: replying to 1
I160816 05:17:00.215145 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:00.215394 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:00.215425 gossip/server.go:288 node 1: replying to 2
I160816 05:17:00.215460 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:00.215475 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:00.215487 gossip/server.go:288 node 2: replying to 1
I160816 05:17:00.215504 gossip/server.go:288 node 1: replying to 2
I160816 05:17:00.215571 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:00.215590 gossip/server.go:288 node 2: replying to 1
I160816 05:17:00.215621 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:00.215634 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:00.215649 gossip/server.go:288 node 2: replying to 1
I160816 05:17:01.215383 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:01.215671 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:01.215701 gossip/server.go:288 node 1: replying to 2
I160816 05:17:01.215810 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:01.215839 gossip/server.go:288 node 2: replying to 1
I160816 05:17:01.215854 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:01.215877 gossip/server.go:288 node 1: replying to 2
I160816 05:17:01.215902 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:01.215919 gossip/server.go:288 node 2: replying to 1
I160816 05:17:01.215928 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:02.215753 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:02.216056 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:02.216270 gossip/server.go:288 node 1: replying to 2
I160816 05:17:02.216602 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:02.216638 gossip/server.go:288 node 2: replying to 1
I160816 05:17:02.216680 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:02.216717 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:02.216747 gossip/server.go:288 node 2: replying to 1
I160816 05:17:02.216802 gossip/server.go:288 node 1: replying to 2
I160816 05:17:02.216821 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:02.216993 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:02.217375 gossip/server.go:288 node 2: replying to 1
I160816 05:17:03.216136 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:03.216359 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:03.216395 gossip/server.go:288 node 1: replying to 2
I160816 05:17:03.216471 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:03.216492 gossip/server.go:288 node 1: replying to 2
I160816 05:17:03.216529 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:03.216575 gossip/server.go:288 node 2: replying to 1
I160816 05:17:03.216675 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:03.216704 gossip/server.go:288 node 2: replying to 1
I160816 05:17:03.216767 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:04.216428 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:04.216766 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:04.216823 gossip/server.go:288 node 1: replying to 2
I160816 05:17:04.216972 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:04.216996 gossip/server.go:288 node 1: replying to 2
I160816 05:17:04.217086 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:04.217115 gossip/server.go:288 node 2: replying to 1
I160816 05:17:04.217196 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:04.217216 gossip/server.go:288 node 2: replying to 1
I160816 05:17:04.217242 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:04.217286 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:04.217303 gossip/server.go:288 node 2: replying to 1
I160816 05:17:05.216697 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:05.217061 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:05.217095 gossip/server.go:288 node 1: replying to 2
I160816 05:17:05.217160 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:05.217172 gossip/server.go:288 node 1: replying to 2
I160816 05:17:05.217226 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:05.217262 gossip/server.go:288 node 2: replying to 1
I160816 05:17:05.217284 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:05.217337 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:05.217357 gossip/server.go:288 node 2: replying to 1
I160816 05:17:06.217173 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:06.217396 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:06.217433 gossip/server.go:288 node 1: replying to 2
I160816 05:17:06.217495 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:06.217528 gossip/server.go:288 node 2: replying to 1
I160816 05:17:06.217580 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:06.217606 gossip/server.go:288 node 1: replying to 2
I160816 05:17:06.217725 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:06.217765 gossip/server.go:288 node 2: replying to 1
I160816 05:17:06.217811 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:07.217551 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:07.217840 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:07.217883 gossip/server.go:288 node 1: replying to 2
I160816 05:17:07.217983 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:07.218008 gossip/server.go:288 node 1: replying to 2
I160816 05:17:07.218051 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:07.218086 gossip/server.go:288 node 2: replying to 1
I160816 05:17:07.218172 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:07.218190 gossip/server.go:288 node 2: replying to 1
I160816 05:17:07.218239 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:08.217846 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:08.218168 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:08.218203 gossip/server.go:288 node 1: replying to 2
I160816 05:17:08.218280 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:08.218308 gossip/server.go:288 node 2: replying to 1
I160816 05:17:08.218316 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:08.218348 gossip/server.go:288 node 1: replying to 2
I160816 05:17:08.218504 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:08.218524 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:08.218542 gossip/server.go:288 node 2: replying to 1
I160816 05:17:08.218598 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:08.218616 gossip/server.go:288 node 2: replying to 1
I160816 05:17:09.218138 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:09.218376 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:09.218410 gossip/server.go:288 node 1: replying to 2
I160816 05:17:09.218543 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:09.218566 gossip/server.go:288 node 1: replying to 2
I160816 05:17:09.218611 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:09.218640 gossip/server.go:288 node 2: replying to 1
I160816 05:17:09.218726 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:09.218745 gossip/server.go:288 node 2: replying to 1
I160816 05:17:09.218768 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:09.218801 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:09.218817 gossip/server.go:288 node 2: replying to 1
I160816 05:17:10.218413 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:10.218701 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:10.218738 gossip/server.go:288 node 1: replying to 2
I160816 05:17:10.218874 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:10.218894 gossip/server.go:288 node 1: replying to 2
I160816 05:17:10.218963 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:10.218995 gossip/server.go:288 node 2: replying to 1
I160816 05:17:10.219067 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:10.219088 gossip/server.go:288 node 2: replying to 1
I160816 05:17:10.219114 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:11.218692 gossip/client.go:75 node 1: starting client to 127.0.0.1:34631
I160816 05:17:11.218956 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:11.218994 gossip/server.go:288 node 1: replying to 2
I160816 05:17:11.219064 gossip/server.go:190 node 1: received gossip from node 2
I160816 05:17:11.219086 gossip/server.go:288 node 1: replying to 2
I160816 05:17:11.219183 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:11.219222 gossip/server.go:288 node 2: replying to 1
I160816 05:17:11.219358 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:11.219383 gossip/server.go:288 node 2: replying to 1
I160816 05:17:11.219399 gossip/client.go:99 node 1: closing client to node 2 (127.0.0.1:34631): stopping outgoing client to node 2 (127.0.0.1:34631); already have incoming
I160816 05:17:11.219430 gossip/server.go:190 node 2: received gossip from node 1
I160816 05:17:11.219447 gossip/server.go:288 node 2: replying to 1
I160816 05:17:12.219128 http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:44492->127.0.0.1:53786: use of closed network connection
--- FAIL: TestClientGossipMetrics (45.11s)
testing.go:117: gossip/client_test.go:143, condition failed to evaluate within 45s: 1: expected metrics gauge "gossip.connections.incoming" > 0; = 0
```
Run Details:
```
73 runs so far, 0 failures, over 5s
131 runs so far, 0 failures, over 10s
168 runs so far, 0 failures, over 15s
195 runs so far, 0 failures, over 20s
197 runs so far, 0 failures, over 25s
197 runs so far, 0 failures, over 30s
197 runs so far, 0 failures, over 35s
197 runs so far, 0 failures, over 40s
197 runs so far, 0 failures, over 45s
198 runs completed, 1 failures, over 45s
FAIL
```
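For context on the failure above: the assertion at gossip/client_test.go:143 keeps re-evaluating a condition on the "gossip.connections.incoming" metrics gauge and only reports failure once the 45s deadline expires with the gauge still at 0. Below is a minimal, self-contained sketch of that polling-assertion pattern; the `succeedsWithin` helper and `gauge` type are invented for illustration and are not CockroachDB's actual test utilities.

```go
// Hypothetical sketch of a retry-until-condition assertion like the one that
// times out in this test: poll a condition until it returns nil or a deadline
// passes, then surface the last error. Names here are illustrative only.
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
	"time"
)

// gauge stands in for a metrics gauge such as "gossip.connections.incoming".
type gauge struct{ v int64 }

func (g *gauge) Value() int64 { return atomic.LoadInt64(&g.v) }
func (g *gauge) Inc()         { atomic.AddInt64(&g.v, 1) }

// succeedsWithin re-evaluates cond every 10ms until it returns nil or d elapses.
func succeedsWithin(d time.Duration, cond func() error) error {
	deadline := time.Now().Add(d)
	for {
		err := cond()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("condition failed to evaluate within %s: %v", d, err)
		}
		time.Sleep(10 * time.Millisecond)
	}
}

func main() {
	incoming := &gauge{}

	// Simulate the incoming connection being registered after a short delay.
	go func() {
		time.Sleep(50 * time.Millisecond)
		incoming.Inc()
	}()

	err := succeedsWithin(time.Second, func() error {
		if incoming.Value() > 0 {
			return nil
		}
		return errors.New(`expected metrics gauge "gossip.connections.incoming" > 0; = 0`)
	})
	fmt.Println("result:", err)
}
```

In the failing run the gauge never moves off 0, so the condition keeps returning an error until the deadline fires and the test reports the message shown in the log.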
Please assign, take a look and update the issue accordingly.
stopping outgoing client to node already have incoming gossip server go node received gossip from node gossip server go node replying to gossip client go node starting client to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip client go node closing client to node stopping outgoing client to node already have incoming gossip client go node starting client to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node received gossip from node gossip server go node replying to gossip server go node replying to gossip server go node received gossip from node gossip client go node closing client to node stopping outgoing client to node already have incoming gossip server go node replying to gossip client go node starting client to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip client go node closing client to node stopping outgoing client to node already have incoming gossip client go node starting client to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip client go node closing client to node stopping outgoing client to node already have incoming gossip server go node received gossip from node gossip server go node replying to gossip client go node starting client to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip client go node closing client to node stopping outgoing client to node already have incoming gossip server go node received gossip from node gossip server go node replying to gossip client go node starting client to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip client go node closing client to node stopping outgoing client to node already have incoming gossip client go node starting client to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip client go node closing client to node stopping outgoing client 
to node already have incoming gossip client go node starting client to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip client go node closing client to node stopping outgoing client to node already have incoming gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip client go node starting client to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip client go node closing client to node stopping outgoing client to node already have incoming gossip server go node received gossip from node gossip server go node replying to gossip client go node starting client to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip client go node closing client to node stopping outgoing client to node already have incoming gossip client go node starting client to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip server go node received gossip from node gossip server go node replying to gossip client go node closing client to node stopping outgoing client to node already have incoming gossip server go node received gossip from node gossip server go node replying to server go transport handlestreams failed to read frame read tcp use of closed network connection fail testclientgossipmetrics testing go gossip client test go condition failed to evaluate within expected metrics gauge gossip connections incoming run details runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs completed failures over fail please assign take a look and update the issue accordingly | 1 |
164,375 | 12,800,818,546 | IssuesEvent | 2020-07-02 17:51:29 | nmfs-fish-tools/RMAS | https://api.github.com/repos/nmfs-fish-tools/RMAS | closed | Unit tests | testing | need unit tests to compare MAS with R generated values for:
recruitment
growth
fishing
mortality | 1.0 | Unit tests - need unit tests to compare MAS with R generated values for:
recruitment
growth
fishing
mortality | test | unit tests need unit tests to compare mas with r generated values for recruitment growth fishing mortality | 1 |
60,960 | 14,596,423,920 | IssuesEvent | 2020-12-20 15:47:42 | billmcchesney1/superagent | https://api.github.com/repos/billmcchesney1/superagent | opened | WS-2020-0091 (High) detected in http-proxy-1.11.2.tgz | security vulnerability | ## WS-2020-0091 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-proxy-1.11.2.tgz</b></p></summary>
<p>HTTP proxying for the masses</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-proxy/-/http-proxy-1.11.2.tgz">https://registry.npmjs.org/http-proxy/-/http-proxy-1.11.2.tgz</a></p>
<p>Path to dependency file: superagent/package.json</p>
<p>Path to vulnerable library: superagent/node_modules/http-proxy/package.json</p>
<p>
Dependency Hierarchy:
- zuul-3.12.0.tgz (Root Library)
- :x: **http-proxy-1.11.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/superagent/commit/77fefdaffd4ef3cef2e5b252e165b5f40fae61d5">77fefdaffd4ef3cef2e5b252e165b5f40fae61d5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function.
<p>Publish Date: 2020-05-14
<p>URL: <a href=https://github.com/http-party/node-http-proxy/pull/1447>WS-2020-0091</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1486">https://www.npmjs.com/advisories/1486</a></p>
<p>Release Date: 2020-05-26</p>
<p>Fix Resolution: http-proxy - 1.18.1 </p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"http-proxy","packageVersion":"1.11.2","isTransitiveDependency":true,"dependencyTree":"zuul:3.12.0;http-proxy:1.11.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"http-proxy - 1.18.1 "}],"vulnerabilityIdentifier":"WS-2020-0091","vulnerabilityDetails":"Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function.","vulnerabilityUrl":"https://github.com/http-party/node-http-proxy/pull/1447","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | WS-2020-0091 (High) detected in http-proxy-1.11.2.tgz - ## WS-2020-0091 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-proxy-1.11.2.tgz</b></p></summary>
<p>HTTP proxying for the masses</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-proxy/-/http-proxy-1.11.2.tgz">https://registry.npmjs.org/http-proxy/-/http-proxy-1.11.2.tgz</a></p>
<p>Path to dependency file: superagent/package.json</p>
<p>Path to vulnerable library: superagent/node_modules/http-proxy/package.json</p>
<p>
Dependency Hierarchy:
- zuul-3.12.0.tgz (Root Library)
- :x: **http-proxy-1.11.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/superagent/commit/77fefdaffd4ef3cef2e5b252e165b5f40fae61d5">77fefdaffd4ef3cef2e5b252e165b5f40fae61d5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function.
<p>Publish Date: 2020-05-14
<p>URL: <a href=https://github.com/http-party/node-http-proxy/pull/1447>WS-2020-0091</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1486">https://www.npmjs.com/advisories/1486</a></p>
<p>Release Date: 2020-05-26</p>
<p>Fix Resolution: http-proxy - 1.18.1 </p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"http-proxy","packageVersion":"1.11.2","isTransitiveDependency":true,"dependencyTree":"zuul:3.12.0;http-proxy:1.11.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"http-proxy - 1.18.1 "}],"vulnerabilityIdentifier":"WS-2020-0091","vulnerabilityDetails":"Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function.","vulnerabilityUrl":"https://github.com/http-party/node-http-proxy/pull/1447","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_test | ws high detected in http proxy tgz ws high severity vulnerability vulnerable library http proxy tgz http proxying for the masses library home page a href path to dependency file superagent package json path to vulnerable library superagent node modules http proxy package json dependency hierarchy zuul tgz root library x http proxy tgz vulnerable library found in head commit a href found in base branch master vulnerability details versions of http proxy prior to are vulnerable to denial of service an http request with a long body triggers an err http headers sent unhandled exception that crashes the proxy server this is only possible when the proxy server sets headers in the proxy request using the proxyreq setheader function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution http proxy isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails versions of http proxy prior to are vulnerable to denial of service an http request with a long body triggers an err http headers sent unhandled exception that crashes the proxy server this is only possible when the proxy server sets headers in the proxy request using the proxyreq setheader function vulnerabilityurl | 0 |
136,403 | 11,048,724,979 | IssuesEvent | 2019-12-09 21:48:03 | MangopearUK/European-Boating-Association--Theme | https://api.github.com/repos/MangopearUK/European-Boating-Association--Theme | closed | Preconnect for better perf | Testing Type: Enhancement Work: Development |
URL | Potential Savings
-- | --
https://stats.g.doubleclick.net | 320 ms
https://fonts.googleapis.com | 310 ms
https://www.google-analytics.com | 150 ms
| 1.0 | Preconnect for better perf -
URL | Potential Savings
-- | --
https://stats.g.doubleclick.net | 320 ms
https://fonts.googleapis.com | 310 ms
https://www.google-analytics.com | 150 ms
| test | preconnect for better perf url potential savings ms ms ms | 1 |
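The savings in that table come from opening connections to third-party origins before they are first used; a small TypeScript sketch of emitting `preconnect` hints for exactly those origins (how the theme actually injects them is an assumption, not shown in the issue):

```typescript
// Add <link rel="preconnect"> hints for the origins Lighthouse flagged above.
const origins = [
  "https://stats.g.doubleclick.net",
  "https://fonts.googleapis.com",
  "https://www.google-analytics.com",
];

for (const href of origins) {
  const link = document.createElement("link");
  link.rel = "preconnect";
  link.href = href;
  // crossorigin only matters when the origin is fetched with CORS (e.g. web fonts).
  link.crossOrigin = "anonymous";
  document.head.appendChild(link);
}
```

In a theme like this one, the same hints would more typically be printed server-side into the `<head>` template rather than added from script.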
28,818 | 2,711,821,269 | IssuesEvent | 2015-04-09 09:26:30 | manugarg/chrome-page-notes | https://api.github.com/repos/manugarg/chrome-page-notes | closed | bug: SyntaxError: Unexpected end of input | bug imported Priority-Medium | _From [dennisgd...@gmail.com](https://code.google.com/u/116948076340592829238/) on October 29, 2013 06:04:15_
browser:
Google Chrome 30.0.1599.114 \(Official Build 229842\)
OS Linux
Blink 537.36 \(`@`159105\)
JavaScript V8 3.20.17.15
Flash 11.9.900.117
User Agent Mozilla/5.0 \(X11; Linux x86\_64\) AppleWebKit/537.36 \(KHTML, like Gecko\) Chrome/30.0.1599.114 Safari/537.36
Command Line /opt/google/chrome/google\-chrome \-\-flag\-switches\-begin \-\-flag\-switches\-end
Executable Path /opt/google/chrome/google\-chrome
Profile Path /home/dennis/.config/google\-chrome/Default
Variations 6dcb530d\-954431d1
f324989d\-87cd3fec
b03ddc1f\-2d9ef0cc
f9b252d0\-fd526c81
3664a344\-be9e69ba
24dca50e\-837c4893
ca65a9fe\-bf3100d4
8d790604\-9cb2a91c
5a3c10b5\-e1cc0f14
244ca1ac\-4ad60575
5e29d81\-f23d1dea
246fb659\-6bdfffe7
f296190c\-24dd8f80
4442aae2\-a90023b1
ed1d377\-e1cc0f14
75f0f0a0\-e1cc0f14
e2b18481\-d7f6b13c
e7e71889\-4ad60575
Messages from last sync:
sync: Starting sync at: Tue Oct 29 2013 14:58:43 GMT\+0200 \(EET\)
sync: First sync, merging local and remote data.
SyntaxError: Unexpected end of input
bug goes deeper
All Page Notes does not show my tag but does show URL and edits are not saved
pagenotes.data is empty despite repeated ctrl \+f5
_Original issue: http://code.google.com/p/chrome-page-notes/issues/detail?id=22_ | 1.0 | bug: SyntaxError: Unexpected end of input - _From [dennisgd...@gmail.com](https://code.google.com/u/116948076340592829238/) on October 29, 2013 06:04:15_
browser:
Google Chrome 30.0.1599.114 \(Official Build 229842\)
OS Linux
Blink 537.36 \(`@`159105\)
JavaScript V8 3.20.17.15
Flash 11.9.900.117
User Agent Mozilla/5.0 \(X11; Linux x86\_64\) AppleWebKit/537.36 \(KHTML, like Gecko\) Chrome/30.0.1599.114 Safari/537.36
Command Line /opt/google/chrome/google\-chrome \-\-flag\-switches\-begin \-\-flag\-switches\-end
Executable Path /opt/google/chrome/google\-chrome
Profile Path /home/dennis/.config/google\-chrome/Default
Variations 6dcb530d\-954431d1
f324989d\-87cd3fec
b03ddc1f\-2d9ef0cc
f9b252d0\-fd526c81
3664a344\-be9e69ba
24dca50e\-837c4893
ca65a9fe\-bf3100d4
8d790604\-9cb2a91c
5a3c10b5\-e1cc0f14
244ca1ac\-4ad60575
5e29d81\-f23d1dea
246fb659\-6bdfffe7
f296190c\-24dd8f80
4442aae2\-a90023b1
ed1d377\-e1cc0f14
75f0f0a0\-e1cc0f14
e2b18481\-d7f6b13c
e7e71889\-4ad60575
Messages from last sync:
sync: Starting sync at: Tue Oct 29 2013 14:58:43 GMT\+0200 \(EET\)
sync: First sync, merging local and remote data.
SyntaxError: Unexpected end of input
bug goes deeper
All Page Notes does not show my tag but does show URL and edits are not saved
pagenotes.data is empty despite repeated ctrl \+f5
_Original issue: http://code.google.com/p/chrome-page-notes/issues/detail?id=22_ | non_test | bug syntaxerror unexpected end of input from on october browser google chrome nbsp nbsp nbsp nbsp nbsp nbsp nbsp official build os nbsp nbsp nbsp nbsp nbsp nbsp nbsp linux blink nbsp nbsp nbsp nbsp nbsp nbsp nbsp javascript nbsp nbsp nbsp nbsp nbsp nbsp nbsp flash nbsp nbsp nbsp nbsp nbsp nbsp nbsp user agent nbsp nbsp nbsp nbsp nbsp nbsp nbsp mozilla linux applewebkit khtml like gecko chrome safari command line nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp opt google chrome google chrome flag switches begin flag switches end executable path nbsp nbsp nbsp nbsp nbsp nbsp nbsp opt google chrome google chrome profile path nbsp nbsp nbsp nbsp nbsp nbsp nbsp home dennis config google chrome default variations nbsp nbsp nbsp nbsp nbsp nbsp nbsp messages from last sync sync starting sync at tue oct gmt eet sync first sync merging local and remote data syntaxerror unexpected end of input bug goes deeper all page notes does not show my tag but does show url and edits are not saved pagenotes data is empty despite repeated ctrl original issue | 0 |
11,876 | 3,236,220,671 | IssuesEvent | 2015-10-14 02:54:32 | UCI-UAVForge/Avionics | https://api.github.com/repos/UCI-UAVForge/Avionics | opened | Current Measurement IC Testing | scope: power supply firmware type: testing | Once tasks #30 and #31 are complete, setup the evaluation board so that a current of 2A is flowing through it. This can be done using a resistor and a power supply (V=IR). Look at the example circuits in the datasheet to see how to connect the resistor and power supply to the evaluation board. Remember to power the evaluation board using a separate supply so that the board isn't also measuring its own usage. Upload your code to the arduino, connect it to the evaluation board via i2c and test your programming.
Deliverable for this item: screenshot of readings of the chip sent back to your computer (using arduino serial monitor)
Estimated time for this item: 3 Hours | 1.0 | Current Measurement IC Testing - Once tasks #30 and #31 are complete, setup the evaluation board so that a current of 2A is flowing through it. This can be done using a resistor and a power supply (V=IR). Look at the example circuits in the datasheet to see how to connect the resistor and power supply to the evaluation board. Remember to power the evaluation board using a separate supply so that the board isn't also measuring its own usage. Upload your code to the arduino, connect it to the evaluation board via i2c and test your programming.
Deliverable for this item: screenshot of readings of the chip sent back to your computer (using arduino serial monitor)
Estimated time for this item: 3 Hours | test | current measurement ic testing once tasks and are complete setup the evaluation board so that a current of is flowing through it this can be done using a resistor and a power supply v ir look at the example circuits in the datasheet to see how to connect the resistor and power supply to the evaluation board remember to power the evaluation board using a separate supply so that the board isn t also measuring its own usage upload your code to the arduino connect it to the evaluation board via and test your programming deliverable for this item screenshot of readings of the chip sent back to your computer using arduino serial monitor estimated time for this item hours | 1 |
116,435 | 11,912,218,036 | IssuesEvent | 2020-03-31 09:54:05 | ComputationalRadiationPhysics/alpaka | https://api.github.com/repos/ComputationalRadiationPhysics/alpaka | opened | alpaka repository ownership transfer | documentation | The ownership of the alpaka repository will be transferred on 3rd April to https://github.com/alpaka-group. It could be that the CI will be offline for a short time period.
This will change the link to the repository to https://github.com/alpaka-group/alpaka.
@ComputationalRadiationPhysics/alpaka-maintainers @ComputationalRadiationPhysics/alpaka-developers | 1.0 | alpaka repository ownership transfer - The ownership of the alpaka repository will be transferred on 3rd April to https://github.com/alpaka-group. It could be that the CI will be offline for a short time period.
This will change the link to the repository to https://github.com/alpaka-group/alpaka.
@ComputationalRadiationPhysics/alpaka-maintainers @ComputationalRadiationPhysics/alpaka-developers | non_test | alpaka repository ownership transfer the owner ship of the alpaka repository will be transfered on april to it could be that the ci will be offline for a short time period this will change the link to the repository to computationalradiationphysics alpaka maintainers computationalradiationphysics alpaka developers | 0 |
223,099 | 17,567,561,882 | IssuesEvent | 2021-08-14 02:26:57 | valora-inc/wallet | https://api.github.com/repos/valora-inc/wallet | opened | Mobile & Component Tests Shown As Passing In CI Pipeline But Are Not Running | bug Priority: P1 Component: Tests | ### Current behavior
- Mobile Tests and Component Tests are shown as passing but are not running in CI Pipeline
### Desired behavior
- Tests Run and Pass
### Steps to Reproduce
Run Mobile or Component Tests in the pipeline

| 1.0 | Mobile & Component Tests Shown As Passing In CI Pipeline But Are Not Running - ### Current behavior
- Mobile Tests and Component Tests are shown as passing but are not running in CI Pipeline
### Desired behavior
- Tests Run and Pass
### Steps to Reproduce
Run Mobile or Component Tests in the pipeline

| test | mobile component tests shown as passing in ci pipeline but are not running current behavior mobile tests and component tests are shown as passing but are not running in ci pipeline desired behavior tests run and pass steps to reproduce run mobile or component tests in the pipeline | 1 |
227,904 | 18,108,501,486 | IssuesEvent | 2021-09-22 22:27:19 | ueberdosis/tiptap | https://api.github.com/repos/ueberdosis/tiptap | closed | Selection changed only on second click | bug should have a test in v2 | When I use the Focus-Plugin and if the current focus is outside the editor you have to click twice to change the cursor position. This only happens in Firefox but not in Chrome.
I could reproduce the behaviour with the focus example on https://tiptap.scrumpy.io/focus
- if you load the page the editor has focus and the selection is at the beginning of the text
- then click outside the editor so it loses focus
- when you then click at a new position the editor regains focus and the old selection is restored but should change to the position where I clicked
- a second click eventually changes the selection as expected | 1.0 | Selection changed only on second click - When I use the Focus-Plugin and if the current focus is outside the editor you have to click twice to change the cursor position. This only happens in Firefox but not in Chrome.
I could reproduce the behaviour with the focus example on https://tiptap.scrumpy.io/focus
- if you load the page the editor has focus and the selection is at the beginning of the text
- then click outside the editor so it loses focus
- when you then click at a new position the editor regains focus and the old selection is restored but should change to the position where I clicked
- a second click eventually changes the selection as expected | test | selection changed only on second click when i use the focus plugin and if the current focus is outside the editor you have to click twice to change the cursor position this only happens in firefox but not in chrome i could reproduce the behaviour with the focus example on if you load the page the editor has focus and the selection is at the beginning of the text then click outside the editor so it looses the focus when you then click at a new position the editor regains focus and the old selection is restored but should change to the position where i clicked a second click eventually changes the selection as expected | 1 |
11,011 | 3,158,364,447 | IssuesEvent | 2015-09-18 00:12:46 | schwehr/libais | https://api.github.com/repos/schwehr/libais | opened | TAG Block complaint about more than one message | py testing | This NMEA TAG Block pair of lines:
\g:1-2-2977,n:24154,s:r17AWOM1,c:1434585618*5F\$ARFSR,r17AWOM1,000018,A,0017,0,0000,,,-127,*62
\g:2-2-2977,n:24155*19\$ARFSR,r17AWOM1,000018,B,0021,0,0000,,,-126,*65
Generates this complaint:
INFO:root:Error: Should get just one message decoded from this: {'matches': [{'sentence_tot': '2', 'group': '1-2-2977', 'line_num': '24154', 'dest': None, 'text': None, 'rcvr': 'r17AWOM1', 'text_date': None, 'group_id': '2977', 'tag_checksum': '5F', 'sentence_num': '1', 'time': '1434585618', 'rel_time': None, 'quality': None, 'payload': '$ARFSR,r17AWOM1,000018,A,0017,0,0000,,,-127,*62', 'metadata': 'g:1-2-2977,n:24154,s:r17AWOM1,c:1434585618*5F'}, {'sentence_tot': '2', 'group': '2-2-2977', 'line_num': '24155', 'dest': None, 'text': None, 'rcvr': None, 'text_date': None, 'group_id': '2977', 'tag_checksum': '19', 'sentence_num': '2', 'time': None, 'rel_time': None, 'quality': None, 'payload': '$ARFSR,r17AWOM1,000018,B,0021,0,0000,,,-126,*65', 'metadata': 'g:2-2-2977,n:24155*19'}], 'times': [1434585618, None], 'lines': ['\\g:1-2-2977,n:24154,s:r17AWOM1,c:1434585618*5F\\$ARFSR,r17AWOM1,000018,A,0017,0,0000,,,-127,*62', '\\g:2-2-2977,n:24155*19\\$ARFSR,r17AWOM1,000018,B,0021,0,0000,,,-126,*65'], 'line_nums': [1, 2]}
INFO:root:Unable to process: {'matches': [{'sentence_tot': '2', 'group': '1-2-2977', 'line_num': '24154', 'dest': None, 'text': None, 'rcvr': 'r17AWOM1', 'text_date': None, 'group_id': '2977', 'tag_checksum': '5F', 'sentence_num': '1', 'time': '1434585618', 'rel_time': None, 'quality': None, 'payload': '$ARFSR,r17AWOM1,000018,A,0017,0,0000,,,-127,*62', 'metadata': 'g:1-2-2977,n:24154,s:r17AWOM1,c:1434585618*5F'}, {'sentence_tot': '2', 'group': '2-2-2977', 'line_num': '24155', 'dest': None, 'text': None, 'rcvr': None, 'text_date': None, 'group_id': '2977', 'tag_checksum': '19', 'sentence_num': '2', 'time': None, 'rel_time': None, 'quality': None, 'payload': '$ARFSR,r17AWOM1,000018,B,0021,0,0000,,,-126,*65', 'metadata': 'g:2-2-2977,n:24155*19'}], 'times': [1434585618, None], 'lines': ['\\g:1-2-2977,n:24154,s:r17AWOM1,c:1434585618*5F\\$ARFSR,r17AWOM1,000018,A,0017,0,0000,,,-127,*62', '\\g:2-2-2977,n:24155*19\\$ARFSR,r17AWOM1,000018,B,0021,0,0000,,,-126,*65'], 'line_nums': [1, 2]}
| 1.0 | TAG Block complaint about more than one message - This NMEA TAG Block pair of lines:
\g:1-2-2977,n:24154,s:r17AWOM1,c:1434585618*5F\$ARFSR,r17AWOM1,000018,A,0017,0,0000,,,-127,*62
\g:2-2-2977,n:24155*19\$ARFSR,r17AWOM1,000018,B,0021,0,0000,,,-126,*65
Generates this complaint:
INFO:root:Error: Should get just one message decoded from this: {'matches': [{'sentence_tot': '2', 'group': '1-2-2977', 'line_num': '24154', 'dest': None, 'text': None, 'rcvr': 'r17AWOM1', 'text_date': None, 'group_id': '2977', 'tag_checksum': '5F', 'sentence_num': '1', 'time': '1434585618', 'rel_time': None, 'quality': None, 'payload': '$ARFSR,r17AWOM1,000018,A,0017,0,0000,,,-127,*62', 'metadata': 'g:1-2-2977,n:24154,s:r17AWOM1,c:1434585618*5F'}, {'sentence_tot': '2', 'group': '2-2-2977', 'line_num': '24155', 'dest': None, 'text': None, 'rcvr': None, 'text_date': None, 'group_id': '2977', 'tag_checksum': '19', 'sentence_num': '2', 'time': None, 'rel_time': None, 'quality': None, 'payload': '$ARFSR,r17AWOM1,000018,B,0021,0,0000,,,-126,*65', 'metadata': 'g:2-2-2977,n:24155*19'}], 'times': [1434585618, None], 'lines': ['\\g:1-2-2977,n:24154,s:r17AWOM1,c:1434585618*5F\\$ARFSR,r17AWOM1,000018,A,0017,0,0000,,,-127,*62', '\\g:2-2-2977,n:24155*19\\$ARFSR,r17AWOM1,000018,B,0021,0,0000,,,-126,*65'], 'line_nums': [1, 2]}
INFO:root:Unable to process: {'matches': [{'sentence_tot': '2', 'group': '1-2-2977', 'line_num': '24154', 'dest': None, 'text': None, 'rcvr': 'r17AWOM1', 'text_date': None, 'group_id': '2977', 'tag_checksum': '5F', 'sentence_num': '1', 'time': '1434585618', 'rel_time': None, 'quality': None, 'payload': '$ARFSR,r17AWOM1,000018,A,0017,0,0000,,,-127,*62', 'metadata': 'g:1-2-2977,n:24154,s:r17AWOM1,c:1434585618*5F'}, {'sentence_tot': '2', 'group': '2-2-2977', 'line_num': '24155', 'dest': None, 'text': None, 'rcvr': None, 'text_date': None, 'group_id': '2977', 'tag_checksum': '19', 'sentence_num': '2', 'time': None, 'rel_time': None, 'quality': None, 'payload': '$ARFSR,r17AWOM1,000018,B,0021,0,0000,,,-126,*65', 'metadata': 'g:2-2-2977,n:24155*19'}], 'times': [1434585618, None], 'lines': ['\\g:1-2-2977,n:24154,s:r17AWOM1,c:1434585618*5F\\$ARFSR,r17AWOM1,000018,A,0017,0,0000,,,-127,*62', '\\g:2-2-2977,n:24155*19\\$ARFSR,r17AWOM1,000018,B,0021,0,0000,,,-126,*65'], 'line_nums': [1, 2]}
| test | tag block complaint about more than one message this nmea tag block pair of lines g n s c arfsr a g n arfsr b generates this complaint info root error should get just one message decoded from this matches times lines line nums info root unable to process matches times lines line nums | 1 |
85,209 | 10,431,976,228 | IssuesEvent | 2019-09-17 10:13:01 | sizespectrum/mizer | https://api.github.com/repos/sizespectrum/mizer | opened | Which vignettes to include in package and how | discussion documentation | We are planning to have a large number of vignettes on the mizer website. Some of them will be quite large due to the inclusion of screenshots and plots. We do not want to include all of them in the package, because that would inflate the download size of the package. We expect most users to prefer to look at the online documentation anyway, rather than the vignettes downloaded with the package.
So the issue to be discussed here: which vignettes, if any, do we want to include in the package?
The next question is whether we want to include a vignette in pdf format. I think that would be quite nice for users who still like to print out things. However It will be a pain to maintain the pdf vignette and the website separately. So it would be nice if the pdf vignette could be created from selected vignettes on the website. In principle it is possible to combine vignettes and render to pdf using the method described at https://stackoverflow.com/questions/25824795/how-to-combine-two-rmarkdown-rmd-files-into-a-single-output . However the result does not look perfect and the cross-linking does not work. A better option is to wait for the resolution of r-lib/pkgdown#853 | 1.0 | Which vignettes to include in package and how - We are planning to have a large number of vignettes on the mizer website. Some of them will be quite large due to the inclusion of screenshots and plots. We do not want to include all of them in the package, because that would inflate the download size of the package. We expect most users to prefer to look at the online documentation anyway, rather than the vignettes downloaded with the package.
So the issue to be discussed here: which vignettes, if any, do we want to include in the package?
The next question is whether we want to include a vignette in pdf format. I think that would be quite nice for users who still like to print out things. However It will be a pain to maintain the pdf vignette and the website separately. So it would be nice if the pdf vignette could be created from selected vignettes on the website. In principle it is possible to combine vignettes and render to pdf using the method described at https://stackoverflow.com/questions/25824795/how-to-combine-two-rmarkdown-rmd-files-into-a-single-output . However the result does not look perfect and the cross-linking does not work. A better option is to wait for the resolution of r-lib/pkgdown#853 | non_test | which vignettes to include in package and how we are planning to have a large number of vignettes on the mizer website some of them will be quite large due to the inclusion of screenshots and plots we do not want to include all of them in the package because that would inflate the download size of the package we expect most users to prefer to look at the online documentation anyway rather than the vignettes downloaded with the package so the issue to be discussed here which vignettes if any do we want to include in the package the next question is whether we want to include a vignette in pdf format i think that would be quite nice for users who still like to print out things however it will be a pain to maintain the pdf vignette and the website separately so it would be nice if the pdf vignette could be created from selected vignettes on the website in principle it is possible to combine vignettes and render to pdf using the method described at however the result does not look perfect and the cross linking does not work a better option is to wait for the resolution of r lib pkgdown | 0 |
244,161 | 20,613,382,145 | IssuesEvent | 2022-03-07 10:48:57 | raiden-network/light-client | https://api.github.com/repos/raiden-network/light-client | closed | Update all components in the end-to-end Docker image in regards of the breaking contract changes | test | ## Description
The contracts got updated with breaking changes to make them work on Roll-ups. This also requires all components like the services to adapt to it. To be able to test all workspaces of the LightClient, we need to update all referenced components in the end-to-end Docker image.
One problem here is that the breaking changes require to also get partner nodes that work with these new contracts. According to #3049 this will be the LC CLI. But the CLI will not be released before the SDK and everything got finally updated to work with the new contracts. To resolve this problem, it must be possible to get the local development version into the image when necessary.
## Acceptance criteria
- all referenced components link to versions compatible with the new contracts
- the image can be build with local components
## Tasks
- [ ]
| 1.0 | Update all components in the end-to-end Docker image in regards of the breaking contract changes - ## Description
The contracts got updated with breaking changes to make them work on Roll-ups. This also requires all components like the services to adapt to it. To be able to test all workspaces of the LightClient, we need to update all referenced components in the end-to-end Docker image.
One problem here is that the breaking changes require to also get partner nodes that work with these new contracts. According to #3049 this will be the LC CLI. But the CLI will not be released before the SDK and everything got finally updated to work with the new contracts. To resolve this problem, it must be possible to get the local development version into the image when necessary.
## Acceptance criteria
- all referenced components link to versions compatible with the new contracts
- the image can be build with local components
## Tasks
- [ ]
| test | update all components in the end to end docker image in regards of the breaking contract changes description the contracts got updated with breaking changes to make them work on roll ups this also requires all components like the services to adapt to it so be able to test all workspaces of the lightclient we need to update all references components in the end to end docker image one problem here is that the breaking changes require to also get partner nodes that work with these new contracts according to this will be the lc cli but the cli will not be released before the sdk and everything got finally updated to work with the new contracts to resolve this problem it must be possible to get the local development version into the image when necessary acceptance criteria all referenced components link to versions compatible with the new contracts the image can be build with local components tasks | 1 |
82,604 | 15,651,094,654 | IssuesEvent | 2021-03-23 09:47:38 | OSWeekends/eventpoints-backend | https://api.github.com/repos/OSWeekends/eventpoints-backend | opened | CVE-2021-21353 (High) detected in pug-2.0.0-beta6.tgz, pug-code-gen-1.1.1.tgz | security vulnerability | ## CVE-2021-21353 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>pug-2.0.0-beta6.tgz</b>, <b>pug-code-gen-1.1.1.tgz</b></p></summary>
<p>
<details><summary><b>pug-2.0.0-beta6.tgz</b></p></summary>
<p>A clean, whitespace-sensitive template language for writing HTML</p>
<p>Library home page: <a href="https://registry.npmjs.org/pug/-/pug-2.0.0-beta6.tgz">https://registry.npmjs.org/pug/-/pug-2.0.0-beta6.tgz</a></p>
<p>Path to dependency file: eventpoints-backend/api/package.json</p>
<p>Path to vulnerable library: eventpoints-backend/api/node_modules/pug/package.json</p>
<p>
Dependency Hierarchy:
- pillars-0.7.1.tgz (Root Library)
- templated-0.3.9.tgz
- :x: **pug-2.0.0-beta6.tgz** (Vulnerable Library)
</details>
<details><summary><b>pug-code-gen-1.1.1.tgz</b></p></summary>
<p>Default code-generator for pug. It generates HTML via a JavaScript template function.</p>
<p>Library home page: <a href="https://registry.npmjs.org/pug-code-gen/-/pug-code-gen-1.1.1.tgz">https://registry.npmjs.org/pug-code-gen/-/pug-code-gen-1.1.1.tgz</a></p>
<p>Path to dependency file: eventpoints-backend/api/package.json</p>
<p>Path to vulnerable library: eventpoints-backend/api/node_modules/pug-code-gen/package.json</p>
<p>
Dependency Hierarchy:
- pillars-0.7.1.tgz (Root Library)
- templated-0.3.9.tgz
- pug-2.0.0-beta6.tgz
- :x: **pug-code-gen-1.1.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/OSWeekends/eventpoints-backend/commit/8b1ef684a59fb2d7ff8d97e44852b6f9e2628ad6">8b1ef684a59fb2d7ff8d97e44852b6f9e2628ad6</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Pug is an npm package which is a high-performance template engine. In pug before version 3.0.1, if a remote attacker was able to control the `pretty` option of the pug compiler, e.g. if you spread a user provided object such as the query parameters of a request into the pug template inputs, it was possible for them to achieve remote code execution on the node.js backend. This is fixed in version 3.0.1. This advisory applies to multiple pug packages including "pug", "pug-code-gen". pug-code-gen has a backported fix at version 2.0.3. This advisory is not exploitable if there is no way for un-trusted input to be passed to pug as the `pretty` option, e.g. if you compile templates in advance before applying user input to them, you do not need to upgrade.
<p>Publish Date: 2021-03-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21353>CVE-2021-21353</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-p493-635q-r6gr">https://github.com/advisories/GHSA-p493-635q-r6gr</a></p>
<p>Release Date: 2020-12-23</p>
<p>Fix Resolution: pug -3.0.1, pug-code-gen-2.0.3, pug-code-gen-3.0.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-21353 (High) detected in pug-2.0.0-beta6.tgz, pug-code-gen-1.1.1.tgz - ## CVE-2021-21353 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>pug-2.0.0-beta6.tgz</b>, <b>pug-code-gen-1.1.1.tgz</b></p></summary>
<p>
<details><summary><b>pug-2.0.0-beta6.tgz</b></p></summary>
<p>A clean, whitespace-sensitive template language for writing HTML</p>
<p>Library home page: <a href="https://registry.npmjs.org/pug/-/pug-2.0.0-beta6.tgz">https://registry.npmjs.org/pug/-/pug-2.0.0-beta6.tgz</a></p>
<p>Path to dependency file: eventpoints-backend/api/package.json</p>
<p>Path to vulnerable library: eventpoints-backend/api/node_modules/pug/package.json</p>
<p>
Dependency Hierarchy:
- pillars-0.7.1.tgz (Root Library)
- templated-0.3.9.tgz
- :x: **pug-2.0.0-beta6.tgz** (Vulnerable Library)
</details>
<details><summary><b>pug-code-gen-1.1.1.tgz</b></p></summary>
<p>Default code-generator for pug. It generates HTML via a JavaScript template function.</p>
<p>Library home page: <a href="https://registry.npmjs.org/pug-code-gen/-/pug-code-gen-1.1.1.tgz">https://registry.npmjs.org/pug-code-gen/-/pug-code-gen-1.1.1.tgz</a></p>
<p>Path to dependency file: eventpoints-backend/api/package.json</p>
<p>Path to vulnerable library: eventpoints-backend/api/node_modules/pug-code-gen/package.json</p>
<p>
Dependency Hierarchy:
- pillars-0.7.1.tgz (Root Library)
- templated-0.3.9.tgz
- pug-2.0.0-beta6.tgz
- :x: **pug-code-gen-1.1.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/OSWeekends/eventpoints-backend/commit/8b1ef684a59fb2d7ff8d97e44852b6f9e2628ad6">8b1ef684a59fb2d7ff8d97e44852b6f9e2628ad6</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Pug is an npm package which is a high-performance template engine. In pug before version 3.0.1, if a remote attacker was able to control the `pretty` option of the pug compiler, e.g. if you spread a user provided object such as the query parameters of a request into the pug template inputs, it was possible for them to achieve remote code execution on the node.js backend. This is fixed in version 3.0.1. This advisory applies to multiple pug packages including "pug", "pug-code-gen". pug-code-gen has a backported fix at version 2.0.3. This advisory is not exploitable if there is no way for un-trusted input to be passed to pug as the `pretty` option, e.g. if you compile templates in advance before applying user input to them, you do not need to upgrade.
<p>Publish Date: 2021-03-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21353>CVE-2021-21353</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-p493-635q-r6gr">https://github.com/advisories/GHSA-p493-635q-r6gr</a></p>
<p>Release Date: 2020-12-23</p>
<p>Fix Resolution: pug -3.0.1, pug-code-gen-2.0.3, pug-code-gen-3.0.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in pug tgz pug code gen tgz cve high severity vulnerability vulnerable libraries pug tgz pug code gen tgz pug tgz a clean whitespace sensitive template language for writing html library home page a href path to dependency file eventpoints backend api package json path to vulnerable library eventpoints backend api node modules pug package json dependency hierarchy pillars tgz root library templated tgz x pug tgz vulnerable library pug code gen tgz default code generator for pug it generates html via a javascript template function library home page a href path to dependency file eventpoints backend api package json path to vulnerable library eventpoints backend api node modules pug code gen package json dependency hierarchy pillars tgz root library templated tgz pug tgz x pug code gen tgz vulnerable library found in head commit a href vulnerability details pug is an npm package which is a high performance template engine in pug before version if a remote attacker was able to control the pretty option of the pug compiler e g if you spread a user provided object such as the query parameters of a request into the pug template inputs it was possible for them to achieve remote code execution on the node js backend this is fixed in version this advisory applies to multiple pug packages including pug pug code gen pug code gen has a backported fix at version this advisory is not exploitable if there is no way for un trusted input to be passed to pug as the pretty option e g if you compile templates in advance before applying user input to them you do not need to upgrade publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution pug pug code gen pug code gen step up your open source security game with whitesource | 0 |
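The pug advisory above turns on untrusted input reaching the `pretty` compiler option. A short TypeScript sketch of the risky pattern it describes next to a safer variant (the Express route and template file are made-up placeholders, not code from this project):

```typescript
import express from "express";
import pug from "pug";

const app = express();

app.get("/card", (req, res) => {
  // Risky (pug < 3.0.1 / pug-code-gen < 2.0.3): spreading untrusted input into the
  // render options lets a request supply `pretty`, which the advisory reports can
  // lead to remote code execution.
  // const html = pug.renderFile("card.pug", { ...req.query });

  // Safer: pass only the specific locals you intend, so reserved compiler options
  // such as `pretty` cannot be injected from the request.
  const html = pug.renderFile("card.pug", { name: String(req.query.name ?? "") });
  res.send(html);
});

app.listen(3000);
```

As the advisory notes, templates compiled ahead of time with trusted options are not affected.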
348,712 | 31,711,218,275 | IssuesEvent | 2023-09-09 09:45:07 | hwicode/schedule | https://api.github.com/repos/hwicode/schedule | closed | [tag] Implement APIs for fetching all tags and for looking up tags by tag name | feature test concern | ## Description
- When looking up tags by name, append '%' after the keyword so that the query can use the index
 | 1.0 | [tag] Implement APIs for fetching all tags and for looking up tags by tag name - ## Description
- When looking up tags by name, append '%' after the keyword so that the query can use the index
 | test | implement apis for fetching all tags and for looking up tags by tag name description when looking up tags by name append after the keyword so that the query can use the index | 1
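The note above relies on the fact that a trailing wildcard (`keyword%`) still lets the database use an index on the tag name, whereas a leading wildcard would force a scan. A generic TypeScript sketch of that lookup (the `Db` interface and table name are placeholders, not the hwicode/schedule implementation):

```typescript
// Tag lookup that appends '%' after the keyword so the name index can be used.
interface Db {
  query<T>(sql: string, params: unknown[]): Promise<T[]>;
}

async function findTagsByName(db: Db, keyword: string): Promise<{ id: number; name: string }[]> {
  // LIKE 'keyword%' is index-friendly; LIKE '%keyword%' generally is not.
  return db.query<{ id: number; name: string }>(
    "SELECT id, name FROM tag WHERE name LIKE ?",
    [keyword + "%"],
  );
}
```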
289,631 | 25,000,651,346 | IssuesEvent | 2022-11-03 07:31:59 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | Dictionary is always empty when defined as a member variable. | bug topic:gdscript needs testing | ### Godot version
v4.0.beta4.mono.official [e6751549c]
### System information
Fedora Linux 37 (Workstation Edition)(beta) 64 bit
### Issue description
### Expected behaviour
The variable `dict` contains a dictionary containing the specified keys and values.
### Actual behaviour
The variable `dict` is an empty dictionary.
`print(dict)` prints `"{ }"`
Note: This is not an issue of `print()`, `dict` doesn't contain anything when accessing a key.
This is fixed when adding the `@onready` annotation to the definition like so `@onready var dict = {"hello": 1}`
### Steps to reproduce
1. Create a script which extends Control
2. Define a member variable and store a Dictionary containing some values
`var dict = {"hello": 1}`
4. Check the value of the variable
`print(dict)`
### Minimal reproduction project
_No response_ | 1.0 | Dictionary is always empty when defined as a member variable. - ### Godot version
v4.0.beta4.mono.official [e6751549c]
### System information
Fedora Linux 37 (Workstation Edition)(beta) 64 bit
### Issue description
### Expected behaviour
The variable `dict` contains a dictionary containing the specified keys and values.
### Actual behaviour
The variable `dict` is an empty dictionary.
`print(dict)` prints `"{ }"`
Note: This is not an issue of `print()`, `dict` doesn't contain anything when accessing a key.
This is fixed when adding the `@onready` annotation to the definition like so `@onready var dict = {"hello": 1}`
### Steps to reproduce
1. Create a script which extends Control
2. Define a member variable and store a Dictionary containing some values
`var dict = {"hello": 1}`
4. Check the value of the variable
`print(dict)`
### Minimal reproduction project
_No response_ | test | dictionary is always empty when defined as a member variable godot version mono official system information fedora linux workstation edition beta bit issue description expected behaviour the variable dict contains a dictionary containing the specified keys and values actual behaviour the variable dict is an empty dictionary print dict prints note this is not an issue of print dict doesn t contain anything when accessing a key this is fixed when adding the onready annotation to the definition like so onready var dict hello steps to reproduce create a script which extends control define a member variable and store a dictionary containing some values var dict hello check the value of the variable print dict minimal reproduction project no response | 1 |
284,826 | 24,624,313,993 | IssuesEvent | 2022-10-16 10:09:53 | roeszler/reabook | https://api.github.com/repos/roeszler/reabook | closed | User Story: Book for a Property Viewing | feature test | As a **user**, I can **easily understand and book a time to view a property at a particular time and date** so that **I can easily manage my plans to buy / rent a new real estate space**.
| 1.0 | User Story: Book for a Property Viewing - As a **user**, I can **easily understand and book a time to view a property at a particular time and date** so that **I can easily manage my plans to buy / rent a new real estate space**.
| test | user story book for a property viewing as a user i can easily understand and book a time to view a property at a particular time and date so that i can easily manage my plans to buy rent a new real estate space | 1 |
63,723 | 26,494,989,378 | IssuesEvent | 2023-01-18 04:16:10 | ballerina-platform/openapi-tools | https://api.github.com/repos/ballerina-platform/openapi-tools | closed | Support for response mapping for service generation when `additionalProperties` has enabled without `properties` field in response | Type/Improvement Points/4 Service OpenAPIToBallerina Reason/Other | **Description:**
<!-- Give a brief description of the improvement -->
example
```openapi
/store/inventory:
get:
tags:
- store
summary: Returns pet inventories by status
description: Returns a map of status codes to quantities
operationId: getInventory
responses:
"200":
description: successful operation
content:
application/json:
schema:
type: object
additionalProperties:
type: integer
format: int32
```
- [x] Scenarios 01: Response has inline object and additional properties
```openapi
/store/inventory:
get:
tags:
- store
summary: Returns pet inventories by status
description: Returns a map of status codes to quantities
operationId: getInventory
responses:
"200":
description: successful operation
content:
application/json:
schema:
type: object
properties:
name:
type: string
age:
type: integer
additionalProperties:
type: integer
format: int32
```
Here the swagger parser flattens the inline object into a named object under the component schema
```ballerina
resource function get store/inventory() returns Inline_response_200 {
}
```
- [x] Scenario 02 : Return has only additional properties without main object properties
```openapi
/store/inventory:
get:
tags:
- store
operationId: getInventory02
responses:
"200":
description: successful operation
content:
application/json:
schema:
type: object
additionalProperties:
type: object
properties:
id:
type: integer
age:
type: integer
text/plain:
schema:
type: string
```
These types of scenarios are not flattened by the swagger parser; therefore, it will generate an inline record.
```ballerina
resource function put store/inventory() returns StoreInventoryResponse|string {
}
public type StoreInventoryResponse record {|
Inline_response_map200...;
|};
public type Inline_response_map200 record {|
int id?;
int age?;
|};
```
- [x] Scenario 03 : Return has only additional properties with nested additional properties (complex scenarios)
```openapi
/store/inventory:
get:
tags:
- store
summary: Returns pet inventories by status
description: Returns a map of status codes to quantities
operationId: getInventory03
responses:
"200":
description: successful operation
content:
application/json:
schema:
type: object
additionalProperties:
type: object
additionalProperties:
type: object
properties:
name:
type: string
place:
type: string
```
This also returns an inline record.
```ballerina
resource function get store/inventory() returns StoreInventoryResponse {
}
public type StoreInventoryResponse record {|
record {|record {|string name?; string place?;|}...;|}...;
|};
```
- [x] Scenario 04: Return with additional property as reference
```openapi
/store/inventory:
get:
tags:
- store
summary: Returns pet inventories by status
description: Returns a map of status codes to quantities
operationId: getInventory04
responses:
"200":
description: successful operation
content:
application/json:
schema:
type: object
additionalProperties:
$ref: "#/components/schemas/User"
```
```ballerina
resource function get store/inventory() returns StoreInventoryResponse {
}
public type User record {|
string name?;
int id?;
|};
```
- [x] Scenario 05: Return has a status code which is not 200 and has additional properties.
```openapi
/store/inventory05:
get:
tags:
- store
operationId: getInventory05
responses:
"400":
description: successful operation
content:
application/json:
schema:
type: object
additionalProperties:
$ref: "#/components/schemas/User"
components:
schemas:
User:
properties:
name:
type: string
id:
type: integer
```
Under this issue https://github.com/ballerina-platform/openapi-tools/issues/990 we are supposed to add a return inline record when it has a status code above the 200 range.
```ballerina
resource function get store/inventory05() returns BadRequestStoreInventory05Response {
}
public type StoreInventory05Response record {|
*http:BadRequest;
record {|User...;|} body;
|};
public type StoreInventory05Response record {|
User...;
|};
public type User record {|
string name?;
int id?;
|};
```
Therefore, for scenarios 02-04 above, we would like input on whether to keep the return type as an inline record or move to a named record by adding a record name built from path + Response + _(count of records created) -> `StoreInventoryResponse_1`
based on the discussion https://github.com/ballerina-platform/openapi-tools/issues/1109#issuecomment-1330325072 | 1.0 | Support for response mapping for service generation when `additionalProperties` has enabled without `properties` field in response - **Description:**
<!-- Give a brief description of the improvement -->
example
```openapi
/store/inventory:
get:
tags:
- store
summary: Returns pet inventories by status
description: Returns a map of status codes to quantities
operationId: getInventory
responses:
"200":
description: successful operation
content:
application/json:
schema:
type: object
additionalProperties:
type: integer
format: int32
```
- [x] Scenarios 01: Response has inline object and additional properties
```openapi
/store/inventory:
get:
tags:
- store
summary: Returns pet inventories by status
description: Returns a map of status codes to quantities
operationId: getInventory
responses:
"200":
description: successful operation
content:
application/json:
schema:
type: object
properties:
name:
type: string
age:
type: integer
additionalProperties:
type: integer
format: int32
```
Here the swagger parser flattens the inline object into a named object under the component schema
```ballerina
resource function get store/inventory() returns Inline_response_200 {
}
```
- [x] Scenario 02 : Return has only additional properties without main object properties
```openapi
/store/inventory:
get:
tags:
- store
operationId: getInventory02
responses:
"200":
description: successful operation
content:
application/json:
schema:
type: object
additionalProperties:
type: object
properties:
id:
type: integer
age:
type: integer
text/plain:
schema:
type: string
```
These types of scenarios are not flattened by the swagger parser; therefore, it will generate an inline record.
```ballerina
resource function put store/inventory() returns StoreInventoryResponse|string {
}
public type StoreInventoryResponse record {|
Inline_response_map200...;
|};
public type Inline_response_map200 record {|
int id?;
int age?;
|};
```
- [x] Scenario 03 : Return has only additional properties with nested additional properties (complex scenarios)
```openapi
/store/inventory:
get:
tags:
- store
summary: Returns pet inventories by status
description: Returns a map of status codes to quantities
operationId: getInventory03
responses:
"200":
description: successful operation
content:
application/json:
schema:
type: object
additionalProperties:
type: object
additionalProperties:
type: object
properties:
name:
type: string
place:
type: string
```
This also returns an inline record.
```ballerina
resource function get store/inventory() returns StoreInventoryResponse {
}
public type StoreInventoryResponse record {|
record {|record {|string name?; string place?;|}...;|}...;
|};
```
- [x] Scenario 04: Return with additional property as reference
```openapi
/store/inventory:
get:
tags:
- store
summary: Returns pet inventories by status
description: Returns a map of status codes to quantities
operationId: getInventory04
responses:
"200":
description: successful operation
content:
application/json:
schema:
type: object
additionalProperties:
$ref: "#/components/schemas/User"
```
```ballerina
resource function get store/inventory() returns StoreInventoryResponse {
}
public type User record {|
string name?;
int id?;
|};
```
- [x] Scenario 05: Return has a status code which is not 200 and has additional properties.
```openapi
/store/inventory05:
get:
tags:
- store
operationId: getInventory05
responses:
"400":
description: successful operation
content:
application/json:
schema:
type: object
additionalProperties:
$ref: "#/components/schemas/User"
components:
schemas:
User:
properties:
name:
type: string
id:
type: integer
```
Under this issue https://github.com/ballerina-platform/openapi-tools/issues/990 we are supposed to add a return inline record when it has a status code above the 200 range.
```ballerina
resource function get store/inventory05() returns BadRequestStoreInventory05Response {
}
public type StoreInventory05Response record {|
*http:BadRequest;
record {|User...;|} body;
|};
public type StoreInventory05Response record {|
User...;
|};
public type User record {|
string name?;
int id?;
|};
```
Therefore, for scenarios 02-04 above, we would like input on whether to keep the return type as an inline record or move to a named record by adding a record name built from path + Response + _(count of records created) -> `StoreInventoryResponse_1`
based on the discussion https://github.com/ballerina-platform/openapi-tools/issues/1109#issuecomment-1330325072 | non_test | support for response mapping for service generation when additionalproperties has enabled without properties field in response description example openapi store inventory get tags store summary returns pet inventories by status description returns a map of status codes to quantities operationid getinventory responses description successful operation content application json schema type object additionalproperties type integer format scenarios response has inline object and additional properties openapi store inventory get tags store summary returns pet inventories by status description returns a map of status codes to quantities operationid getinventory responses description successful operation content application json schema type object properties name type string age type integer additionalproperties type integer format here swagger parser flattens the inline object into a name object under the component schema ballerina resource function get store inventory returns inline response scenario return has only additional properties without main object properties openapi store inventory get tags store operationid responses description successful operation content application json schema type object additionalproperties type object properties id type integer age type integer text plain schema type string these types of scenarios are not flattened by the swagger parser therefore it will generate an inline record ballerina resource function put store inventory returns storeinventoryresponse string public type storeinventoryresponse record inline response public type inline response record int id int age scenario return has only additional properties with nested additional properties complex scenarios openapi store inventory get tags store summary returns pet inventories by status description returns a map of status codes to quantities operationid responses description successful operation content application json schema type object additionalproperties type object additionalproperties type object properties name type string place type string this also returns an inline record ballerina resource function get store inventory returns storeinventoryresponse public type storeinventoryresponse record record record string name string place scenario return with additional property as reference openapi store inventory get tags store summary returns pet inventories by status description returns a map of status codes to quantities operationid responses description successful operation content application json schema type object additionalproperties ref components schemas user ballerina resource function get store inventory returns storeinventoryresponse public type user record string name int id scenario return has a status code which is not and has additional properties openapi store get tags store operationid responses description successful operation content application json schema type object additionalproperties ref components schemas user components schemas user properties name type string id type integer under this issue we are supposed to add a return inline record when it has a status code above the range ballerina resource function get store returns public type record http badrequest record user body public type record user public type user record string name int id therefore would like to take an idea for the above scenario with keep it as an inline record or move to name 
record by adding some record name with path response count of record created storeinventoryresponse based on the discussion | 0 |
823,910 | 31,073,403,660 | IssuesEvent | 2023-08-12 07:09:36 | zkSNACKs/WalletWasabi | https://api.github.com/repos/zkSNACKs/WalletWasabi | opened | RC doesn't work after updating from release? | debug priority | I asked this person to use the RC and it seems he was not able to update: https://github.com/molnard/WalletWasabi/releases/tag/v2.0.4rc3
<img width="304" alt="image" src="https://github.com/zkSNACKs/WalletWasabi/assets/9156103/6d429cbe-3f52-4987-a906-6b231724dbf9">
We need to (1) test, and the suspects are (2) TurboSync and (3) the database.
@turbolay @kiminuo | 1.0 | RC doesn't work after updating from release? - I asked this person to use the RC and it seems he was not able to update: https://github.com/molnard/WalletWasabi/releases/tag/v2.0.4rc3
<img width="304" alt="image" src="https://github.com/zkSNACKs/WalletWasabi/assets/9156103/6d429cbe-3f52-4987-a906-6b231724dbf9">
We need to (1) test, and the suspects are (2) TurboSync and (3) the database.
@turbolay @kiminuo | non_test | rc doesn t work after updating from release i asked this person to use the rc and it seems he was not able to update img width alt image src we need to test and the suspect turbosync and database turbolay kiminuo | 0 |
294,984 | 22,172,660,312 | IssuesEvent | 2022-06-06 03:54:53 | NorthDecoder/nasaMining | https://api.github.com/repos/NorthDecoder/nasaMining | opened | Fix spelling error in inspecting-the-json-structure.md | documentation | Change
FROM:
Inspecing the structure of nasa.json
TO:
Inspecting the structure of nasa.json | 1.0 | Fix spelling error in inspecting-the-json-structure.md - Change
FROM:
Inspecing the structure of nasa.json
TO:
Inspecting the structure of nasa.json | non_test | fix spelling error in inspecting the json structure md change from inspecing the structure of nasa json to inspecting the structure of nasa json | 0 |
234,021 | 19,090,980,989 | IssuesEvent | 2021-11-29 12:04:41 | SAP/ui5-webcomponents | https://api.github.com/repos/SAP/ui5-webcomponents | closed | Slider and RangeSlider: Handle's focus outline looks strange in Firefox | Low Prio TOPIC RL 1.0 Release Testing | ### **Bug Description**
Handle's focus outline in Firefox is round.
### **Expected Behavior**
The focus outline is the same and correct among all browsers supported.
### **Steps to Reproduce**
1. Go to https://sap.github.io/ui5-webcomponents/master/playground/main/pages/Slider/?sap-ui-theme=sap_fiori_3
2. Open the sample in Firefox
3. Click over a handle
<img width="148" alt="Screenshot 2021-08-26 at 11 16 51" src="https://user-images.githubusercontent.com/38278268/130927172-aa4164d0-5079-4632-9075-13cf91e67411.png">
### **Context**
- UI5 Web Components version: v1.0.0-rc.16
- OS/Platform: macOS
- Browser: Firefox version 91.0.2
- Affected component: ui5-slider, ui5-range-slider
| 1.0 | Slider and RangeSlider: Handle's focus outline looks strange in Firefox - ### **Bug Description**
Handle's focus outline in Firefox is round.
### **Expected Behavior**
The focus outline is the same and correct among all browsers supported.
### **Steps to Reproduce**
1. Go to https://sap.github.io/ui5-webcomponents/master/playground/main/pages/Slider/?sap-ui-theme=sap_fiori_3
2. Open the sample in Firefox
3. Click over a handle
<img width="148" alt="Screenshot 2021-08-26 at 11 16 51" src="https://user-images.githubusercontent.com/38278268/130927172-aa4164d0-5079-4632-9075-13cf91e67411.png">
### **Context**
- UI5 Web Components version: v1.0.0-rc.16
- OS/Platform: macOS
- Browser: Firefox version 91.0.2
- Affected component: ui5-slider, ui5-range-slider
| test | slider and rangeslider handle s focus outline looks strange in firefox bug description handle s focus outline in firefox is round expected behavior the focus outline is the same and correct among all browsers supported steps to reproduce go to оpen the sample in firefox click over a handle img width alt screenshot at src context web components version rc os platform macos browser firefox version affected component slider range slider | 1 |
42,444 | 5,437,779,418 | IssuesEvent | 2017-03-06 08:25:41 | mautic/mautic | https://api.github.com/repos/mautic/mautic | closed | [Enhancement] Fix URL Shortening Tooltip for Bitly.com | Bug Ready To Test | What type of report is this: | Enhancement |
## Description:
In the tooltip for URL shorteners, it gives an example of Bitly shortened link:
https://api-ssl.bitly.com/v3/shorten?access_token=[ACCESS_TOKEN]&format=txt&longurl=
However, that last parameter, 'longurl' is incorrect. It should be 'longUrl' (with a capital 'U').
It will not work if it's not capitalized properly (at least in my testing).
Currently, the tooltip is the only source of documentation for URL shortening. Maybe someone can go in and correct the parameter.
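For reference, the casing can be verified by calling the endpoint directly. The sketch below uses Python's `requests` library and a placeholder token; both are illustrative assumptions and not part of the original report.
```python
import requests

# Placeholder value; substitute a real Bitly access token.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

# Note the capital 'U' in 'longUrl'; the lowercase 'longurl' shown in the tooltip does not work.
response = requests.get(
    "https://api-ssl.bitly.com/v3/shorten",
    params={
        "access_token": ACCESS_TOKEN,
        "format": "txt",
        "longUrl": "https://example.com/a/very/long/page",
    },
)

# With format=txt the response body is the shortened URL as plain text.
print(response.text)
```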
| 1.0 | [Enhancement] Fix URL Shortening Tooltip for Bitly.com - What type of report is this: | Enhancement |
## Description:
In the tooltip for URL shorteners, it gives an example of Bitly shortened link:
https://api-ssl.bitly.com/v3/shorten?access_token=[ACCESS_TOKEN]&format=txt&longurl=
However, that last parameter, 'longurl' is incorrect. It should be 'longUrl' (with a capital 'U').
It will not work if it's not capitalized properly (at least in my testing).
Currently, the tooltip is the only source of documentation for URL shortening. Maybe someone can go in and correct the parameter.
| test | fix url shortening tooltip for bitly com what type of report is this enhancement description in the tooltip for url shorteners it gives an example of bitly shortened link format txt longurl however that last parameter longurl is incorrect it should be longurl with a capital u it will not work if it s not capitalize properly at least in my testing currently the tooltip is the only source of documentation for url shortening maybe someone can go in and correct the parameter | 1 |
182,763 | 14,149,384,575 | IssuesEvent | 2020-11-11 00:43:13 | openzfs/zfs | https://api.github.com/repos/openzfs/zfs | closed | lockdep: possible recursive locking () at vfs_unlink | Component: Test Suite Status: Inactive Status: Stale | Full lockdep output:
```
[ INFO: possible recursive locking detected ]
2.6.32-504.16.2.1chaos.ch5.3.x86_64.debug #1
---------------------------------------------
zpool/17452 is trying to acquire lock:
(&sb->s_type->i_mutex_key#10){+.+.+.}, at: [<ffffffff811bab2e>] vfs_unlink+0x5e/0xf0
but task is already holding lock:
(&sb->s_type->i_mutex_key#10){+.+.+.}, at: [<ffffffffa01d7fa6>] spl_kern_path_locked+0x106/0x1b0 [spl]
other info that might help us debug this:
2 locks held by zpool/17452:
#0: (&spa_namespace_lock){+.+.+.}, at: [<ffffffffa02e9b56>] zfs_ioc_pool_set_props+0x116/0x220 [zfs]
#1: (&sb->s_type->i_mutex_key#10){+.+.+.}, at: [<ffffffffa01d7fa6>] spl_kern_path_locked+0x106/0x1b0 [spl]
stack backtrace:
Pid: 17452, comm: zpool Not tainted 2.6.32-504.16.2.1chaos.ch5.3.x86_64.debug #1
Call Trace:
[<ffffffff810bfe90>] ? __lock_acquire+0x11b0/0x1560
[<ffffffff810c02e4>] ? lock_acquire+0xa4/0x120
[<ffffffff811bab2e>] ? vfs_unlink+0x5e/0xf0
[<ffffffff811bab2e>] ? vfs_unlink+0x5e/0xf0
[<ffffffff8155f24c>] ? mutex_lock_nested+0x5c/0x3b0
[<ffffffff811bab2e>] ? vfs_unlink+0x5e/0xf0
[<ffffffff811b9273>] ? generic_permission+0x23/0xb0
[<ffffffff8125089f>] ? security_inode_permission+0x1f/0x30
[<ffffffff811b9ce7>] ? inode_permission+0xa7/0x100
[<ffffffff811bab2e>] ? vfs_unlink+0x5e/0xf0
[<ffffffffa01d825b>] ? vn_remove+0x6b/0x110 [spl]
[<ffffffffa02a456e>] ? spa_config_sync+0x49e/0x620 [zfs]
[<ffffffffa02e9b95>] ? zfs_ioc_pool_set_props+0x155/0x220 [zfs]
[<ffffffffa02ea302>] ? zfsdev_ioctl+0x562/0x620 [zfs]
[<ffffffff811c1532>] ? vfs_ioctl+0x22/0xa0
[<ffffffff810be52d>] ? trace_hardirqs_on+0xd/0x10
[<ffffffff811c16d4>] ? do_vfs_ioctl+0x84/0x590
[<ffffffff8100baf5>] ? retint_swapgs+0x13/0x1b
[<ffffffff811c1c61>] ? sys_ioctl+0x81/0xa0
[<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
/* NOTE
* Lock actually taken in spl_inode_lock and released in _unlock.
* Both are macros defined in spl include/linux/file_compat.h
*/
```
| 1.0 | lockdep: possible recursive locking () at vfs_unlink - Full lockdep output:
```
[ INFO: possible recursive locking detected ]
2.6.32-504.16.2.1chaos.ch5.3.x86_64.debug #1
---------------------------------------------
zpool/17452 is trying to acquire lock:
(&sb->s_type->i_mutex_key#10){+.+.+.}, at: [<ffffffff811bab2e>] vfs_unlink+0x5e/0xf0
but task is already holding lock:
(&sb->s_type->i_mutex_key#10){+.+.+.}, at: [<ffffffffa01d7fa6>] spl_kern_path_locked+0x106/0x1b0 [spl]
other info that might help us debug this:
2 locks held by zpool/17452:
#0: (&spa_namespace_lock){+.+.+.}, at: [<ffffffffa02e9b56>] zfs_ioc_pool_set_props+0x116/0x220 [zfs]
#1: (&sb->s_type->i_mutex_key#10){+.+.+.}, at: [<ffffffffa01d7fa6>] spl_kern_path_locked+0x106/0x1b0 [spl]
stack backtrace:
Pid: 17452, comm: zpool Not tainted 2.6.32-504.16.2.1chaos.ch5.3.x86_64.debug #1
Call Trace:
[<ffffffff810bfe90>] ? __lock_acquire+0x11b0/0x1560
[<ffffffff810c02e4>] ? lock_acquire+0xa4/0x120
[<ffffffff811bab2e>] ? vfs_unlink+0x5e/0xf0
[<ffffffff811bab2e>] ? vfs_unlink+0x5e/0xf0
[<ffffffff8155f24c>] ? mutex_lock_nested+0x5c/0x3b0
[<ffffffff811bab2e>] ? vfs_unlink+0x5e/0xf0
[<ffffffff811b9273>] ? generic_permission+0x23/0xb0
[<ffffffff8125089f>] ? security_inode_permission+0x1f/0x30
[<ffffffff811b9ce7>] ? inode_permission+0xa7/0x100
[<ffffffff811bab2e>] ? vfs_unlink+0x5e/0xf0
[<ffffffffa01d825b>] ? vn_remove+0x6b/0x110 [spl]
[<ffffffffa02a456e>] ? spa_config_sync+0x49e/0x620 [zfs]
[<ffffffffa02e9b95>] ? zfs_ioc_pool_set_props+0x155/0x220 [zfs]
[<ffffffffa02ea302>] ? zfsdev_ioctl+0x562/0x620 [zfs]
[<ffffffff811c1532>] ? vfs_ioctl+0x22/0xa0
[<ffffffff810be52d>] ? trace_hardirqs_on+0xd/0x10
[<ffffffff811c16d4>] ? do_vfs_ioctl+0x84/0x590
[<ffffffff8100baf5>] ? retint_swapgs+0x13/0x1b
[<ffffffff811c1c61>] ? sys_ioctl+0x81/0xa0
[<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
/* NOTE
* Lock actually taken in spl_inode_lock and released in _unlock.
* Both are macros defined in spl include/linux/file_compat.h
*/
```
| test | lockdep possible recursive locking at vfs unlink full lockdep output debug zpool is trying to acquire lock sb s type i mutex key at vfs unlink but task is already holding lock sb s type i mutex key at spl kern path locked other info that might help us debug this locks held by zpool spa namespace lock at zfs ioc pool set props sb s type i mutex key at spl kern path locked stack backtrace pid comm zpool not tainted debug call trace lock acquire lock acquire vfs unlink vfs unlink mutex lock nested vfs unlink generic permission security inode permission inode permission vfs unlink vn remove spa config sync zfs ioc pool set props zfsdev ioctl vfs ioctl trace hardirqs on do vfs ioctl retint swapgs sys ioctl system call fastpath note lock actually taken in spl inode lock and released in unlock both are macros defined in spl include linux file compat h | 1 |
2,322 | 3,391,966,353 | IssuesEvent | 2015-11-30 17:33:04 | Netflix/falcor | https://api.github.com/repos/Netflix/falcor | closed | BindSync very slow | High relevance performance | Bind sync takes longer than for a get for the preload paths. Reported by @steveorsomethin | True | BindSync very slow - Bind sync takes longer than for a get for the preload paths. Reported by @steveorsomethin | non_test | bindsync very slow bind sync takes longer than for a get for the preload paths reported by steveorsomethin | 0 |
222,354 | 17,408,512,501 | IssuesEvent | 2021-08-03 09:15:20 | Automattic/wp-calypso | https://api.github.com/repos/Automattic/wp-calypso | opened | Empathy Testing: Text beside icons are not aligned properly | Empathy Testing [Type] Bug | <!-- Thanks for contributing to Calypso! Pick a clear title ("Editor: add spell check") and proceed. -->
#### Steps to reproduce
1. Open the Calypso dashboard of a site.
2. Visit Appearance > Themes
3. You will notice the text beside icons is not properly aligned.
#### What I expected
The text should be vertically aligned with its icon.
#### What happened instead
The text was not vertically aligned with its icon.
#### Browser / OS version
Google Chrome 92.0.4515.131 (Official Build) (x86_64)
macOS Big Sur 11.3 (20E232)
#### Screenshot / Video

| 1.0 | Empathy Testing: Text beside icons are not aligned properly - <!-- Thanks for contributing to Calypso! Pick a clear title ("Editor: add spell check") and proceed. -->
#### Steps to reproduce
1. Open the Calypso dashboard of a site.
2. Visit Appearance > Themes
3. You will notice the text beside icons is not properly aligned.
#### What I expected
The text should be vertically aligned with its icon.
#### What happened instead
The text was not vertically aligned with its icon.
#### Browser / OS version
Google Chrome 92.0.4515.131 (Official Build) (x86_64)
macOS Big Sur 11.3 (20E232)
#### Screenshot / Video

| test | empathy testing text beside icons are not aligned properly steps to reproduce open the calypso dashboard of a site visit appearance themes you will notice the text beside icons is not properly aligned what i expected the text should be vertically aligned with its icon what happened instead the text was not vertically aligned with its icon browser os version google chrome official build macos big sur screenshot video | 1 |
65,150 | 19,185,597,180 | IssuesEvent | 2021-12-05 05:52:02 | department-of-veterans-affairs/va.gov-cms | https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms | opened | Lovell Federal health care defects - h1, breadcrumb | Defect Needs refining | ## Describe the defect
A clear and concise description of what the bug is.
## To Reproduce
1. Go to https://va.gov/lovell-federal-health-care
2. Note that it reads VA Lovell Federal health care for the h1, and Lovell FHCC for the breadcrumb
## Expected behavior
- [ ] It should read Lovell Federal health care in both places
## Screenshots
<details><summary>FE</summary>
</details>
<details><summary>CMS</summary>
These all need to read "Lovell Federal health care".
<img width="596" alt="VA_Lovell_Federal_health_care___Veterans_Affairs" src="https://user-images.githubusercontent.com/643678/144732380-ac07f71c-5f8d-436e-bbac-2dfe2f9dd57c.png">
</details>
## Additional context
Add any other context about the problem here. Reach out to the Product Managers to determine if it should be escalated as critical (prevents users from accomplishing their work with no known workaround and needs to be addressed within 2 business days).
## Desktop (please complete the following information if relevant, or delete)
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
## Labels
(You can delete this section once it's complete)
- [x] Issue type (red) (defaults to "Defect")
- [ ] CMS subsystem (green)
- [ ] CMS practice area (blue)
- [x] CMS workstream (orange) (not needed for bug tickets)
- [ ] CMS-supported product (black)
### CMS Team
Please leave only the team that will do this work selected. If you're not sure, it's fine to leave both selected.
- [x] `Platform CMS Team`
- [x] `Sitewide CMS Team`
| 1.0 | Lovell Federal health care defects - h1, breadcrumb - ## Describe the defect
A clear and concise description of what the bug is.
## To Reproduce
1. Go to https://va.gov/lovell-federal-health-care
2. Note that it reads VA Lovell Federal health care for the h1, and Lovell FHCC for the breadcrumb
## Expected behavior
- [ ] It should read Lovell Federal health care in both places
## Screenshots
<details><summary>FE</summary>
</details>
<details><summary>CMS</summary>
These all need to read "Lovell Federal health care".
<img width="596" alt="VA_Lovell_Federal_health_care___Veterans_Affairs" src="https://user-images.githubusercontent.com/643678/144732380-ac07f71c-5f8d-436e-bbac-2dfe2f9dd57c.png">
</details>
## Additional context
Add any other context about the problem here. Reach out to the Product Managers to determine if it should be escalated as critical (prevents users from accomplishing their work with no known workaround and needs to be addressed within 2 business days).
## Desktop (please complete the following information if relevant, or delete)
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
## Labels
(You can delete this section once it's complete)
- [x] Issue type (red) (defaults to "Defect")
- [ ] CMS subsystem (green)
- [ ] CMS practice area (blue)
- [x] CMS workstream (orange) (not needed for bug tickets)
- [ ] CMS-supported product (black)
### CMS Team
Please leave only the team that will do this work selected. If you're not sure, it's fine to leave both selected.
- [x] `Platform CMS Team`
- [x] `Sitewide CMS Team`
| non_test | lovell federal health care defects breadcrumb describe the defect a clear and concise description of what the bug is to reproduce go to note that it reads va lovell federal health care for the and lovell fhcc for the breadcrumb expected behavior it should read lovell federal health care in both places screenshots fe cms these all need to read lovell federal health care img width alt va lovell federal health care veterans affairs src additional context add any other context about the problem here reach out to the product managers to determine if it should be escalated as critical prevents users from accomplishing their work with no known workaround and needs to be addressed within business days desktop please complete the following information if relevant or delete os browser version labels you can delete this section once it s complete issue type red defaults to defect cms subsystem green cms practice area blue cms workstream orange not needed for bug tickets cms supported product black cms team please leave only the team that will do this work selected if you re not sure it s fine to leave both selected platform cms team sitewide cms team | 0 |
157,750 | 12,389,835,467 | IssuesEvent | 2020-05-20 09:39:26 | elastic/kibana | https://api.github.com/repos/elastic/kibana | opened | Failing test: Firefox XPack UI Functional Tests.x-pack/test/functional/apps/spaces/enter_space·ts - Spaces app Enter Space "after each" hook for "allows user to navigate to different spaces, respecting the configured default route" | failed-test | A test failed on a tracked branch
```
[Error: Timeout of 360000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/dev/shm/workspace/kibana/x-pack/test/functional/apps/spaces/enter_space.ts)]
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.x/5243/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Firefox XPack UI Functional Tests.x-pack/test/functional/apps/spaces/enter_space·ts","test.name":"Spaces app Enter Space \"after each\" hook for \"allows user to navigate to different spaces, respecting the configured default route\"","test.failCount":1}} --> | 1.0 | Failing test: Firefox XPack UI Functional Tests.x-pack/test/functional/apps/spaces/enter_space·ts - Spaces app Enter Space "after each" hook for "allows user to navigate to different spaces, respecting the configured default route" - A test failed on a tracked branch
```
[Error: Timeout of 360000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/dev/shm/workspace/kibana/x-pack/test/functional/apps/spaces/enter_space.ts)]
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.x/5243/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Firefox XPack UI Functional Tests.x-pack/test/functional/apps/spaces/enter_space·ts","test.name":"Spaces app Enter Space \"after each\" hook for \"allows user to navigate to different spaces, respecting the configured default route\"","test.failCount":1}} --> | test | failing test firefox xpack ui functional tests x pack test functional apps spaces enter space·ts spaces app enter space after each hook for allows user to navigate to different spaces respecting the configured default route a test failed on a tracked branch first failure | 1 |
278,034 | 30,702,161,604 | IssuesEvent | 2023-07-27 01:07:43 | Nivaskumark/packages_apps_settings_A10_r33_CVE-2020-0188 | https://api.github.com/repos/Nivaskumark/packages_apps_settings_A10_r33_CVE-2020-0188 | closed | CVE-2020-0416 (High) detected in Settingsandroid-10.0.0_r44 - autoclosed | Mend: dependency security vulnerability | ## CVE-2020-0416 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Settingsandroid-10.0.0_r44</b></p></summary>
<p>
<p>Library home page: <a href=https://android.googlesource.com/platform/packages/apps/Settings>https://android.googlesource.com/platform/packages/apps/Settings</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/packages_apps_settings_A10_r33_CVE-2020-0188/commit/f3df08e4562c757ffb3a076c5898906fdc1afde6">f3df08e4562c757ffb3a076c5898906fdc1afde6</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/src/com/android/settings/widget/AppSwitchPreference.java</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In multiple settings screens, there are possible tapjacking attacks due to an insecure default value. This could lead to local escalation of privilege and permissions with no additional execution privileges needed. User interaction is needed for exploitation. Product: Android. Versions: Android-9 Android-10 Android-11 Android-8.0 Android-8.1. Android ID: A-155288585
<p>Publish Date: 2020-10-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-0416>CVE-2020-0416</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-0416 (High) detected in Settingsandroid-10.0.0_r44 - autoclosed - ## CVE-2020-0416 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Settingsandroid-10.0.0_r44</b></p></summary>
<p>
<p>Library home page: <a href=https://android.googlesource.com/platform/packages/apps/Settings>https://android.googlesource.com/platform/packages/apps/Settings</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/packages_apps_settings_A10_r33_CVE-2020-0188/commit/f3df08e4562c757ffb3a076c5898906fdc1afde6">f3df08e4562c757ffb3a076c5898906fdc1afde6</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/src/com/android/settings/widget/AppSwitchPreference.java</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In multiple settings screens, there are possible tapjacking attacks due to an insecure default value. This could lead to local escalation of privilege and permissions with no additional execution privileges needed. User interaction is needed for exploitation. Product: Android. Versions: Android-9 Android-10 Android-11 Android-8.0 Android-8.1. Android ID: A-155288585
<p>Publish Date: 2020-10-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-0416>CVE-2020-0416</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in settingsandroid autoclosed cve high severity vulnerability vulnerable library settingsandroid library home page a href found in head commit a href found in base branch master vulnerable source files src com android settings widget appswitchpreference java vulnerability details in multiple settings screens there are possible tapjacking attacks due to an insecure default value this could lead to local escalation of privilege and permissions with no additional execution privileges needed user interaction is needed for exploitation product androidversions android android android android android id a publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href step up your open source security game with mend | 0 |
261,303 | 19,705,916,279 | IssuesEvent | 2022-01-12 21:59:32 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | closed | Update Facility Locator technical diagrams with Rails info | backend documentation vsa vsa-facilities planned-work stretch-goal | ## Issue Description
We need to update our technical diagrams with the new rails engine details
---
## Tasks
- [ ] Add updated documentation to diagrams and descriptions found in [Issue Response](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/products/facilities/facility-locator/issue-response.md)
## Acceptance criteria
- Technical documentation includes up to date architectural diagram which incorporates
- Rails engine
- New secure PPMS endpoint
- Existing documentation continues to exist to reflect the history/timeline
| 1.0 | Update Facility Locator technical diagrams with Rails info - ## Issue Description
We need to update our technical diagrams with the new rails engine details
---
## Tasks
- [ ] Add updated documentation to diagrams and descriptions found in [Issue Response](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/products/facilities/facility-locator/issue-response.md)
## Acceptance criteria
- Technical documentation includes up to date architectural diagram which incorporates
- Rails engine
- New secure PPMS endpoint
- Existing documentation continues to exist to reflect the history/timeline
| non_test | update facility locator technical diagrams with rails info issue description we need to update our technical diagrams with the new rails engine details tasks add updated documentation to diagrams and descriptions found in acceptance criteria technical documentation includes up to date architectural diagram which incorporates rails engine new secure ppms endpoint existing documentation continues to exist to reflect the history timeline | 0 |
149,981 | 23,583,485,863 | IssuesEvent | 2022-08-23 09:35:39 | Regalis11/Barotrauma | https://api.github.com/repos/Regalis11/Barotrauma | closed | [XML] Depleted fuel revolver rounds cause no severance | Design Unstable | ### Disclaimers
- [X] I have searched the issue tracker to check if the issue has already been reported.
- [ ] My issue happened while using mods.
### What happened?
While all other ammunition, even DF ammunition, causes some severance chance, DF revolver rounds don't cause any.
```
<Attack structuredamage="10" targetforce="10" itemdamage="15" penetration="0.25">
<Affliction identifier="bleeding" strength="10" />
<Affliction identifier="gunshotwound" strength="35" />
<Affliction identifier="stun" strength="0.4" />
```
### Version
0.18.12.0 | 1.0 | [XML] Depleted fuel revolver rounds cause no severance - ### Disclaimers
- [X] I have searched the issue tracker to check if the issue has already been reported.
- [ ] My issue happened while using mods.
### What happened?
While all other ammunition, even DF ammunition, causes some severance chance, DF revolver rounds don't cause any.
```
<Attack structuredamage="10" targetforce="10" itemdamage="15" penetration="0.25">
<Affliction identifier="bleeding" strength="10" />
<Affliction identifier="gunshotwound" strength="35" />
<Affliction identifier="stun" strength="0.4" />
```
### Version
0.18.12.0 | non_test | depleted fuel revolver rounds cause no severance disclaimers i have searched the issue tracker to check if the issue has already been reported my issue happened while using mods what happened while all other ammunition even df ammunition causes some severance chance df rr don t cause any version | 0 |
71,459 | 9,524,015,637 | IssuesEvent | 2019-04-27 22:29:48 | fga-eps-mds/2019.1-MindsY | https://api.github.com/repos/fga-eps-mds/2019.1-MindsY | closed | Create the product identity. | documentation eps | ## Description
> Produce the record of the product's visual identity.
## Objective
> Create a document specifying the product's visual characteristics.
## Tasks
- [x] Create the document.
| 1.0 | Create the product identity. - ## Description
> Produce the record of the product's visual identity.
## Objective
> Create a document specifying the product's visual characteristics.
## Tasks
- [x] Create the document.
| non_test | criar identidade do produto descrição elaborar o registro da identidade visual do produto objetivo criar um documento especificando as características visuais do produto tarefas criar documento | 0 |
799,184 | 28,301,256,665 | IssuesEvent | 2023-04-10 06:20:38 | bradietilley/pest-stories | https://api.github.com/repos/bradietilley/pest-stories | closed | Add Invokable callbacks | Type: Feature Priority: Medium Status: To Do | I've noticed that while most actions can be configured via the following syntax:
```php
->action('order.create', [
'total' => 123,
])
```
... there are scenarios where you just need a bit more control and a bit more IDE auto-completion, and this could be achieved if the callback argument (`order.create` in the example above) was an instance of the `Callback` class. This is currently possible; however, the callbacks are stored as-is and currently cannot store instance-specific configurations.
The ideal syntax could be:
```php
->action(OrderCreate::make()->withProduct('ABC123')->withShippingTo('AU', 6123)->withPromo('20OFF2023'))
``` | 1.0 | Add Invokable callbacks - I've noticed that while most actions can be configured via the following syntax:
```php
->action('order.create', [
'total' => 123,
])
```
... there are scenarios where you just need a bit more control and a bit more IDE auto-completion, and this could be achieved if the callback argument (`order.create` in the example above) was an instance of the `Callback` class. This is currently possible; however, the callbacks are stored as-is and currently cannot store instance-specific configurations.
The ideal syntax could be:
```php
->action(OrderCreate::make()->withProduct('ABC123')->withShippingTo('AU', 6123)->withPromo('20OFF2023'))
``` | non_test | add invokable callbacks i ve noticed that while most actions can be configured via the following syntax php action order create total there are scenarios where you just need a bit more control and a bit more ide auto completion and this could be achieved if the callback argument order create in the example above was an instance of the callback this is currently possible however the callbacks are stored as is and currently cannot store instance specific configurations the ideal syntax could be php action ordercreate make withproduct withshippingto au withpromo | 0 |
114,584 | 9,743,382,819 | IssuesEvent | 2019-06-03 01:25:55 | catolicasc-social/frontend | https://api.github.com/repos/catolicasc-social/frontend | closed | Adicionar testes iniciais | testes | Iniciar os testes para servir de exemplo ao restante do projeto, assim todos irão poder se basear nesses testes criados. | 1.0 | Adicionar testes iniciais - Iniciar os testes para servir de exemplo ao restante do projeto, assim todos irão poder se basear nesses testes criados. | test | adicionar testes iniciais iniciar os testes para servir de exemplo ao restante do projeto assim todos irão poder se basear nesses testes criados | 1 |
35,073 | 9,534,372,791 | IssuesEvent | 2019-04-30 01:10:53 | mgamlem3/Energy-Dashboard | https://api.github.com/repos/mgamlem3/Energy-Dashboard | closed | Make comparison graph editable for one building using UI | Building Comparison Page Design Task | **Details**
The user should be able to select between different buildings which they want to show on the comparisons graphs. Using a singular building, they should also be able to toggle data such as energy usage for a certain number of years beforehand using features implemented on the web page. This is through a list of radio buttons by the graph to select different outputs.
**Pre-Requisites**
**Estimated Time**
5 hrs
**Difficulty**
- [ ] 3
- [ ] 5
- [ ] 8
- [ ] 13
- [X] 20
- [ ] 40
| 1.0 | Make comparison graph editable for one building using UI - **Details**
The user should be able to select between different buildings which they want to show on the comparisons graphs. Using a singular building, they should also be able to toggle data such as energy usage for a certain number of years beforehand using features implemented on the web page. This is through a list of radio buttons by the graph to select different outputs.
**Pre-Requisites**
**Estimated Time**
5 hrs
**Difficulty**
- [ ] 3
- [ ] 5
- [ ] 8
- [ ] 13
- [X] 20
- [ ] 40
| non_test | make comparison graph editable for one building using ui details the user should be able to select between different buildings which they want to show on the comparisons graphs using a singular building they should also be able to toggle data such as energy usage for a certain number of years beforehand using features implemented on the web page this is through a list of radio buttons by the graph to select different outputs pre requisites estimated time hrs difficulty | 0 |
163,677 | 12,741,407,149 | IssuesEvent | 2020-06-26 06:00:03 | clarity-h2020/csis-technical-validation | https://api.github.com/repos/clarity-h2020/csis-technical-validation | closed | CSIS Acceptance Test by ATOS | testing | As announced [here](https://github.com/clarity-h2020/csis-technical-validation/issues/1), we now start with the CSIS Acceptance Tests.
Please read the [Acceptance Test Specification](https://github.com/clarity-h2020/csis-technical-validation/wiki/Acceptance-Test-Specification) and follow the 11 steps in the **Walkthrough** section to test the [public BETA of CSIS](https://csis.myclimateservice.eu/).
Please report any problems or bugs according to the instructions in the **Giving Feedback** section.
If you have further questions regarding the general test process, please contact @therter and @p-a-s-c-a-l
Initially, it was planned that @DanielRodera and Mario perform the tests. Since Mario isn't available any more, I've assigned @maesbri instead. Feel free to ask anybody else from ATOS to perform the tests. | 1.0 | CSIS Acceptance Test by ATOS - As announced [here](https://github.com/clarity-h2020/csis-technical-validation/issues/1), we now start with the CSIS Acceptance Tests.
Please read the [Acceptance Test Specification](https://github.com/clarity-h2020/csis-technical-validation/wiki/Acceptance-Test-Specification) and follow the 11 steps in the **Walkthrough** section to test the [public BETA of CSIS](https://csis.myclimateservice.eu/).
Please report any problems or bugs according to the instructions in the **Giving Feedback** section.
If you have further questions regarding the general test process, please contact @therter and @p-a-s-c-a-l
Initially, it was planned that @DanielRodera and Mario perform the tests. Since Mario isn't avilable any more, I've assigned @maesbri instead. Feel free to ask anybody else from ATOS to perform the tests. | test | csis acceptance test by atos as announced we now start with the csis acceptance tests please read the and follow the steps in the walkthrough section to test the please report any problems or bugs according to the instructions in the giving feedback section if you have further questions regarding the general test process please contact therter and p a s c a l initially it was planned that danielrodera and mario perform the tests since mario isn t avilable any more i ve assigned maesbri instead feel free to ask anybody else from atos to perform the tests | 1 |
295,968 | 25,518,589,269 | IssuesEvent | 2022-11-28 18:24:16 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | opened | Tor-component app-binary warning on launch with existing profile/install, using `--use-dev-goupdater-url` | bug QA/Yes QA/Test-Plan-Specified regression OS/Desktop OS/macOS-arm64 | <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
<!--Provide a brief description of the issue-->
Tor-component app-binary warning on launch with existing profile/install, using `--use-dev-goupdater-url`
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. install `1.48.5`
2. launch Brave (*without* command-line arguments)
3. click on the `"hamburger"` menu
4. click on `New Private Window with Tor`
5. load `brave.com`
6. wait for it to load
7. shut down Brave
8. relaunch using `--use-dev-goupdater-url`
9. open `brave.com`
10. click on the `Tor` button/icon in the URL bar
## Actual result:
<!--Please add screenshots if needed-->
## Expected result:
(Should?) use the latest `tor` binary, without this warning
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
100% with an existing profile and the above steps; have *not* yet checked new-profile case
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
`1.48.5`
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release? `Yes`
- Can you reproduce this issue with the beta channel? `Unknown`
- Can you reproduce this issue with the nightly channel? `Yes`
/cc @LaurenWags @kjozwiak @btlechowski @GeetaSarvadnya | 1.0 | Tor-component app-binary warning on launch with existing profile/install, using `--use-dev-goupdater-url` - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
<!--Provide a brief description of the issue-->
Tor-component app-binary warning on launch with existing profile/install, using `--use-dev-goupdater-url`
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. install `1.48.5`
2. launch Brave (*without* command-line arguments)
3. click on the `"hamburger"` menu
4. click on `New Private Window with Tor`
5. load `brave.com`
6. wait for it to load
7. shut down Brave
8. relaunch using `--use-dev-goupdater-url`
9. open `brave.com`
10. click on the `Tor` button/icon in the URL bar
## Actual result:
<!--Please add screenshots if needed-->
## Expected result:
(Should?) use the latest `tor` binary, without this warning
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
100% with an existing profile and the above steps; have *not* yet checked new-profile case
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
`1.48.5`
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release? `Yes`
- Can you reproduce this issue with the beta channel? `Unknown`
- Can you reproduce this issue with the nightly channel? `Yes`
/cc @LaurenWags @kjozwiak @btlechowski @GeetaSarvadnya | test | tor component app binary warning on launch with existing profile install using use dev goupdater url have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description tor component app binary warning on launch with existing profile install using use dev goupdater url steps to reproduce install launch brave without command line arguments click on the hamburger menu click on new private window with tor load brave com wait for it to load shut down brave relaunch using use dev goupdater url open brave com click on the tor button icon in the url bar actual result expected result should use the latest tor binary without this warning reproduces how often with an existing profile and the above steps have not yet checked new profile case brave version brave version info version channel information can you reproduce this issue with the current release yes can you reproduce this issue with the beta channel unknown can you reproduce this issue with the nightly channel yes cc laurenwags kjozwiak btlechowski geetasarvadnya | 1 |
44,332 | 5,623,848,533 | IssuesEvent | 2017-04-04 15:46:10 | intermine/intermine | https://api.github.com/repos/intermine/intermine | closed | InterPro - new database (SFLD - Structure-Function Linkage Database) | good first bug please-test | Protein2iprConverter.java needs an additional DB entry
} else if (dbId.startsWith("SFLD")) {
dbName = "Structure-Function Linkage Database (SFLD)";
SFLD:
http://sfld.rbvi.ucsf.edu/django/ | 1.0 | InterPro - new database (SFLD - Structure-Function Linkage Database) - Protein2iprConverter.java needs an additional DB entry
} else if (dbId.startsWith("SFLD")) {
dbName = "Structure-Function Linkage Database (SFLD)";
SFLD:
http://sfld.rbvi.ucsf.edu/django/ | test | interpro new database sfld structure function linkage database java needs an additional db entry else if dbid startswith sfld dbname structure function linkage database sfld sfld | 1 |
161,767 | 13,877,456,858 | IssuesEvent | 2020-10-17 04:10:44 | ytyubox/ithelp_from_swift_learn_objc | https://api.github.com/repos/ytyubox/ithelp_from_swift_learn_objc | closed | [Article related] Introduce the Swift-friendly methods of Objective-C | documentation | ## Checklist
- [ ] What to change: 3️⃣ Add a new article
- [ ] Is the draft ready?
- [ ] Has the branch been created?
## Description
<!-- Roughly describe how the draft will be written. -->
## Notes
<!-- Can hold references, prepared images, notes, etc. -->
Reference:
https://developer.apple.com/videos/play/wwdc2020/10680/
https://developer.apple.com/documentation/swift/objective-c_and_c_code_customization
The rest is blank.
| 1.0 | [Article related] Introduce the Swift-friendly methods of Objective-C - ## Checklist
- [ ] What to change: 3️⃣ Add a new article
- [ ] Is the draft ready?
- [ ] Has the branch been created?
## Description
<!-- Roughly describe how the draft will be written. -->
## Notes
<!-- Can hold references, prepared images, notes, etc. -->
Reference:
https://developer.apple.com/videos/play/wwdc2020/10680/
https://developer.apple.com/documentation/swift/objective-c_and_c_code_customization
The rest is blank.
| non_test | 介紹 objective c 的 swift firendly 方法 檢查項目 要修改什麼: ️⃣ 增加新文章 草稿有了嗎? branch建立了嗎? 描述 備註 參考: 以下空白。 | 0 |
294,253 | 22,143,342,169 | IssuesEvent | 2022-06-03 09:15:59 | Avaiga/taipy-doc | https://api.github.com/repos/Avaiga/taipy-doc | closed | Review documentation related to Airflow | back-end devops documentation Configuration | - Documentation on job config. airflow mode does not exist anymore.
- Documentation on how to install Airflow with docker for development purposes
- Documentation on using Aiflow in production environment (refer to official Airflow doc or Docker) | 1.0 | Review documentation related to Airflow - - Documentation on job config. airflow mode does not exist anymore.
- Documentation on how to install Airflow with docker for development purposes
- Documentation on using Aiflow in production environment (refer to official Airflow doc or Docker) | non_test | review documentation related to airflow documentation on job config airflow mode does not exist anymore documentation on how to install airflow with docker for development purposes documentation on using aiflow in production environment refer to official airflow doc or docker | 0 |
98,366 | 8,675,494,846 | IssuesEvent | 2018-11-30 11:03:04 | shahkhan40/shantestrep | https://api.github.com/repos/shahkhan40/shantestrep | closed | fxscantest : ApiV1ProjectsFindByNameNameGetPathParamNameMysqlSqlInjectionTimebound | fxscantest | Project : fxscantest
Job : uatenv
Env : uatenv
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=NTFkNzVmZmQtNWMzMS00OTEyLWI5OTktNjNkZTg1OWVkNjEw; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 10:14:55 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/projects/find-by-name/
Request :
Response :
{
"timestamp" : "2018-11-30T10:14:56.095+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/projects/find-by-name/"
}
Logs :
Assertion [@ResponseTime < 7000 OR @ResponseTime > 10000] resolved-to [819 < 7000 OR 819 > 10000] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]
--- FX Bot --- | 1.0 | fxscantest : ApiV1ProjectsFindByNameNameGetPathParamNameMysqlSqlInjectionTimebound - Project : fxscantest
Job : uatenv
Env : uatenv
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=NTFkNzVmZmQtNWMzMS00OTEyLWI5OTktNjNkZTg1OWVkNjEw; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 10:14:55 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/projects/find-by-name/
Request :
Response :
{
"timestamp" : "2018-11-30T10:14:56.095+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/projects/find-by-name/"
}
Logs :
Assertion [@ResponseTime < 7000 OR @ResponseTime > 10000] resolved-to [819 < 7000 OR 819 > 10000] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]
--- FX Bot --- | test | fxscantest project fxscantest job uatenv env uatenv region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api projects find by name logs assertion resolved to result assertion resolved to result fx bot | 1 |
101,621 | 12,698,747,356 | IssuesEvent | 2020-06-22 13:53:41 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | closed | Current line indicator rectangle of the High Contrast theme is not closed. | *as-designed | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- Use Help > Report Issue to prefill these. -->
- VSCode Version: 1.46.0
- OS Version: Windows 10 (2004)
Steps to Reproduce:
1. Set the theme to High Contrast
2. Type something.
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
The brown rectangle outline is stretched too far to the right, so the right edge is lost. Not a critical issue, but not aesthetically pleasing. It would be nice to move the right edge a little bit to the left, so that one could see a beautiful closed rectangle.

For a moment, I thought it could be intentionally designed so, but that does not seem to be the case, as the rectangle is closed on Linux.

| 1.0 | Current line indicator rectangle of the High Contrast theme is not closed. - <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- Use Help > Report Issue to prefill these. -->
- VSCode Version: 1.46.0
- OS Version: Windows 10 (2004)
Steps to Reproduce:
1. Set the theme to High Contrast
2. Type something.
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
The brown rectangle outline is stretched too far to the right, so the right edge is lost. Not a critical issue, but not aesthetically pleasing. It would be nice to move the right edge a little bit to the left, so that one could see a beautiful closed rectangle.

For a moment, I thought it could be intentionally designed so, but that does not seem to be the case, as the rectangle is closed on Linux.

| non_test | current line indicator rectangle of the high contrast theme is not closed report issue to prefill these vscode version os version windows steps to reproduce set the theme to high contrast type something does this issue occur when all extensions are disabled yes the brown rectangle outline is stretched too far to the right so the right edge is lost not a critical issue but not aesthetically pleasing it would be nice to move the right edge a little bit to the left so that one could see a beautiful closed rectangle for a moment i thought it could be intentionally designed so but that does not seem to be the case as the rectangle is closed on linux | 0 |
100,854 | 30,796,391,083 | IssuesEvent | 2023-07-31 20:15:04 | vitessio/vitess | https://api.github.com/repos/vitessio/vitess | closed | Vitess operator is unable to deploy vitess on Open Shift cluster | Type: CI/Build Component: Operator | ### Feature Description
Earlier the vitess operator existed on the red hat operator hub. Now it is not available.
I tried to install on openshift version.
[root@bastion ~]# oc - version
Client Version: 4.10.60
Server Version: 4.10.60
Kubernetes Version: v1.23.17+16bcd69
[root@bastion ~]#
I have used the initial_cluster.yaml file.Below error occurs on vitess operator logs.
[(https://stackoverflow.com/questions/76651126/unable-to-install-vitess-using-vitess-operator-on-red-hat-openshift-server)]
{"level":"error","ts":"2023-07-20T08:05:26Z","msg":"Reconciler error","controller":"vitessshard-controller","object":{"name":"example-commerce-x-x-0f5afee6","namespace":"test"},"namespace":"test","name":"example-commerce-x-x-0f5afee6",
"reconcileID":"703ca0b5-4c0d-4c19-9f43-a58a7a2d6d63","error":"pods \"example-vttablet-zone1-2548885007-46a852d0\" is forbidden: unable to validate against any security context constraint: [provider \"anyuid\":
Forbidden: not usable by user or serviceaccount, provider restricted: .spec.securityContext.fsGroup: Invalid value: []int64{999}: 999 is not an allowed group, spec.initContainers[0].securityContext.runAsUser:
Invalid value: 999: must be in the ranges: [1000700000, 1000709999], spec.initContainers[1].securityContext.runAsUser: Invalid value: 999: must be in the ranges: [1000700000, 1000709999], spec.containers[0].securityContext.runAsUser:
Invalid value: 999: must be in the ranges: [1000700000, 1000709999], spec.containers[1].securityContext.runAsUser: Invalid value: 999: must be in the ranges: [1000700000, 1000709999], spec.containers[2].securityContext.runAsUser:
Invalid value: 999: must be in the ranges: [1000700000, 1000709999], provider \"nonroot\": Forbidden: not usable by user or serviceaccount, provider \"hostmount-anyuid\": Forbidden: not usable by user or serviceaccount, provider
\"machine-api-termination-handler\": Forbidden: not usable by user or serviceaccount, provider \"hostnetwork\": Forbidden: not usable by user or serviceaccount, provider \"hostaccess\": Forbidden: not usable by user or serviceaccount, provider \"node-exporter\":
Forbidden: not usable by user or serviceaccount, provider \"privileged\": Forbidden: not usable by user or serviceaccount]","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/
controller-runtime@v0.14.3/pkg/internal/controller/controller.go:329\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.3/pkg/internal/controller/controller.go:274\
nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.3/pkg/internal/controller/controller.go:235"}
### Use Case(s)
Installation on various platform should be supported.
On-premise installation on red hat open shift server.
| 1.0 | Vitess operator is unable to deploy vitess on Open Shift cluster - ### Feature Description
Earlier the vitess operator existed on the red hat operator hub. Now it is not available.
I tried to install on openshift version.
[root@bastion ~]# oc - version
Client Version: 4.10.60
Server Version: 4.10.60
Kubernetes Version: v1.23.17+16bcd69
[root@bastion ~]#
I have used the initial_cluster.yaml file.Below error occurs on vitess operator logs.
[(https://stackoverflow.com/questions/76651126/unable-to-install-vitess-using-vitess-operator-on-red-hat-openshift-server)]
{"level":"error","ts":"2023-07-20T08:05:26Z","msg":"Reconciler error","controller":"vitessshard-controller","object":{"name":"example-commerce-x-x-0f5afee6","namespace":"test"},"namespace":"test","name":"example-commerce-x-x-0f5afee6",
"reconcileID":"703ca0b5-4c0d-4c19-9f43-a58a7a2d6d63","error":"pods \"example-vttablet-zone1-2548885007-46a852d0\" is forbidden: unable to validate against any security context constraint: [provider \"anyuid\":
Forbidden: not usable by user or serviceaccount, provider restricted: .spec.securityContext.fsGroup: Invalid value: []int64{999}: 999 is not an allowed group, spec.initContainers[0].securityContext.runAsUser:
Invalid value: 999: must be in the ranges: [1000700000, 1000709999], spec.initContainers[1].securityContext.runAsUser: Invalid value: 999: must be in the ranges: [1000700000, 1000709999], spec.containers[0].securityContext.runAsUser:
Invalid value: 999: must be in the ranges: [1000700000, 1000709999], spec.containers[1].securityContext.runAsUser: Invalid value: 999: must be in the ranges: [1000700000, 1000709999], spec.containers[2].securityContext.runAsUser:
Invalid value: 999: must be in the ranges: [1000700000, 1000709999], provider \"nonroot\": Forbidden: not usable by user or serviceaccount, provider \"hostmount-anyuid\": Forbidden: not usable by user or serviceaccount, provider
\"machine-api-termination-handler\": Forbidden: not usable by user or serviceaccount, provider \"hostnetwork\": Forbidden: not usable by user or serviceaccount, provider \"hostaccess\": Forbidden: not usable by user or serviceaccount, provider \"node-exporter\":
Forbidden: not usable by user or serviceaccount, provider \"privileged\": Forbidden: not usable by user or serviceaccount]","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/
controller-runtime@v0.14.3/pkg/internal/controller/controller.go:329\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.3/pkg/internal/controller/controller.go:274\
nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.3/pkg/internal/controller/controller.go:235"}
### Use Case(s)
Installation on various platform should be supported.
On-premise installation on red hat open shift server.
| non_test | vitess operator is unable to deploy vitess on open shift cluster feature description earlier the vitess operator existed on the red hat operator hub now it is not available i tried to install on openshift version oc version client version server version kubernetes version i have used the initial cluster yaml file below error occurs on vitess operator logs level error ts msg reconciler error controller vitessshard controller object name example commerce x x namespace test namespace test name example commerce x x reconcileid error pods example vttablet is forbidden unable to validate against any security context constraint provider anyuid forbidden not usable by user or serviceaccount provider restricted spec securitycontext fsgroup invalid value is not an allowed group spec initcontainers securitycontext runasuser invalid value must be in the ranges spec initcontainers securitycontext runasuser invalid value must be in the ranges spec containers securitycontext runasuser invalid value must be in the ranges spec containers securitycontext runasuser invalid value must be in the ranges spec containers securitycontext runasuser invalid value must be in the ranges provider nonroot forbidden not usable by user or serviceaccount provider hostmount anyuid forbidden not usable by user or serviceaccount provider machine api termination handler forbidden not usable by user or serviceaccount provider hostnetwork forbidden not usable by user or serviceaccount provider hostaccess forbidden not usable by user or serviceaccount provider node exporter forbidden not usable by user or serviceaccount provider privileged forbidden not usable by user or serviceaccount stacktrace sigs io controller runtime pkg internal controller controller reconcilehandler n t go pkg mod sigs io controller runtime pkg internal controller controller go nsigs io controller runtime pkg internal controller controller processnextworkitem n t go pkg mod sigs io controller runtime pkg internal controller controller go nsigs io controller runtime pkg internal controller controller start n t go pkg mod sigs io controller runtime pkg internal controller controller go use case s installation on various platform should be supported on premise installation on red hat open shift server | 0 |
194,845 | 14,690,017,894 | IssuesEvent | 2021-01-02 13:09:16 | github-vet/rangeloop-pointer-findings | https://api.github.com/repos/github-vet/rangeloop-pointer-findings | closed | pbolla0818/oci_terraform: oci/cloud_guard_responder_recipe_test.go; 16 LoC | fresh small test |
Found a possible issue in [pbolla0818/oci_terraform](https://www.github.com/pbolla0818/oci_terraform) at [oci/cloud_guard_responder_recipe_test.go](https://github.com/pbolla0818/oci_terraform/blob/c233d54c5fe32f12c234d6dceefba0a9b4ab3022/oci/cloud_guard_responder_recipe_test.go#L342-L357)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to responderRecipeId is reassigned at line 346
[Click here to see the code in its original context.](https://github.com/pbolla0818/oci_terraform/blob/c233d54c5fe32f12c234d6dceefba0a9b4ab3022/oci/cloud_guard_responder_recipe_test.go#L342-L357)
<details>
<summary>Click here to show the 16 line(s) of Go which triggered the analyzer.</summary>
```go
for _, responderRecipeId := range responderRecipeIds {
if ok := SweeperDefaultResourceId[responderRecipeId]; !ok {
deleteResponderRecipeRequest := oci_cloud_guard.DeleteResponderRecipeRequest{}
deleteResponderRecipeRequest.ResponderRecipeId = &responderRecipeId
deleteResponderRecipeRequest.RequestMetadata.RetryPolicy = getRetryPolicy(true, "cloud_guard")
_, error := cloudGuardClient.DeleteResponderRecipe(context.Background(), deleteResponderRecipeRequest)
if error != nil {
fmt.Printf("Error deleting ResponderRecipe %s %s, It is possible that the resource is already deleted. Please verify manually \n", responderRecipeId, error)
continue
}
waitTillCondition(testAccProvider, &responderRecipeId, responderRecipeSweepWaitCondition, time.Duration(3*time.Minute),
responderRecipeSweepResponseFetchOperation, "cloud_guard", true)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: c233d54c5fe32f12c234d6dceefba0a9b4ab3022
| 1.0 | pbolla0818/oci_terraform: oci/cloud_guard_responder_recipe_test.go; 16 LoC -
Found a possible issue in [pbolla0818/oci_terraform](https://www.github.com/pbolla0818/oci_terraform) at [oci/cloud_guard_responder_recipe_test.go](https://github.com/pbolla0818/oci_terraform/blob/c233d54c5fe32f12c234d6dceefba0a9b4ab3022/oci/cloud_guard_responder_recipe_test.go#L342-L357)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to responderRecipeId is reassigned at line 346
[Click here to see the code in its original context.](https://github.com/pbolla0818/oci_terraform/blob/c233d54c5fe32f12c234d6dceefba0a9b4ab3022/oci/cloud_guard_responder_recipe_test.go#L342-L357)
<details>
<summary>Click here to show the 16 line(s) of Go which triggered the analyzer.</summary>
```go
for _, responderRecipeId := range responderRecipeIds {
if ok := SweeperDefaultResourceId[responderRecipeId]; !ok {
deleteResponderRecipeRequest := oci_cloud_guard.DeleteResponderRecipeRequest{}
deleteResponderRecipeRequest.ResponderRecipeId = &responderRecipeId
deleteResponderRecipeRequest.RequestMetadata.RetryPolicy = getRetryPolicy(true, "cloud_guard")
_, error := cloudGuardClient.DeleteResponderRecipe(context.Background(), deleteResponderRecipeRequest)
if error != nil {
fmt.Printf("Error deleting ResponderRecipe %s %s, It is possible that the resource is already deleted. Please verify manually \n", responderRecipeId, error)
continue
}
waitTillCondition(testAccProvider, &responderRecipeId, responderRecipeSweepWaitCondition, time.Duration(3*time.Minute),
responderRecipeSweepResponseFetchOperation, "cloud_guard", true)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: c233d54c5fe32f12c234d6dceefba0a9b4ab3022
| test | oci terraform oci cloud guard responder recipe test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message reference to responderrecipeid is reassigned at line click here to show the line s of go which triggered the analyzer go for responderrecipeid range responderrecipeids if ok sweeperdefaultresourceid ok deleteresponderreciperequest oci cloud guard deleteresponderreciperequest deleteresponderreciperequest responderrecipeid responderrecipeid deleteresponderreciperequest requestmetadata retrypolicy getretrypolicy true cloud guard error cloudguardclient deleteresponderrecipe context background deleteresponderreciperequest if error nil fmt printf error deleting responderrecipe s s it is possible that the resource is already deleted please verify manually n responderrecipeid error continue waittillcondition testaccprovider responderrecipeid responderrecipesweepwaitcondition time duration time minute responderrecipesweepresponsefetchoperation cloud guard true leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 1 |
181,802 | 14,889,250,462 | IssuesEvent | 2021-01-20 21:07:57 | TooTallNate/Java-WebSocket | https://api.github.com/repos/TooTallNate/Java-WebSocket | closed | WebSocketClient.getRemoteSocketAddress() documentation issue | Documentation up-for-grabs | The documentation for [this method](https://github.com/TooTallNate/Java-WebSocket/blob/cfde6e05cb5e0c525dd36030ed085d240690c564/src/main/java/org/java_websocket/WebSocketListener.java#L199) indicates it can return null in the description but then states in the Returns: never return null.
Also spacing issue with "ornull"
That's all, also which does it return lol.

| 1.0 | WebSocketClient.getRemoteSocketAddress() documentation issue - The documentation for [this method](https://github.com/TooTallNate/Java-WebSocket/blob/cfde6e05cb5e0c525dd36030ed085d240690c564/src/main/java/org/java_websocket/WebSocketListener.java#L199) indicates it can return null in the description but then states in the Returns: never return null.
Also spacing issue with "ornull"
That's all, also which does it return lol.

| non_test | websocketclient getremotesocketaddress documentation issue the documentation for indicates it can return null in the description but then states in the returns never return null also spacing issue with ornull that s all also which does it return lol | 0 |
55,975 | 6,497,350,420 | IssuesEvent | 2017-08-22 13:44:57 | wp-cli/scaffold-command | https://api.github.com/repos/wp-cli/scaffold-command | closed | Add Support for Bitbucket Pipelines to --ci | command:scaffold command:scaffold-plugin-tests | Any thoughts on adding support for [Bitbucket Pipelines](https://bitbucket.org/product/features/pipelines) to `[--ci=<provider>]`?
* https://bitbucket.org/product/features/pipelines
* https://www.cuttlesoft.com/bitbucket-pipelines-first-impression/
* https://bitbucket.org/rw_grim/local-pipelines/overview
---
*bitbucket-pipelines.yml*
```yml
# This is a sample build configuration for PHP.
# Check our guides at https://confluence.atlassian.com/x/e8YWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: phpunit/phpunit:5.0.3
pipelines:
default:
- step:
script: # Modify the commands below to build your repository.
- composer install
``` | 1.0 | Add Support for Bitbucket Pipelines to --ci - Any thoughts on adding support for [Bitbucket Pipelines](https://bitbucket.org/product/features/pipelines) to `[--ci=<provider>]`?
* https://bitbucket.org/product/features/pipelines
* https://www.cuttlesoft.com/bitbucket-pipelines-first-impression/
* https://bitbucket.org/rw_grim/local-pipelines/overview
---
*bitbucket-pipelines.yml*
```yml
# This is a sample build configuration for PHP.
# Check our guides at https://confluence.atlassian.com/x/e8YWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: phpunit/phpunit:5.0.3
pipelines:
default:
- step:
script: # Modify the commands below to build your repository.
- composer install
``` | test | add support for bitbucket pipelines to ci any thoughts on adding support for to bitbucket pipelines yml yml this is a sample build configuration for php check our guides at for more examples only use spaces to indent your yml configuration you can specify a custom docker image from docker hub as your build environment image phpunit phpunit pipelines default step script modify the commands below to build your repository composer install | 1 |
49,033 | 25,952,361,154 | IssuesEvent | 2022-12-17 19:27:59 | qoollo/bob | https://api.github.com/repos/qoollo/bob | opened | Track min and max timestamp at the BLOB level and at the Pearl level | feature performance | This will allow to skip reading from holders if their max timestamp is less then the timestamp of already gotten record
Currently, we read from all holders that passed filtration by key:
https://github.com/qoollo/bob/blob/master/bob-backend/src/pearl/group.rs#L264 | True | Track min and max timestamp at the BLOB level and at the Pearl level - This will allow to skip reading from holders if their max timestamp is less then the timestamp of already gotten record
Currently, we read from all holders that passed filtration by key:
https://github.com/qoollo/bob/blob/master/bob-backend/src/pearl/group.rs#L264 | non_test | track min and max timestamp at the blob level and at the pearl level this will allow to skip reading from holders if their max timestamp is less then the timestamp of already gotten record currently we read from all holders that passed filtration by key | 0 |
220,799 | 17,261,516,831 | IssuesEvent | 2021-07-22 08:19:25 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | Re-enable remote debugging tests on 1.2.x branch | Area/Debugger Area/IntegrationTest Later Priority/High Team/DevTools Type/TestFailure | **Description:**
The following tests are currently disabled on 1.2.x branch due to **intermittent failures specific to Travis CI**.
- BallerinaTestRemoteDebugTest
- BallerinaTestRemoteDebugTest
- BallerinaBuildRemoteDebugTest
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
| 2.0 | Re-enable remote debugging tests on 1.2.x branch - **Description:**
The following tests are currently disabled on 1.2.x branch due to **intermittent failures specific to Travis CI**.
- BallerinaTestRemoteDebugTest
- BallerinaTestRemoteDebugTest
- BallerinaBuildRemoteDebugTest
**Steps to reproduce:**
**Affected Versions:**
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
| test | re enable remote debugging tests on x branch description the following tests are currently disabled on x branch due to intermittent failures specific to travis ci ballerinatestremotedebugtest ballerinatestremotedebugtest ballerinabuildremotedebugtest steps to reproduce affected versions os db other environment details and versions related issues optional suggested labels optional suggested assignees optional | 1 |
330,287 | 28,366,418,587 | IssuesEvent | 2023-04-12 14:09:54 | lambdaclass/starknet_in_rust | https://api.github.com/repos/lambdaclass/starknet_in_rust | opened | Add a test for an account with validations | Complex Tests | You can use the OpenZeppelin account in the cairo-lang repo | 1.0 | Add a test for an account with validations - You can use the OpenZeppelin account in the cairo-lang repo | test | add a test for an account with validations you can use the openzeppelin account in the cairo lang repo | 1 |
544,132 | 15,890,012,824 | IssuesEvent | 2021-04-10 13:52:54 | AY2021S2-CS2103T-T13-4/tp | https://api.github.com/repos/AY2021S2-CS2103T-T13-4/tp | closed | As a user, I want to view all properties and appointments side by side | priority.High | so that I can simultaneously view related property and appointment data . | 1.0 | As a user, I want to view all properties and appointments side by side - so that I can simultaneously view related property and appointment data . | non_test | as a user i want to view all properties and appointments side by side so that i can simultaneously view related property and appointment data | 0 |
253,105 | 8,051,555,222 | IssuesEvent | 2018-08-01 16:27:59 | ArchitudeSweden/FileMaker_Timelines | https://api.github.com/repos/ArchitudeSweden/FileMaker_Timelines | opened | Script improvements in FileMaker | FileMaker HighPriority enhancement | - [ ] Move export methods to the start script
- [ ] Improve the ExportToExcel method (atm the table with filtered data is visible in the script)
- [ ] After the project selection we have to wait for a really long time in order to see the timelines. Can we improve this script? | 1.0 | Script improvements in FileMaker - - [ ] Move export methods to the start script
- [ ] Improve the ExportToExcel method (atm the table with filtered data is visible in the script)
- [ ] After the project selection we have to wait for a really long time in order to see the timelines. Can we improve this script? | non_test | script improvements in filemaker move export methods to the start script improve the exporttoexcel method atm the table with filtered data is visible in the script after the project selection we have to wait for a really long time in order to see the timelines can we improve this script | 0 |