| column | dtype | range / distinct values |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | lengths 19 to 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 classes |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 classes |
| text | string | lengths 96 to 188k |
| binary_label | int64 | 0 to 1 |
**Row 10,783 · id 13,608,982,299 · IssuesEvent · 2020-09-23 03:55:53**
- repo: googleapis/java-automl (https://api.github.com/repos/googleapis/java-automl)
- action: closed
- labels: api: automl type: process
- title: Dependency Dashboard
- body:
This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-automl-1.x -->chore(deps): update dependency com.google.cloud:google-cloud-automl to v1.2.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-bigquery-1.x -->deps: update dependency com.google.cloud:google-cloud-bigquery to v1.120.0
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-storage-1.x -->deps: update dependency com.google.cloud:google-cloud-storage to v1.113.1
- [ ] <!-- rebase-all-open-prs -->**Check this option to rebase all the above open PRs at once**
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
- index: 1.0
- label: process
- binary_label: 1
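The checkbox lines in the body above embed machine-readable markers in HTML comments (`<!-- rebase-branch=... -->`). As a minimal sketch of how such a dashboard body could be parsed (illustrative only; this is not Renovate's actual implementation, and the helper name is made up):

```python
import re

# Matches a task-list line of the form seen above, e.g.
#   - [x] <!-- rebase-branch=renovate/foo-1.x -->chore(deps): ...
MARKER = re.compile(r"-\s\[(?P<checked>[ x])\]\s<!--\srebase-branch=(?P<branch>\S+)\s-->")

def checked_rebase_branches(body: str) -> list[str]:
    """Return the branch names whose checkbox has been ticked."""
    return [m["branch"] for m in MARKER.finditer(body) if m["checked"] == "x"]
```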
**Row 129,883 · id 17,930,821,553 · IssuesEvent · 2021-09-10 09:00:47**
- repo: navikt/nav-enonicxp-frontend (https://api.github.com/repos/navikt/nav-enonicxp-frontend)
- action: closed
- labels: design
- title: Change the "Ditt NAV" card on the front page when the user is logged in
- body:
When you are logged in on nav.no, none of the cards on the front page change.
The card "Logg inn på Ditt NAV" ("Log in to Ditt NAV") is still there, which leaves users unsure whether they are logged in or not.
As a first step we want to look at whether the card should be kept, and whether we should change what it says and what it links to.
For now we are only looking at this specific problem, but we also need to start planning how to use the logged-in state to give citizens better information. That will be a separate discussion.
- index: 1.0
- label: non_process
- binary_label: 0
**Row 4,038 · id 6,972,131,790 · IssuesEvent · 2017-12-11 16:06:32**
- repo: DevExpress/testcafe-hammerhead (https://api.github.com/repos/DevExpress/testcafe-hammerhead)
- action: closed
- labels: !IMPORTANT! AREA: client SYSTEM: resource processing TYPE: bug
- title: Page doesn't load in hammerhead-playground
- body:
Based on [this question](https://testcafe-discuss.devexpress.com/t/browser-hangs-when-running-testcafe-locally/623).
The url is private.
It happens due to an error in the `generateCallExpression` method:
`(node:2717) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): TypeError: First argument must be a string or Buffer`
- index: 1.0
- label: process
- binary_label: 1
**Row 10,040 · id 13,044,161,612 · IssuesEvent · 2020-07-29 03:47:24**
- repo: tikv/tikv (https://api.github.com/repos/tikv/tikv)
- action: closed
- labels: challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
- title: UCP: Migrate scalar function `SubDateDurationDecimal` from TiDB
- body:
## Description
Port the scalar function `SubDateDurationDecimal` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @andylokandy
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
- index: 2.0
- label: process
- binary_label: 1
**Row 59,165 · id 3,103,659,241 · IssuesEvent · 2015-08-31 11:33:54**
- repo: YetiForceCompany/YetiForceCRM (https://api.github.com/repos/YetiForceCompany/YetiForceCRM)
- action: closed
- labels: Label::MoreInfoRequired Priority::#2 Normal Type::Discussion
- title: Calendar visibility for the administrator after updating to 2.1
- body:
After updating from version 2.0 to 2.1 I no longer see other users' calendars (as an administrator; they were visible before). I also cannot add myself to the list of people a user's calendar is shared with...
- index: 1.0
- label: non_process
- binary_label: 0
**Row 171,027 · id 14,274,886,256 · IssuesEvent · 2020-11-22 06:59:35**
- repo: SpenceKonde/DxCore (https://api.github.com/repos/SpenceKonde/DxCore)
- action: closed
- labels: documentation
- title: Some documentation suggestions
- body:
Thanks for providing this core! I've just successfully uploaded Blink to an AVR128DB28 on a breadboard.
Couple of comments:
1. Using a Nano-based UPDI programmer I previously used to program 0-series and 1-series chips, I got the error:
avrdude: jtagmkII_paged_write(): timeout/error communicating with programmer (status -1)
Thanks to the issue by @DustinWatts I realised I need to update jtag2updi, and it then worked. Perhaps mention this?
2. The link **Making a cheap UPDI programmer** in megaTinyCore (now) gives a 404.
- index: 1.0
- label: non_process
- binary_label: 0
**Row 17,020 · id 22,390,575,575 · IssuesEvent · 2022-06-17 07:14:27**
- repo: python/cpython (https://api.github.com/repos/python/cpython)
- action: closed
- labels: type-bug expert-multiprocessing
- title: multiprocessing Pool maxtasksperchild=0 raises exception with endless traceback
- body:
BPO | [39477](https://bugs.python.org/issue39477)
--- | :---
Nosy | @pitrou, @applio
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2020-01-28.13:04:56.485>
labels = ['type-bug']
title = 'multiprocessing Pool maxtasksperchild=0 raises exception with endless traceback'
updated_at = <Date 2020-01-28.13:48:42.667>
user = 'https://bugs.python.org/jeyekomon'
```
bugs.python.org fields:
```python
activity = <Date 2020-01-28.13:48:42.667>
actor = 'xtreak'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = []
creation = <Date 2020-01-28.13:04:56.485>
creator = 'jeyekomon'
dependencies = []
files = []
hgrepos = []
issue_num = 39477
keywords = []
message_count = 1.0
messages = ['360872']
nosy_count = 3.0
nosy_names = ['pitrou', 'davin', 'jeyekomon']
pr_nums = []
priority = 'normal'
resolution = None
stage = None
status = 'open'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue39477'
versions = []
```
</p></details>
- index: 1.0
- label: process
- binary_label: 1
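A minimal repro sketch of the report above, assuming the behavior it describes: on affected versions, `maxtasksperchild=0` made each worker exit immediately so the pool kept respawning workers and printing tracebacks, while more recent CPython rejects the value up front.

```python
from multiprocessing import Pool

if __name__ == "__main__":
    try:
        # On affected versions this spawned workers that exited at once,
        # producing the endless traceback described above; newer CPython
        # validates the argument and raises ValueError instead.
        with Pool(processes=1, maxtasksperchild=0) as pool:
            print(pool.map(abs, [-1, 2]))
    except ValueError as exc:
        print("rejected up front:", exc)
```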
**Row 16,045 · id 20,191,377,522 · IssuesEvent · 2022-02-11 05:58:13**
- repo: novoda/gradle-static-analysis-plugin (https://api.github.com/repos/novoda/gradle-static-analysis-plugin)
- action: closed
- labels: process
- title: Improve release script to create a GH release entry
- body:
We are using a Gradle script to automate most of the release process, including:
- publish to Bintray
- publish groovydoc (eg: https://novoda.github.io/gradle-static-analysis-plugin/docs/0.5/)
- tag `master`
At the moment the script is not automating the creation of a proper release entry in GitHub from the tag pushed as part of the process. It would be great to automate this last bit too.
- index: 1.0
- label: process
- binary_label: 1
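For context, a release entry can be created from an already-pushed tag with a single call to the GitHub REST API (`POST /repos/{owner}/{repo}/releases`). A minimal sketch in Python (the plugin's own scripts are Gradle-based; the `GITHUB_TOKEN` variable and helper below are illustrative, not the project's code):

```python
import os
import requests

def create_release(owner: str, repo: str, tag: str, notes: str) -> str:
    """Create a GitHub release from an existing tag; return its URL."""
    resp = requests.post(
        f"https://api.github.com/repos/{owner}/{repo}/releases",
        headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
        json={"tag_name": tag, "name": tag, "body": notes},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["html_url"]
```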
**Row 97,631 · id 4,003,430,549 · IssuesEvent · 2016-05-12 00:18:09**
- repo: digital-detox/web-reader (https://api.github.com/repos/digital-detox/web-reader)
- action: closed
- labels: enhancement in progress priority 2
- title: Keep the recognizer property of the Recognizer class private
- body:
The `Recognizer` class [exposes the `recognizer` property](https://github.com/digital-detox/web-reader/blob/20a94a99c7e5e97580b9e885f0008c3c7fcb9210/src/reader/recognizer.js#L84) but it should be kept private.
The solution to this issue could be the same as the one adopted in #28, that is, to use a `WeakMap` to keep the property private but usable within the class.
- index: 1.0
- label: non_process
- binary_label: 0
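The `WeakMap` technique mentioned above keeps per-instance state in a map keyed by the instance, so it never appears as a public property. The repo itself is JavaScript; here is the same pattern sketched in Python with `weakref.WeakKeyDictionary` (class and method names are illustrative only):

```python
import weakref

# instance -> private recognizer state; weak keys mean an entry disappears
# when the owning instance is garbage-collected, as with a JS WeakMap.
_private = weakref.WeakKeyDictionary()

class Recognizer:
    def __init__(self, engine):
        _private[self] = engine  # never stored as a public attribute

    def recognize(self, audio):
        # Code in this module can still reach the state; outside callers
        # cannot get at it through the instance's public surface.
        return _private[self].process(audio)
```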
**Row 19,249 · id 25,444,353,864 · IssuesEvent · 2022-11-24 03:40:29**
- repo: python/cpython (https://api.github.com/repos/python/cpython)
- action: closed
- labels: type-feature expert-asyncio 3.12 expert-multiprocessing
- title: asyncio: support multiprocessing (support fork)
- body:
BPO | [22087](https://bugs.python.org/issue22087)
--- | :---
Nosy | @gvanrossum, @pitrou, @1st1, @thehesiod, @miss-islington
PRs | <li>python/cpython#7208</li><li>python/cpython#7215</li><li>python/cpython#7218</li><li>python/cpython#7226</li><li>python/cpython#7232</li><li>python/cpython#7233</li>
Files | <li>[test_loop.py](https://bugs.python.org/file36117/test_loop.py "Uploaded as text/plain at 2014-07-26.18:01:38 by dan.oreilly"): Test script demonstrating the issue</li><li>[handle_mp_unix.diff](https://bugs.python.org/file36118/handle_mp_unix.diff "Uploaded as text/plain at 2014-07-26.18:20:15 by dan.oreilly"): Patch that makes _UnixDefaultEventLoopPolicy create a new loop object if get_event_loop is called in a forked mp child process</li><li>[handle-mp_unix2.patch](https://bugs.python.org/file36119/handle-mp_unix2.patch "Uploaded as text/plain at 2014-07-26.20:13:57 by dan.oreilly"): Use os.getpid() instead of multiprocessing. Store pid state in Policy instance rather than the Loop instance.</li><li>[handle_mp_unix_with_test.diff](https://bugs.python.org/file36134/handle_mp_unix_with_test.diff "Uploaded as text/plain at 2014-07-27.16:09:52 by dan.oreilly"): Adds a unit test to previous patch</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2014-07-26.18:01:10.150>
labels = ['type-bug', 'expert-asyncio']
title = 'asyncio: support multiprocessing (support fork)'
updated_at = <Date 2018-05-30.00:56:36.541>
user = 'https://bugs.python.org/danoreilly'
```
bugs.python.org fields:
```python
activity = <Date 2018-05-30.00:56:36.541>
actor = 'yselivanov'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['asyncio']
creation = <Date 2014-07-26.18:01:10.150>
creator = 'dan.oreilly'
dependencies = []
files = ['36117', '36118', '36119', '36134']
hgrepos = []
issue_num = 22087
keywords = ['patch']
message_count = 23.0
messages = ['224082', '224084', '224085', '224097', '224125', '224140', '224143', '224144', '224145', '226698', '235404', '235411', '288327', '297222', '297226', '297227', '297229', '318077', '318092', '318135', '318140', '318143', '318144']
nosy_count = 7.0
nosy_names = ['gvanrossum', 'pitrou', 'zmedico', 'yselivanov', 'thehesiod', 'dan.oreilly', 'miss-islington']
pr_nums = ['7208', '7215', '7218', '7226', '7232', '7233']
priority = 'normal'
resolution = None
stage = 'patch review'
status = 'open'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue22087'
versions = ['Python 3.4', 'Python 3.5', 'Python 3.6']
```
</p></details>
<!-- gh-linked-prs -->
### Linked PRs
* gh-99539
<!-- /gh-linked-prs -->
- index: 1.0
- label: process
- binary_label: 1
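The attached patches are described as making the default Unix policy hand out a fresh loop when `get_event_loop` is called in a fork()ed child, keyed on `os.getpid()`. A minimal sketch of that idea (not the actual patch; subclassing the default policy purely for illustration):

```python
import asyncio
import os

class ForkAwareEventLoopPolicy(asyncio.DefaultEventLoopPolicy):
    """Create a new loop if we are running in a forked child process."""

    def __init__(self):
        super().__init__()
        self._pid = os.getpid()  # pid the current loop belongs to

    def get_event_loop(self):
        if os.getpid() != self._pid:
            # We are in a fork()ed child: the parent's loop must not be
            # reused, so install a fresh one for this process.
            self._pid = os.getpid()
            self.set_event_loop(self.new_event_loop())
        return super().get_event_loop()
```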
**Row 6,933 · id 2,610,318,213 · IssuesEvent · 2015-02-26 19:42:42**
- repo: chrsmith/republic-at-war (https://api.github.com/repos/chrsmith/republic-at-war)
- action: closed
- labels: auto-migrated Priority-Medium Type-Defect
- title: Text
- body:
```
BARC Speeder
Class: Infanty
[MISSING]
This unit can capture reinforcement points, build pads, and some structures.
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 5 May 2011 at 4:47
- index: 1.0
- label: non_process
- binary_label: 0
**Row 25,109 · id 12,217,683,846 · IssuesEvent · 2020-05-01 17:43:05**
- repo: terraform-providers/terraform-provider-aws (https://api.github.com/repos/terraform-providers/terraform-provider-aws)
- action: closed
- labels: bug service/ec2 stale
- title: cross-account vpc connections deleting vpc_peering every run
- body:
_This issue was originally opened by @mindlace as hashicorp/terraform#13385. It was migrated here as part of the [provider split](https://www.hashicorp.com/blog/upcoming-provider-changes-in-terraform-0-10/). The original body of the issue is below._
<hr>
Terraform tries to destroy my gateway on every subsequent run. (Terraform 0.9.2, details attached)
On the first apply, it correctly creates the cross-account vpc connection.
On subsequent applies, it plans to remove the route references. Attempting to apply results in an error like the attached.
[fail.txt](https://github.com/hashicorp/terraform/files/900577/fail.txt)
- index: 1.0
- label: non_process
- binary_label: 0
**Row 165,525 · id 26,185,634,351 · IssuesEvent · 2023-01-02 23:28:48**
- repo: phetsims/molecule-polarity (https://api.github.com/repos/phetsims/molecule-polarity)
- action: closed
- labels: design:phet-io
- title: Update examples.md
- body:
For #142
[Examples.md](https://github.com/phetsims/phet-io-sim-specific/blob/master/repos/molecule-polarity/client-guide/examples.md) should be updated for the 1.3 release.
- [x] Remove general examples covered in the PhET-iO Guide
- [x] Delete anything inaccurate
- [x] Ensure referenced `phetioIDs` still exist, update as necessary
- index: 1.0
- label: non_process
- binary_label: 0
**Row 19,965 · id 26,443,853,395 · IssuesEvent · 2023-01-16 04:34:21**
- repo: MicrosoftDocs/azure-docs (https://api.github.com/repos/MicrosoftDocs/azure-docs)
- action: closed
- labels: automation/svc triaged cxp doc-enhancement process-automation/subsvc Pri1
- title: Start/Stop VM during off-hours Version 2 url is broken
- body:
In relation to the first Note in this article, the following link for version 2 is showing a 404 - Page not found.
> We recommend that you start using [version 2](https://learn.microsoft.com/en-us/articles/azure-functions/start-stop-vms/overview.md), which is now generally available.
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Azure Automation Start/Stop VMs during off-hours overview](https://learn.microsoft.com/en-us/azure/automation/automation-solution-vm-management)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha**
- index: 1.0
- label: process
- binary_label: 1
**Row 7,083 · id 9,369,536,583 · IssuesEvent · 2019-04-03 11:22:06**
- repo: Yoast/wordpress-seo (https://api.github.com/repos/Yoast/wordpress-seo)
- action: closed
- labels: compatibility
- title: WordPress 4.9.9 Compatibility
- body:
Any special reason why the latest Yoast SEO 10.0 version (both free and premium) doesn't show up in updates if you are running WordPress 4.9.9 or 4.9.10?
- index: True
- label: non_process
- binary_label: 0
**Row 246,175 · id 7,893,210,837 · IssuesEvent · 2018-06-28 17:17:36**
- repo: visit-dav/issues-test (https://api.github.com/repos/visit-dav/issues-test)
- action: closed
- labels: Expected Use: 3 - Occasional Feature Impact: 3 - Medium OS: All Priority: Normal Support Group: Any
- title: Build data directory on Windows
- body:
A first step to doing testing on Windows is to get the data directory building.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Kathleen Biagas
Original creation: 11/15/2010 02:10 pm
Original update: 01/19/2011 01:06 am
Ticket number: 477
- index: 1.0
- label: non_process
- binary_label: 0
**Row 2,521 · id 5,287,583,798 · IssuesEvent · 2017-02-08 12:49:57**
- repo: mesosphere/marathon (https://api.github.com/repos/mesosphere/marathon)
- action: opened
- labels: Epic:Improve CI and Release Process next story
- title: Launch Cluster with Specific Version and Trigger Scale Test Job
- body:
As a Marathon developer I would like the cluster to start with a specific version automatically so that I don’t have to know how to start a cluster.
This job creates a cluster based on the following parameters:
- DC/OS branch or commit id
- Security mode: Strict, Permissive, None
- Number of nodes
The job tries to create a cluster with CCM. It does not retry in case of an error but reports it.
The number of nodes has an arbitrary limit of 500 nodes for now. That does not mean we support up to 500 nodes. In this first iteration it is up to the user to investigate why a cluster did not start.
- index: 1.0
- label: process
- binary_label: 1
**Row 6,124 · id 8,996,582,476 · IssuesEvent · 2019-02-02 02:33:53**
- repo: bow-simulation/virtualbow (https://api.github.com/repos/bow-simulation/virtualbow)
- action: closed
- labels: area: software process prio: high type: bug
- title: Problems running `python3 build.py` in Ubuntu 17.04
- body:
In GitLab by **ozra** on Jan 8, 2018, 08:33
First I got:
```
Traceback (most recent call last):
File "build.py", line 66, in <module>
build_vtk("build/vtk/source", "build/vtk/build", "build/vtk")
TypeError: build_vtk() takes 2 positional arguments but 3 were given
```
Tried to run it again immediately after (_just because_...) and then I got another error:
```
Traceback (most recent call last):
File "build.py", line 69, in <module>
build_application(".", "build/bow-simulator/build", "build/bow-simulator")
File "/home/oscar/3-p/bow-simulator/platforms/linux/build.py", line 28, in build_application
"-DCMAKE_BUILD_TYPE=Release"])
File "/usr/lib/python3.5/subprocess.py", line 247, in call
with Popen(*popenargs, **kwargs) as p:
File "/usr/lib/python3.5/subprocess.py", line 676, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.5/subprocess.py", line 1282, in _execute_child
raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'cmake'
```
[edit] I tried a third time and then I got yet another:
```
Traceback (most recent call last):
File "build.py", line 72, in <module>
build_packages("0.4", "build/packages/build", "build/packages")
File "/home/oscar/3-p/bow-simulator/platforms/linux/build.py", line 117, in build_packages
build_deb_package(version, build_dir + "/build-deb", output_dir)
File "/home/oscar/3-p/bow-simulator/platforms/linux/build.py", line 60, in build_deb_package
create_install_tree(build_dir)
File "/home/oscar/3-p/bow-simulator/platforms/linux/build.py", line 48, in create_install_tree
shutil.copy("build/bow-simulator/bin/bow-simulator", output_dir + "/usr/local/bin") # Todo: Repetition
File "/usr/lib/python3.5/shutil.py", line 241, in copy
copyfile(src, dst, follow_symlinks=follow_symlinks)
File "/usr/lib/python3.5/shutil.py", line 120, in copyfile
with open(src, 'rb') as fsrc:
FileNotFoundError: [Errno 2] No such file or directory: 'build/bow-simulator/bin/bow-simulator'
```
And a fourth time where it says nothing, so looks like it's succeeding (it's not)
- index: 1.0
- label: process
- binary_label: 1
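The second traceback above is `subprocess` failing because `cmake` is not on `PATH`, surfacing only as a bare `FileNotFoundError`. A small sketch of a friendlier guard such a build script could add (illustrative; not the project's actual `build.py`):

```python
import shutil
import subprocess
import sys

def run_cmake(args):
    # Popen's FileNotFoundError only says "No such file or directory:
    # 'cmake'"; checking PATH first lets us fail with a clearer message.
    if shutil.which("cmake") is None:
        sys.exit("error: cmake not found on PATH; install it first (e.g. 'sudo apt install cmake')")
    return subprocess.check_call(["cmake", *args])
```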
**Row 9,753 · id 12,737,163,814 · IssuesEvent · 2020-06-25 18:14:44**
- repo: dotnet/runtime (https://api.github.com/repos/dotnet/runtime)
- action: closed
- labels: area-System.Diagnostics.Process untriaged
- title: Windows and Linux results are different when using the Process.GetCurrentProcess function.
- body:
The result of GetCurrentProcess for each OS:


On CentOS 7 some information is missing.
I've been told that this issue was fixed in .NET Core 3.1.
However, the issue still occurs after applying .NET Core 3.1.
If you know the reason for that, please share it.
- index: 1.0
- label: process
- binary_label: 1
**Row 7,167 · id 10,311,326,265 · IssuesEvent · 2019-08-29 17:05:31**
- repo: dotnet/corefx (https://api.github.com/repos/dotnet/corefx)
- action: closed
- labels: area-System.Diagnostics.Process
- title: Process.NonpagedSystemMemorySize64, etc. are 0
- body:
This code:
```csharp
using System;
using System.Diagnostics;

namespace ConsoleApp32
{
    class Program
    {
        static void Main(string[] args)
        {
            Process currentProcess = Process.GetCurrentProcess();
            Console.WriteLine("NonpagedSystemMemorySize64: {0:#,##0}", currentProcess.NonpagedSystemMemorySize64);
            Console.WriteLine("PagedMemorySize64: {0:#,##0}", currentProcess.PagedMemorySize64);
            Console.WriteLine("PagedSystemMemorySize64: {0:#,##0}", currentProcess.PagedSystemMemorySize64);
            Console.WriteLine("PrivateMemorySize64: {0:#,##0}", currentProcess.PrivateMemorySize64);
        }
    }
}
```
Gives the following output in Linux:
```
NonpagedSystemMemorySize64: 0
PagedMemorySize64: 0
PagedSystemMemorySize64: 0
PrivateMemorySize64: 0
```
- index: 1.0
- label: process
- binary_label: 1
**Row 238,356 · id 18,239,277,908 · IssuesEvent · 2021-10-01 10:52:11**
- repo: 1ezio/ietians-diary (https://api.github.com/repos/1ezio/ietians-diary)
- action: closed
- labels: documentation good first issue hacktoberfest
- title: Mistakes in Readme
- body:
- Typos
- Android Logo not visible in dark mode
- Inappropriate sub-heading sizes
- index: 1.0
- label: non_process
- binary_label: 0
**Row 13,019 · id 15,306,585,267 · IssuesEvent · 2021-02-24 19:41:08**
- repo: vercel/hyper (https://api.github.com/repos/vercel/hyper)
- action: closed
- labels: good first issue 🤯 Type: Compatibility
- title: Build for Linux ARM devices
- body:
Any plans for a Linux ARM build? If nobody else is on it or has attempted it, I can give it a try.
- index: True
- label: non_process
- binary_label: 0
**Row 202,584 · id 15,287,029,494 · IssuesEvent · 2021-02-23 15:20:13**
- repo: cockroachdb/cockroach (https://api.github.com/repos/cockroachdb/cockroach)
- action: closed
- labels: C-test-failure O-roachtest O-robot branch-master release-blocker
- title: roachtest: sysbench/oltp_write_only/nodes=3/cpu=32/conc=128 failed
- body:
[(roachtest).sysbench/oltp_write_only/nodes=3/cpu=32/conc=128 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2657140&tab=buildLog) on [master@7853fd32de8b6dea869f2a2a92dcd7506f4a8998](https://github.com/cockroachdb/cockroach/commits/7853fd32de8b6dea869f2a2a92dcd7506f4a8998):
```
| | --pgsql-password= \
| | --pgsql-db=sysbench \
| | --report-interval=1 \
| | --time=600 \
| | --threads=128 \
| | --tables=10 \
| | --table_size=10000000 \
| | --auto_inc=false \
| | oltp_write_only prepare
| | ```
| Wraps: (3) exit status 139
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
|
| stdout:
| <... some data truncated by circular buffer; go to artifacts for details ...>
| 7-43846886773-84205124523-12355831368-23491120988-20356191733-96662940087', '42877436411-66241970795-03961401219-42185303580-14625943529'),(6142028, 4965383, '59789147254-27106246542-89291939032-07336910269-30372315349-59597660991-27213791106-81017093052-60313524291-03774020309', '41677976458-86828233414-31758045667-37800818816-28723992749'),(6142029, 5002146, '41681805518-90691912629-19486584487-63055859255-28706305487-17496033403-28209340426-33162796290-97414304649-55741698901', '31340475378-63892257553-83797741248-88039179841-67168351718'),(6142030, 4999325, '13436650237-09110316505-56434818857-96858847374-69154573974-89104087092-43881095954-81142164803-43935141514-54333720817', '94765551749-41444289143-75008400564-62382222318-10809613709'),(6142031, 5041295, '09831395438-76548648872-14708356669-90633273037-59440760742-18099118142-71427778938-58836514866-93662219050-55125451296', '43130217065-98758954565-68434523126-59099924414-11204321347'),(6142032, 5032110, '76773599574-20306856373-82075328036-08908268767-24435180544-26278225629-57476694458-83345800845-56022551811-81984921648', '48223285051-54477171217-37680636279-59887417801-51809312439'),(6142033, 5031974, '36206543181-26971282912-60822663458-60104343814-91414139373-81906111818-09492327417-12358059700-97826150076-58847035992', '98410214938-64099724188-32729862200-72556492785-03835190996'),(6142034, 4886332, '89569531815-45549616209-53545416311-37733188621-89399691804-33741113723-42352191595-51308677744-92742851096-59328808582', '36309533092-58701210225-67930768544-94522309490-87279380114'),(6142035, 6300588, '16559133955-02200351225-23056353558-22910079205-11405820355-05405145700-12022171419-09321418736-97704739230-89333522946', '61510334732-22731888465-46125864367-78860302235-43408885445'),(6142036, 4293941, '73986004554-53815810496-27624085689-27567680568-74770835999-21196794182-00642702920-00090585463-65203157863-47999143278', '40528008873-81396952280-75890774526-74325459656-99526983930'),(6142037, 5042689, '46992163578-83970593123-33595787548-46024200426-32913864546-92811787792-29419057175-13002409174-43310912763-66209145775', '21018122672-54560049754-76279127273-40515107759-81096136022'),(6142038, 4979575, '88136719500-45635125690-50569110013-71264297268-38331280342-90505086449-73564410717-69680817597-73242585878-11441972787', '82733420759-47844090832-34406703235-70136425582-12426845032'),(6142039, 5030716, '92607119586-21255812809-08391896400-35756624444-09590827740-23098925133-91816337071-49213573499-12524740889-40868541633', '95897084471-71095257996-71890215965-69342799286-10291158550'),(6142040, 4982886, '69725469161-83796004962-91029509967-52933736799-10030742693-84884512992-31351963325-32930025254-35709365173-48139879218', '14734838082-42328349751-20906456008-57532950877-81883690844'),(6142041, 5001060, '50290542207-53425641328-23641260438-24321534313-08574958759-28178112466-68159999034-27131974331-17925342253-86230165722', '34826616999-06608458771-62815342212-80010275683-39962938534'),(6142042, 5007435, '76257873654-17916037102-01486300634-32804775707-53040544052-72966732944-24306602866-20495189279-07905402600-35160295439', '07988593612-46540400791-50186230451-79711281364-02519306670'),(6142043, 5798695, '40869788639-20756383873-10241370193-17863193074-53938543244-86210818157-76888567194-33696889233-37634578470-85756558224', '20072495592-46308220286-70474211671-80912881855-98469115906'),(6142044, 4999957, 
'51065815466-24456501721-78884499936-85744080664-18788630628-93785974571-96186772809-27335003385-75514646039-31847148788', '68253458045-03462423614-96658924255-09029267156-01335037147'),(6142045, 5018946, '22327842180-61770336676-26613074693-56824506025-38133690712-35172672572-28615447986-47118975693-42998707487-08771783258', '12308362753-30718471921-78914543938-52812627631-58037238904'),(6142046, 5018387, '58484072750-84104452208-94945742413-61713836901-76742889115-49478090234-21970033630-72523156678-55781594701-06776110764',FATAL: `sysbench.cmdline.call_command' function failed: /usr/share/sysbench/oltp_common.lua:230: db_bulk_insert_next() failed
Wraps: (4) exit status 20
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError
cluster.go:2687,sysbench.go:124,sysbench.go:145,test_runner.go:767: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
-- stack trace:
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2675
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2683
| main.runSysbench
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/sysbench.go:124
| main.registerSysbench.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/sysbench.go:145
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:767
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2731
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
-- stack trace:
| main.init
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2645
| runtime.doInit
| /usr/local/go/src/runtime/proc.go:5652
| runtime.main
| /usr/local/go/src/runtime/proc.go:191
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError
```
<details><summary>More</summary><p>
Artifacts: [/sysbench/oltp_write_only/nodes=3/cpu=32/conc=128](https://teamcity.cockroachdb.com/viewLog.html?buildId=2657140&tab=artifacts#/sysbench/oltp_write_only/nodes=3/cpu=32/conc=128)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asysbench%2Foltp_write_only%2Fnodes%3D3%2Fcpu%3D32%2Fconc%3D128.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
- index: 2.0
- label: non_process
| 0
|
10,351
| 13,177,561,150
|
IssuesEvent
|
2020-08-12 07:36:28
|
spring-projects/spring-hateoas
|
https://api.github.com/repos/spring-projects/spring-hateoas
|
closed
|
Map content is not getting serialized properly
|
in: core process: in progress type: bug
|
spring-hateoas version: **1.2.0-SNAPSHOT**
When the content of an `EntityModel` is a `Map`, it is not converted into proper `JSON`: the map entries are _duplicated_, appearing both at the root level and under `content`.
Test case to describe this behavior:
```
@Test
public void testSerializationOfMap() throws JsonProcessingException {
final Map<String, String> map = new HashMap<>();
map.put("key", "value");
final String serialized = new ObjectMapper().writerWithDefaultPrettyPrinter().writeValueAsString(EntityModel.of(map));
System.out.println("Incorrect representation, key value are duplicating at root level as well as under content");
System.out.println(serialized);
class KeyValue {
public KeyValue(String value) {
key = value;
}
public String key;
}
KeyValue keyValue = new KeyValue("value");
final String serializedKeyValue = new ObjectMapper().writerWithDefaultPrettyPrinter().writeValueAsString(EntityModel.of(keyValue));
System.out.println("Correct representation when using custom class KeyValue");
System.out.println(serializedKeyValue);
}
```
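For reference, the incorrect serialization looks roughly like the following (an illustrative rendering based on the duplication described above, not captured from a specific run), with the map entry repeated at the root level as well as under `content`:
```
{
  "key" : "value",
  "content" : {
    "key" : "value"
  }
}
```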
|
1.0
|
Map content is not getting serialized properly - spring-hateoas version: **1.2.0-SNAPSHOT**
When the content of an `EntityModel` is a `Map`, it is not converted into proper `JSON`: the map entries are _duplicated_, appearing both at the root level and under `content`.
Test case to describe this behavior:
```
@Test
public void testSerializationOfMap() throws JsonProcessingException {
final Map<String, String> map = new HashMap<>();
map.put("key", "value");
final String serialized = new ObjectMapper().writerWithDefaultPrettyPrinter().writeValueAsString(EntityModel.of(map));
System.out.println("Incorrect representation, key value are duplicating at root level as well as under content");
System.out.println(serialized);
class KeyValue {
public KeyValue(String value) {
key = value;
}
public String key;
}
KeyValue keyValue = new KeyValue("value");
final String serializedKeyValue = new ObjectMapper().writerWithDefaultPrettyPrinter().writeValueAsString(EntityModel.of(keyValue));
System.out.println("Correct representation when using custom class KeyValue");
System.out.println(serializedKeyValue);
}
```
|
process
|
map content is not getting serialized properly spring hateoas version snapshot when the content is a map in entitymodel it is not getting converted in proper json the content is getting duplicated test case to describe this behavior test public void testserializationofmap throws jsonprocessingexception final map map new hashmap map put key value final string serialized new objectmapper writerwithdefaultprettyprinter writevalueasstring entitymodel of map system out println incorrect representation key value are duplicating at root level as well as under content system out println serialized class keyvalue public keyvalue string value key value public string key keyvalue keyvalue new keyvalue value final string serializedkeyvalue new objectmapper writerwithdefaultprettyprinter writevalueasstring entitymodel of keyvalue system out println correct representation when using custom class keyvalue system out println serializedkeyvalue
| 1
|
22,151
| 10,731,366,148
|
IssuesEvent
|
2019-10-28 19:23:25
|
melsorg/github-scanner-test
|
https://api.github.com/repos/melsorg/github-scanner-test
|
opened
|
CVE-2011-3048 (Medium) detected in reactos-backups/ros-amd64-bringup@60669
|
security vulnerability
|
## CVE-2011-3048 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>reactosbackups/ros-amd64-bringup@60669</b></p></summary>
<p>
<p>A free Windows-compatible Operating System</p>
<p>Library home page: <a href=https://github.com/vgalnt/reactos.git>https://github.com/vgalnt/reactos.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/melsorg/github-scanner-test/commit/38c8615a6d0a047787b5e7401328782154ba03e4">38c8615a6d0a047787b5e7401328782154ba03e4</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (3)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /github-scanner-test/libpng/pngset.c
- /github-scanner-test/libpng/pngerror.c
- /github-scanner-test/libpng/pngmem.c
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The png_set_text_2 function in pngset.c in libpng 1.0.x before 1.0.59, 1.2.x before 1.2.49, 1.4.x before 1.4.11, and 1.5.x before 1.5.10 allows remote attackers to cause a denial of service (crash) or execute arbitrary code via a crafted text chunk in a PNG image file, which triggers a memory allocation failure that is not properly handled, leading to a heap-based buffer overflow.
<p>Publish Date: 2012-05-29
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-3048>CVE-2011-3048</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2011-3048">https://nvd.nist.gov/vuln/detail/CVE-2011-3048</a></p>
<p>Release Date: 2012-05-29</p>
<p>Fix Resolution: 1.0.59,1.2.49,1.4.11,1.5.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2011-3048 (Medium) detected in reactos-backups/ros-amd64-bringup@60669 - ## CVE-2011-3048 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>reactosbackups/ros-amd64-bringup@60669</b></p></summary>
<p>
<p>A free Windows-compatible Operating System</p>
<p>Library home page: <a href=https://github.com/vgalnt/reactos.git>https://github.com/vgalnt/reactos.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/melsorg/github-scanner-test/commit/38c8615a6d0a047787b5e7401328782154ba03e4">38c8615a6d0a047787b5e7401328782154ba03e4</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (3)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /github-scanner-test/libpng/pngset.c
- /github-scanner-test/libpng/pngerror.c
- /github-scanner-test/libpng/pngmem.c
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The png_set_text_2 function in pngset.c in libpng 1.0.x before 1.0.59, 1.2.x before 1.2.49, 1.4.x before 1.4.11, and 1.5.x before 1.5.10 allows remote attackers to cause a denial of service (crash) or execute arbitrary code via a crafted text chunk in a PNG image file, which triggers a memory allocation failure that is not properly handled, leading to a heap-based buffer overflow.
<p>Publish Date: 2012-05-29
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-3048>CVE-2011-3048</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2011-3048">https://nvd.nist.gov/vuln/detail/CVE-2011-3048</a></p>
<p>Release Date: 2012-05-29</p>
<p>Fix Resolution: 1.0.59,1.2.49,1.4.11,1.5.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in reactos backups ros bringup cve medium severity vulnerability vulnerable library reactosbackups ros bringup a free windows compatible operating system library home page a href found in head commit a href library source files the source files were matched to this source library based on a best effort match source libraries are selected from a list of probable public libraries github scanner test libpng pngset c github scanner test libpng pngerror c github scanner test libpng pngmem c vulnerability details the png set text function in pngset c in libpng x before x before x before and x before allows remote attackers to cause a denial of service crash or execute arbitrary code via a crafted text chunk in a png image file which triggers a memory allocation failure that is not properly handled leading to a heap based buffer overflow publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
229,018
| 17,498,167,147
|
IssuesEvent
|
2021-08-10 05:27:26
|
nitrictech/docs
|
https://api.github.com/repos/nitrictech/docs
|
closed
|
REST API Tutorial
|
documentation
|
Provide an introductory REST API tutorial for Nitric, giving developers who are new to Nitric a straightforward way to get started.
|
1.0
|
REST API Tutorial - Provide an introductory REST API tutorial for Nitric, giving developers who are new to Nitric a straightforward way to get started.
|
non_process
|
rest api tutorial provide an introductory rest api tutorial for nitric which is useful for a developer who is new to nitric as a way to get started
| 0
|
656,355
| 21,727,745,879
|
IssuesEvent
|
2022-05-11 09:11:27
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
net: tcp: No retries of a TCP FIN message
|
bug priority: medium area: Networking
|
**Describe the bug**
When a connection is opened and a FIN message does not reach the server, it is never retried (no ACK is received). The context is properly closed, but the connection stays open at the server because it did not receive the FIN message.
**To Reproduce**
Use the tests as added in the following branch
https://github.com/ssharks/zephyr/tree/net/tcp_obstructed_close
Run the following tests:
./scripts/twister -T tests/net/socket/tcp -p qemu_x86
**Expected behavior**
More than one FIN message is expected to be sent, at least 3. This reduces the probability that the connection stays open on the other side in case of packet loss.
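The expected behavior amounts to a bounded retransmission loop along these lines (an illustrative Java sketch with hypothetical names and timings; Zephyr's actual implementation is C inside the network stack):
```java
// Illustrative bounded FIN retransmission; RETRY_COUNT stands in for
// CONFIG_NET_TCP_RETRY_COUNT and all names here are hypothetical.
final class FinRetransmit {
    static final int RETRY_COUNT = 3;
    static final long RTO_MS = 200; // retransmission timeout per attempt

    static boolean closeWithFin(Peer peer) throws InterruptedException {
        for (int attempt = 0; attempt <= RETRY_COUNT; attempt++) {
            peer.sendFin();                 // (re)send the FIN segment
            if (peer.waitForAck(RTO_MS)) {
                return true;                // ACK received, close completes
            }
        }
        return false;                       // give up; peer may stay half open
    }

    interface Peer {
        void sendFin();
        boolean waitForAck(long timeoutMs) throws InterruptedException;
    }
}
```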
**Impact**
Could lead to TCP connections staying half open at one half of the connection in case of packet loss.
**Logs**
```
===================================================================
START - test_close_obstructed
Assertion failed at WEST_TOPDIR/zephyr/tests/net/socket/tcp/src/main.c:774: test_close_obstructed: (dropped_packets not equal to CONFIG_NET_TCP_RETRY_COUNT)
Insuffcient FIN retries, got 1
FAIL - test_close_obstructed in 0.25 seconds
===================================================================
```
**Environment**
- OS: Ubuntu 20.04
- Toolchain: Zephyr SDK 0.13.2
- Commit: branch of https://github.com/ssharks/zephyr/tree/net/tcp_obstructed_close
|
1.0
|
net: tcp: No retries of a TCP FIN message - **Describe the bug**
When a connection is opened and a FIN message does not reach the server, it is never retried (no ACK is received). The context is properly closed, but the connection stays open at the server because it did not receive the FIN message.
**To Reproduce**
Use the tests as added in the following branch
https://github.com/ssharks/zephyr/tree/net/tcp_obstructed_close
Run the following tests:
./scripts/twister -T tests/net/socket/tcp -p qemu_x86
**Expected behavior**
More than one FIN message is expected to be sent, at least 3. This reduces the probability that the connection stays open on the other side in case of packet loss.
**Impact**
Could lead to TCP connections staying half open at one half of the connection in case of packet loss.
**Logs**
```
===================================================================
START - test_close_obstructed
Assertion failed at WEST_TOPDIR/zephyr/tests/net/socket/tcp/src/main.c:774: test_close_obstructed: (dropped_packets not equal to CONFIG_NET_TCP_RETRY_COUNT)
Insuffcient FIN retries, got 1
FAIL - test_close_obstructed in 0.25 seconds
===================================================================
```
**Environment**
- OS: Ubuntu 20.04
- Toolchain: Zephyr SDK 0.13.2
- Commit: branch of https://github.com/ssharks/zephyr/tree/net/tcp_obstructed_close
|
non_process
|
net tcp no retries of a tcp fin message describe the bug when a connection is opened and a fin message does not reach the server it is never retried no ack received the context is properly closed by the connection stays open at the server because it did not receive the fin message to reproduce use the tests as added in the following branch run the following tests scripts twister t tests net socket tcp p qemu expected behavior more than fin message is expected to be send minimally this reduces the probability the connection stays open at the other side in case of packet loss impact could lead to tcp connections staying half open at one half of the connection in case of packet loss logs start test close obstructed assertion failed at west topdir zephyr tests net socket tcp src main c test close obstructed dropped packets not equal to config net tcp retry count insuffcient fin retries got fail test close obstructed in seconds environment os ubuntu toolchain zephyr sdk commit branch of
| 0
|
4,953
| 7,801,007,885
|
IssuesEvent
|
2018-06-09 16:01:04
|
sysown/proxysql
|
https://api.github.com/repos/sysown/proxysql
|
closed
|
Forward SELECTs on LAST_INSERT_ID
|
CONNECTION POOL PROTOCOL QUERY PROCESSOR ROUTING
|
When an application executes `SELECT LAST_INSERT_ID()`, ProxySQL doesn't execute this query on any backend, but replies by returning the value of `last_insert_id` sent in the [OK packet](https://dev.mysql.com/doc/internals/en/packet-OK_Packet.html)
Although this is fine in many circumstances, it is not always correct.
Details on the similarities and differences between the two are listed [here](https://dev.mysql.com/doc/refman/5.7/en/mysql-insert-id.html)
For this reason, ProxySQL should avoid returning incorrect data when the two values do not match.
To return the correct value of `LAST_INSERT_ID()` and not `last_insert_id`, ProxySQL should execute the query on the same backend connection. This is possible only if multiplexing is disabled.
A possible workaround that should work in most use cases is (sketched below):
* track the last HG where `affected_rows` is not 0
* if the HG already has a connection attached (multiplexing is disabled), execute the query on the same connection
* if the HG doesn't have a connection attached, fall back on the old algorithm and return `last_insert_id`
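A minimal Java sketch of that fallback logic, assuming hypothetical `Connection` and tracking helpers (this is not ProxySQL's actual C++ internals):
```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed routing: remember the hostgroup (HG) of the last
// write, and only answer SELECT LAST_INSERT_ID() from the OK-packet value
// when no backend connection for that HG is still attached.
final class LastInsertIdRouter {
    interface Connection { long query(String sql); }

    private Integer lastWriteHg;            // HG of last stmt with affected_rows != 0
    private long okPacketLastInsertId;      // value captured from the OK packet
    private final Map<Integer, Connection> attached = new HashMap<>();

    void onConnectionAttached(int hg, Connection c) { attached.put(hg, c); }
    void onConnectionDetached(int hg) { attached.remove(hg); }

    void onOkPacket(int hg, long affectedRows, long lastInsertId) {
        if (affectedRows != 0) {
            lastWriteHg = hg;               // track where the write happened
        }
        okPacketLastInsertId = lastInsertId;
    }

    long selectLastInsertId() {
        Connection conn = lastWriteHg == null ? null : attached.get(lastWriteHg);
        if (conn != null) {                 // multiplexing disabled: same connection
            return conn.query("SELECT LAST_INSERT_ID()");
        }
        return okPacketLastInsertId;        // old algorithm: OK-packet value
    }
}
```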
|
1.0
|
Forward SELECTs on LAST_INSERT_ID - When an application executes `SELECT LAST_INSERT_ID()`, ProxySQL doesn't execute this query on any backend, but replies by returning the value of `last_insert_id` sent in the [OK packet](https://dev.mysql.com/doc/internals/en/packet-OK_Packet.html)
Although this is fine in many circumstances, it is not always correct.
Details on the similarities and differences between the two are listed [here](https://dev.mysql.com/doc/refman/5.7/en/mysql-insert-id.html)
For this reason, ProxySQL should avoid returning incorrect data when the two values do not match.
To return the correct value of `LAST_INSERT_ID()` and not `last_insert_id`, ProxySQL should execute the query on the same backend connection. This is possible only if multiplexing is disabled.
A possible workaround that should work in most use cases is:
* track the last HG where `affected_rows` is not 0
* if the HG already has a connection attached (multiplexing is disabled), execute the query on the same connection
* if the HG doesn't have a connection attached, fall back on the old algorithm and return `last_insert_id`
|
process
|
forward selects on last insert id when an application executes select last insert id proxysql doesn t execute this query on any backend but replies returning the value of last insert id sent in the although this is ok in many circumstances this is not always correct details similarities and differences about the two are listed for this reason proxysql should try to not send incorrect data when they do not match to return the correct value of last insert id and not last insert id proxysql should execute the query on the same backend connection this is possible only if multiplexing is disabled a possible workaround that should work on most of the use cases is track the the last hg where affected rows is not if the hg has already a connection attached multiplexing is disabled execute the query on the same connection if the hg doesn t have a connection attached fall back on old algorithm and return last insert id
| 1
|
284,160
| 8,736,225,283
|
IssuesEvent
|
2018-12-11 18:56:32
|
googleapis/google-cloud-java
|
https://api.github.com/repos/googleapis/google-cloud-java
|
closed
|
Failed to unpack response from 'any' field on v1beta2.ClusterControllerClient.createClusterAsync
|
api: dataproc priority: p2 status: blocked type: bug
|
[`ClusterControllerClient.createClusterAsync`](https://github.com/GoogleCloudPlatform/google-cloud-java/blob/master/google-cloud-clients/google-cloud-dataproc/src/main/java/com/google/cloud/dataproc/v1beta2/ClusterControllerClient.java) throws the following exception even though it successfully created a cluster.
```
java.util.concurrent.ExecutionException: com.google.api.gax.rpc.UnknownException: java.lang.IllegalStateException: Failed to unpack object from 'any' field. Expected com.google.cloud.dataproc.v1beta2.Cluster, found type.googleapis.com/google.cloud.dataproc.v1.Cluster
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:502)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:481)
at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:83)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:62)
at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:127)
at App.main(App.java:41)
Caused by: com.google.api.gax.rpc.UnknownException: java.lang.IllegalStateException: Failed to unpack object from 'any' field. Expected com.google.cloud.dataproc.v1beta2.Cluster, found type.googleapis.com/google.cloud.dataproc.v1.Cluster
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:117)
at com.google.api.gax.grpc.ProtoOperationTransformers$ResponseTransformer.apply(ProtoOperationTransformers.java:69)
at com.google.api.gax.grpc.ProtoOperationTransformers$ResponseTransformer.apply(ProtoOperationTransformers.java:46)
at com.google.api.core.ApiFutures$GaxFunctionToGuavaFunction.apply(ApiFutures.java:204)
at com.google.common.util.concurrent.AbstractTransformFuture$TransformFuture.doTransform(AbstractTransformFuture.java:249)
at com.google.common.util.concurrent.AbstractTransformFuture$TransformFuture.doTransform(AbstractTransformFuture.java:239)
at com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:130)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:399)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:973)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:821)
at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:663)
at com.google.api.gax.retrying.BasicRetryingFuture.handleAttempt(BasicRetryingFuture.java:159)
at com.google.api.gax.retrying.CallbackChainRetryingFuture$AttemptCompletionListener.handle(CallbackChainRetryingFuture.java:134)
at com.google.api.gax.retrying.CallbackChainRetryingFuture$AttemptCompletionListener.run(CallbackChainRetryingFuture.java:114)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:399)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:973)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:821)
at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:663)
at com.google.common.util.concurrent.AbstractTransformFuture$TransformFuture.setResult(AbstractTransformFuture.java:255)
at com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:177)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:399)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:973)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:821)
at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:663)
at com.google.api.gax.retrying.BasicRetryingFuture.handleAttempt(BasicRetryingFuture.java:159)
at com.google.api.gax.retrying.CallbackChainRetryingFuture$AttemptCompletionListener.handle(CallbackChainRetryingFuture.java:134)
at com.google.api.gax.retrying.CallbackChainRetryingFuture$AttemptCompletionListener.run(CallbackChainRetryingFuture.java:114)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:399)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:973)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:821)
at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:663)
at com.google.api.core.AbstractApiFuture$InternalSettableFuture.set(AbstractApiFuture.java:90)
at com.google.api.core.AbstractApiFuture.set(AbstractApiFuture.java:73)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onSuccess(GrpcExceptionCallable.java:88)
at com.google.api.core.ApiFutures$1.onSuccess(ApiFutures.java:73)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1374)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:399)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:973)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:821)
at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:663)
at io.grpc.stub.ClientCalls$GrpcFuture.set(ClientCalls.java:488)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:466)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:684)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:403)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:459)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:546)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:467)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:584)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: Failed to unpack object from 'any' field. Expected com.google.cloud.dataproc.v1beta2.Cluster, found type.googleapis.com/google.cloud.dataproc.v1.Cluster
at com.google.api.gax.grpc.ProtoOperationTransformers$AnyTransformer.apply(ProtoOperationTransformers.java:131)
at com.google.api.gax.grpc.ProtoOperationTransformers$ResponseTransformer.apply(ProtoOperationTransformers.java:67)
... 62 more
```
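The root cause visible in the trace is a type-URL mismatch: the long-running operation's response is packed as a v1 `Cluster`, while the v1beta2 client tries to unpack a v1beta2 `Cluster`. A minimal sketch of the mismatch using protobuf's `Any` directly (assuming both generated `Cluster` classes are on the classpath):
```java
import com.google.protobuf.Any;

// Pack a v1 Cluster and check it against the v1beta2 type, the way the
// operation-response transformer does before unpacking.
public class AnyMismatchDemo {
    public static void main(String[] args) {
        Any packed = Any.pack(com.google.cloud.dataproc.v1.Cluster.getDefaultInstance());
        // is() compares type URLs; here it is false, so unpack() would fail,
        // which gax surfaces as the IllegalStateException in the trace above.
        boolean matches = packed.is(com.google.cloud.dataproc.v1beta2.Cluster.class);
        System.out.println(packed.getTypeUrl() + " matches v1beta2: " + matches);
    }
}
```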
|
1.0
|
Failed to unpack response from 'any' field on v1beta2.ClusterControllerClient.createClusterAsync - [`ClusterControllerClient.createClusterAsync`](https://github.com/GoogleCloudPlatform/google-cloud-java/blob/master/google-cloud-clients/google-cloud-dataproc/src/main/java/com/google/cloud/dataproc/v1beta2/ClusterControllerClient.java) throws the following exception even though it successfully created a cluster.
```
java.util.concurrent.ExecutionException: com.google.api.gax.rpc.UnknownException: java.lang.IllegalStateException: Failed to unpack object from 'any' field. Expected com.google.cloud.dataproc.v1beta2.Cluster, found type.googleapis.com/google.cloud.dataproc.v1.Cluster
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:502)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:481)
at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:83)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:62)
at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:127)
at App.main(App.java:41)
Caused by: com.google.api.gax.rpc.UnknownException: java.lang.IllegalStateException: Failed to unpack object from 'any' field. Expected com.google.cloud.dataproc.v1beta2.Cluster, found type.googleapis.com/google.cloud.dataproc.v1.Cluster
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:117)
at com.google.api.gax.grpc.ProtoOperationTransformers$ResponseTransformer.apply(ProtoOperationTransformers.java:69)
at com.google.api.gax.grpc.ProtoOperationTransformers$ResponseTransformer.apply(ProtoOperationTransformers.java:46)
at com.google.api.core.ApiFutures$GaxFunctionToGuavaFunction.apply(ApiFutures.java:204)
at com.google.common.util.concurrent.AbstractTransformFuture$TransformFuture.doTransform(AbstractTransformFuture.java:249)
at com.google.common.util.concurrent.AbstractTransformFuture$TransformFuture.doTransform(AbstractTransformFuture.java:239)
at com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:130)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:399)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:973)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:821)
at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:663)
at com.google.api.gax.retrying.BasicRetryingFuture.handleAttempt(BasicRetryingFuture.java:159)
at com.google.api.gax.retrying.CallbackChainRetryingFuture$AttemptCompletionListener.handle(CallbackChainRetryingFuture.java:134)
at com.google.api.gax.retrying.CallbackChainRetryingFuture$AttemptCompletionListener.run(CallbackChainRetryingFuture.java:114)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:399)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:973)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:821)
at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:663)
at com.google.common.util.concurrent.AbstractTransformFuture$TransformFuture.setResult(AbstractTransformFuture.java:255)
at com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:177)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:399)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:973)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:821)
at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:663)
at com.google.api.gax.retrying.BasicRetryingFuture.handleAttempt(BasicRetryingFuture.java:159)
at com.google.api.gax.retrying.CallbackChainRetryingFuture$AttemptCompletionListener.handle(CallbackChainRetryingFuture.java:134)
at com.google.api.gax.retrying.CallbackChainRetryingFuture$AttemptCompletionListener.run(CallbackChainRetryingFuture.java:114)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:399)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:973)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:821)
at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:663)
at com.google.api.core.AbstractApiFuture$InternalSettableFuture.set(AbstractApiFuture.java:90)
at com.google.api.core.AbstractApiFuture.set(AbstractApiFuture.java:73)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onSuccess(GrpcExceptionCallable.java:88)
at com.google.api.core.ApiFutures$1.onSuccess(ApiFutures.java:73)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1374)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:399)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:973)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:821)
at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:663)
at io.grpc.stub.ClientCalls$GrpcFuture.set(ClientCalls.java:488)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:466)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:684)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:403)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:459)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:546)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:467)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:584)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: Failed to unpack object from 'any' field. Expected com.google.cloud.dataproc.v1beta2.Cluster, found type.googleapis.com/google.cloud.dataproc.v1.Cluster
at com.google.api.gax.grpc.ProtoOperationTransformers$AnyTransformer.apply(ProtoOperationTransformers.java:131)
at com.google.api.gax.grpc.ProtoOperationTransformers$ResponseTransformer.apply(ProtoOperationTransformers.java:67)
... 62 more
```
|
non_process
|
failed to unpack response from any field on clustercontrollerclient createclusterasync throws the following exception when it successfully created a cluster java util concurrent executionexception com google api gax rpc unknownexception java lang illegalstateexception failed to unpack object from any field expected com google cloud dataproc cluster found type googleapis com google cloud dataproc cluster at com google common util concurrent abstractfuture getdonevalue abstractfuture java at com google common util concurrent abstractfuture get abstractfuture java at com google common util concurrent abstractfuture trustedfuture get abstractfuture java at com google common util concurrent forwardingfuture get forwardingfuture java at com google api gax longrunning operationfutureimpl get operationfutureimpl java at app main app java caused by com google api gax rpc unknownexception java lang illegalstateexception failed to unpack object from any field expected com google cloud dataproc cluster found type googleapis com google cloud dataproc cluster at com google api gax rpc apiexceptionfactory createexception apiexceptionfactory java at com google api gax grpc protooperationtransformers responsetransformer apply protooperationtransformers java at com google api gax grpc protooperationtransformers responsetransformer apply protooperationtransformers java at com google api core apifutures gaxfunctiontoguavafunction apply apifutures java at com google common util concurrent abstracttransformfuture transformfuture dotransform abstracttransformfuture java at com google common util concurrent abstracttransformfuture transformfuture dotransform abstracttransformfuture java at com google common util concurrent abstracttransformfuture run abstracttransformfuture java at com google common util concurrent moreexecutors directexecutor execute moreexecutors java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture set abstractfuture java at com google api gax retrying basicretryingfuture handleattempt basicretryingfuture java at com google api gax retrying callbackchainretryingfuture attemptcompletionlistener handle callbackchainretryingfuture java at com google api gax retrying callbackchainretryingfuture attemptcompletionlistener run callbackchainretryingfuture java at com google common util concurrent moreexecutors directexecutor execute moreexecutors java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture set abstractfuture java at com google common util concurrent abstracttransformfuture transformfuture setresult abstracttransformfuture java at com google common util concurrent abstracttransformfuture run abstracttransformfuture java at com google common util concurrent moreexecutors directexecutor execute moreexecutors java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture set abstractfuture java at com google api gax retrying basicretryingfuture handleattempt basicretryingfuture java at com google api gax retrying callbackchainretryingfuture attemptcompletionlistener handle 
callbackchainretryingfuture java at com google api gax retrying callbackchainretryingfuture attemptcompletionlistener run callbackchainretryingfuture java at com google common util concurrent moreexecutors directexecutor execute moreexecutors java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture set abstractfuture java at com google api core abstractapifuture internalsettablefuture set abstractapifuture java at com google api core abstractapifuture set abstractapifuture java at com google api gax grpc grpcexceptioncallable exceptiontransformingfuture onsuccess grpcexceptioncallable java at com google api core apifutures onsuccess apifutures java at com google common util concurrent futures callbacklistener run futures java at com google common util concurrent moreexecutors directexecutor execute moreexecutors java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture set abstractfuture java at io grpc stub clientcalls grpcfuture set clientcalls java at io grpc stub clientcalls unarystreamtofuture onclose clientcalls java at io grpc partialforwardingclientcalllistener onclose partialforwardingclientcalllistener java at io grpc forwardingclientcalllistener onclose forwardingclientcalllistener java at io grpc forwardingclientcalllistener simpleforwardingclientcalllistener onclose forwardingclientcalllistener java at io grpc internal censusstatsmodule statsclientinterceptor onclose censusstatsmodule java at io grpc partialforwardingclientcalllistener onclose partialforwardingclientcalllistener java at io grpc forwardingclientcalllistener onclose forwardingclientcalllistener java at io grpc forwardingclientcalllistener simpleforwardingclientcalllistener onclose forwardingclientcalllistener java at io grpc internal censustracingmodule tracingclientinterceptor onclose censustracingmodule java at io grpc internal clientcallimpl closeobserver clientcallimpl java at io grpc internal clientcallimpl access clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl close clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl access clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runincontext clientcallimpl java at io grpc internal contextrunnable run contextrunnable java at io grpc internal serializingexecutor run serializingexecutor java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask access scheduledthreadpoolexecutor java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask run scheduledthreadpoolexecutor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java lang illegalstateexception failed to unpack object from any field expected com google cloud dataproc cluster found type googleapis com google cloud dataproc cluster at com google api gax grpc protooperationtransformers anytransformer apply protooperationtransformers java at com google api gax grpc 
protooperationtransformers responsetransformer apply protooperationtransformers java more
| 0
|
63,660
| 3,197,207,835
|
IssuesEvent
|
2015-10-01 02:14:45
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
There should be a cluster-level SELinux capability
|
priority/P2 team/api team/node
|
Forking from #12944
Currently, the SELinux support in kubernetes is essentially optional and is driven by whether
SELinux is enabled on any particular node. It should be possible for cluster operators to enforce
that SELinux is enabled if SELinux integration is expected by the cluster. If SELinux integration
is expected and SELinux is not enabled on a node, the node should have a `NotReady` status.
There should be a cluster-wide capability that controls whether SELinux integration is enabled in
the cluster. Cluster operators generally either run SELinux or they don't; we do not know of any
cases where an operator wants to have SELinux enforcing on some nodes in a cluster and not on
others.
@swagiaal and I have been discussing this; he is beginning work on a PR now.
@bgrant0607 @thockin @erictune @smarterclayton
A new `EnableSELinuxIntegration` capability should be added to the `capabilities.Capabilities`
struct:
```go
// Capabilities defines the set of capabilities available within the system.
// For now these are global. Eventually they may be per-user
type Capabilities struct {
// descriptions omitted for brevity
AllowPrivileged bool
HostNetworkSources []string
PerConnectionBandwidthLimitBytesPerSec int64
// EnableSELinuxIntegration controls whether SELinux integration
// is expected from Kubernetes
EnableSELinuxIntegration bool
}
```
There should be validations added to ensure that if SELinux integration is not enabled, pods cannot
be created with SELinux contexts set. If this capability is set and SELinux is not enabled on the node, the node should report `NotReady` status.
|
1.0
|
There should be a cluster-level SELinux capability - Forking from #12944
Currently, the SELinux support in kubernetes is essentially optional and is driven by whether
SELinux is enabled on any particular node. It should be possible for cluster operators to enforce
that SELinux is enabled if SELinux integration is expected by the cluster. If SELinux integration
is expected and SELinux is not enabled on a node, the node should have a `NotReady` status.
There should be a cluster-wide capability that controls whether SELinux integration is enabled in
the cluster. Cluster operators generally either run SELinux or they don't; we do not know of any
cases where an operator wants to have SELinux enforcing on some nodes in a cluster and not on
others.
@swagiaal and I have been discussing this; he is beginning work on a PR now.
@bgrant0607 @thockin @erictune @smarterclayton
A new `EnableSELinuxIntegration` capability should be added to the `capabilities.Capabilities`
struct:
```go
// Capabilities defines the set of capabilities available within the system.
// For now these are global. Eventually they may be per-user
type Capabilities struct {
// descriptions omitted for brevity
AllowPrivileged bool
HostNetworkSources []string
PerConnectionBandwidthLimitBytesPerSec int64
// EnableSELinuxIntegration controls whether SELinux integration
// is expected from Kubernetes
EnableSELinuxIntegration bool
}
```
There should be validations added to ensure that if SELinux integration is not enabled, pods cannot
be created with SELinux contexts set. If this capability is set and SELinux is not enabled on the node, the node should report `NotReady` status.
|
non_process
|
there should be a cluster level selinux capability forking from currently the selinux support in kubernetes is essentially optional and is driven by whether selinux is enabled on any particular node it should be possible for cluster operators to enforce that selinux is enabled if selinux integration is expected by the cluster if selinux integration is expected and selinux is not enabled on a node the node should have a notready status there should be a cluster wide capability that controls whether selinux integration is enabled in the cluster cluster operators generally either run selinux or they don t we do not know of any cases where an operator wants to have selinux enforcing on some nodes in a cluster and not on others swagiaal and i have been discussing this he is beginning work on a pr now thockin erictune smarterclayton a new enableselinuxintegration capability should be added to the capabilities capabilities struct go capabilities defines the set of capabilities available within the system for now these are global eventually they may be per user type capabilities struct descriptions omitted for brevity allowprivileged bool hostnetworksources string perconnectionbandwidthlimitbytespersec enableselinuxintegration controls whether selinux integration is expected from kubernetes enableselinuxintegration bool there should be validations added to ensure that if selinux integration is not enabled pods cannot be created with selinux contexts set if this capability is set and selinux is not enabled on the node the node should report notready status
| 0
|
32,253
| 13,784,726,368
|
IssuesEvent
|
2020-10-08 21:22:52
|
microsoft/vscode-cpptools
|
https://api.github.com/repos/microsoft/vscode-cpptools
|
closed
|
Including seemingly irrelevant header file breaks Intellisense
|
Language Service bug more info needed need repro
|
**Type: LanguageService**
<!----- Input information below ----->
<!--
**Prior to filing an issue, please review:**
- Existing issues at https://github.com/Microsoft/vscode-cpptools/issues
- Our documentation at https://code.visualstudio.com/docs/languages/cpp
- FAQs at https://code.visualstudio.com/docs/cpp/faq-cpp
-->
**Describe the bug**
- OS and Version: Linux (Ubuntu 18.04)
- VS Code Version: `1.46.0
a5d1cc28bb5da32ec67e86cc50f84c67cc690321
x64`
- C/C++ Extension Version: v0.28.3
- Other extensions you installed (and if the issue persists after disabling them): I disabled all but vscode-cpptools, issue persists
- Does this issue involve using SSH remote to run the extension on a remote machine?: no
- A clear and concise description of what the bug is, including information about the workspace (i.e. is the workspace a single project or multiple projects, size of the project, etc).
Autocomplete/suggest stops working after `#include`ing a certain header file (i.e. a ROS header file in this case)
**Steps to reproduce**
1. Intellisense autocomplete/suggest is working (see screenshots below) when pressing Ctrl+Space
2. I `#include` a certain header file
3. Press Ctrl+Space, now autocomplete says 'no suggestions found'.
The complete, Intellisense-breaking file:
```
#include "nav_msgs/Odometry.h"
#include "ros/node_handle.h" // <------ if I include this, autocomplete stops working
int main(int argc, char const *argv[])
{
nav_msgs::Odometry::
return 0;
}
```
**Expected behavior**
I expect that including extra headers does not break autocomplete.
<!-- Please provide the following logs that show diagnostics and debugging information about the language server.
1. Logs from the command `C/C++: Log Diagnostics`
2. Logs from [the language server](https://code.visualstudio.com/docs/cpp/enable-logging-cpp#_enable-logging-for-the-language-server)
-->
<details>
<summary><strong>Logs</strong></summary>
<!-- Note: do not remove empty line after </summary> tag, otherwise the code blocks formatting won't show correctly. -->
```
-------- Diagnostics - 6/18/2020, 10:37:48 AM
Version: 0.28.3
Current Configuration:
{
"browse": {
"limitSymbolsToIncludedHeaders": true,
"path": [
"<path/to/folder>/catkin_ws/devel/include/**",
"/opt/ros/melodic/include/**",
"/usr/include/**",
"${workspaceFolder}"
]
},
"includePath": [
"<path/to/folder>/devel/include/**",
"/opt/ros/melodic/include/**",
"/usr/include/**"
],
"name": "Linux",
"compilerPath": "/usr/bin/gcc",
"cStandard": "gnu11",
"cppStandard": "gnu++17",
"compilerArgs": []
}
Translation Unit Mappings:
[ <path/to/folder>/src/test_intellisense.cpp ]:
<path/to/folder>/src/test_intellisense.cpp
Translation Unit Configurations:
[ <path/to/folder>/src/test_intellisense.cpp ]:
Process ID: 22869
Memory Usage: 254 MB
Compiler Path: /usr/bin/gcc
Includes:
/usr/include/c++/7
/usr/include/x86_64-linux-gnu/c++/7
/usr/include/c++/7/backward
/usr/lib/gcc/x86_64-linux-gnu/7/include
/usr/local/include
/usr/lib/gcc/x86_64-linux-gnu/7/include-fixed
/usr/include/x86_64-linux-gnu
/usr/include
/opt/ros/melodic/include
/opt/ros/melodic/include/kdl
/opt/ros/melodic/include/moveit
/usr/include/bsd
/usr/include/c++/7/ext
/usr/include/boost/predef/os
Standard Version: c++17
IntelliSense Mode: gcc-x64
Other Flags:
--g++
--gnu_version=70500
Total Memory Usage: 254 MB
```
</details>
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. -->


**Additional context**
<!--
* Call Stacks: For bugs like crashes, deadlocks, infinite loops, etc. that we are not able to repro and for which the call stack may be useful, please attach a debugger and/or create a dmp and provide the call stacks. Windows binaries have symbols available in VS Code by setting your "symbolSearchPath" to "https://msdl.microsoft.com/download/symbols".
-->
|
1.0
|
Including seemingly irrelevant header file breaks Intellisense - **Type: LanguageService**
<!----- Input information below ----->
<!--
**Prior to filing an issue, please review:**
- Existing issues at https://github.com/Microsoft/vscode-cpptools/issues
- Our documentation at https://code.visualstudio.com/docs/languages/cpp
- FAQs at https://code.visualstudio.com/docs/cpp/faq-cpp
-->
**Describe the bug**
- OS and Version: Linux (Ubuntu 18.04)
- VS Code Version: `1.46.0
a5d1cc28bb5da32ec67e86cc50f84c67cc690321
x64`
- C/C++ Extension Version: v0.28.3
- Other extensions you installed (and if the issue persists after disabling them): I disabled all but vscode-cpptools, issue persists
- Does this issue involve using SSH remote to run the extension on a remote machine?: no
- A clear and concise description of what the bug is, including information about the workspace (i.e. is the workspace a single project or multiple projects, size of the project, etc).
Autocomplete/suggest stops working after `#include`ing a certain header file (i.e. a ROS header file in this case)
**Steps to reproduce**
1. Intellisense autocomplete/suggest is working (see screenshots below) when pressing Ctrl+Space
2. I `#include` a certain header file
3. Press Ctrl+Space, now autocomplete says 'no suggestions found'.
The complete, Intellisense-breaking file:
```
#include "nav_msgs/Odometry.h"
#include "ros/node_handle.h" // <------ if I include this, autocomplete stops working
int main(int argc, char const *argv[])
{
nav_msgs::Odometry::
return 0;
}
```
**Expected behavior**
I expect that including extra headers does not break autocomplete.
<!-- Please provide the following logs that show diagnostics and debugging information about the language server.
1. Logs from the command `C/C++: Log Diagnostics`
2. Logs from [the language server](https://code.visualstudio.com/docs/cpp/enable-logging-cpp#_enable-logging-for-the-language-server)
-->
<details>
<summary><strong>Logs</strong></summary>
<!-- Note: do not remove empty line after </summary> tag, otherwise the code blocks formatting won't show correctly. -->
```
-------- Diagnostics - 6/18/2020, 10:37:48 AM
Version: 0.28.3
Current Configuration:
{
"browse": {
"limitSymbolsToIncludedHeaders": true,
"path": [
"<path/to/folder>/catkin_ws/devel/include/**",
"/opt/ros/melodic/include/**",
"/usr/include/**",
"${workspaceFolder}"
]
},
"includePath": [
"<path/to/folder>/devel/include/**",
"/opt/ros/melodic/include/**",
"/usr/include/**"
],
"name": "Linux",
"compilerPath": "/usr/bin/gcc",
"cStandard": "gnu11",
"cppStandard": "gnu++17",
"compilerArgs": []
}
Translation Unit Mappings:
[ <path/to/folder>/src/test_intellisense.cpp ]:
<path/to/folder>/src/test_intellisense.cpp
Translation Unit Configurations:
[ <path/to/folder>/src/test_intellisense.cpp ]:
Process ID: 22869
Memory Usage: 254 MB
Compiler Path: /usr/bin/gcc
Includes:
/usr/include/c++/7
/usr/include/x86_64-linux-gnu/c++/7
/usr/include/c++/7/backward
/usr/lib/gcc/x86_64-linux-gnu/7/include
/usr/local/include
/usr/lib/gcc/x86_64-linux-gnu/7/include-fixed
/usr/include/x86_64-linux-gnu
/usr/include
/opt/ros/melodic/include
/opt/ros/melodic/include/kdl
/opt/ros/melodic/include/moveit
/usr/include/bsd
/usr/include/c++/7/ext
/usr/include/boost/predef/os
Standard Version: c++17
IntelliSense Mode: gcc-x64
Other Flags:
--g++
--gnu_version=70500
Total Memory Usage: 254 MB
```
</details>
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. -->


**Additional context**
<!--
* Call Stacks: For bugs like crashes, deadlocks, infinite loops, etc. that we are not able to repro and for which the call stack may be useful, please attach a debugger and/or create a dmp and provide the call stacks. Windows binaries have symbols available in VS Code by setting your "symbolSearchPath" to "https://msdl.microsoft.com/download/symbols".
-->
|
non_process
|
including seemingly irrelevant header file breaks intellisense type languageservice prior to filing an issue please review existing issues at our documentation at faqs at describe the bug os and version linux ubuntu vs code version c c extension version other extensions you installed and if the issue persists after disabling them i disabled all but vscode cpptools issue persists does this issue involve using ssh remote to run the extension on a remote machine no a clear and concise description of what the bug is including information about the workspace i e is the workspace a single project or multiple projects size of the project etc autocomplete suggest stops working after include ing a certain header file i e a ros header file in this case steps to reproduce intellisense autocomplete suggest is working see screenshots below when pressing ctrl space i include a certain header file press ctrl space now autocomplete says no suggestions found the complete intellisense breaking file include nav msgs odometry h include ros node handle h if i include this autocomplete stops working int main int argc char const argv nav msgs odometry return expected behavior i expect that including extra headers does not break autocomplete please provide the following logs that show diagnostics and debugging information about the language server logs from the command c c log diagnostics logs from logs tag otherwise the code blocks formatting won t show correctly diagnostics am version current configuration browse limitsymbolstoincludedheaders true path catkin ws devel include opt ros melodic include usr include workspacefolder includepath devel include opt ros melodic include usr include name linux compilerpath usr bin gcc cstandard cppstandard gnu compilerargs translation unit mappings src test intellisense cpp translation unit configurations process id memory usage mb compiler path usr bin gcc includes usr include c usr include linux gnu c usr include c backward usr lib gcc linux gnu include usr local include usr lib gcc linux gnu include fixed usr include linux gnu usr include opt ros melodic include opt ros melodic include kdl opt ros melodic include moveit usr include bsd usr include c ext usr include boost predef os standard version c intellisense mode gcc other flags g gnu version total memory usage mb screenshots additional context call stacks for bugs like crashes deadlocks infinite loops etc that we are not able to repro and for which the call stack may be useful please attach a debugger and or create a dmp and provide the call stacks windows binaries have symbols available in vs code by setting your symbolsearchpath to
| 0
|
278,303
| 24,144,313,566
|
IssuesEvent
|
2022-09-21 17:19:06
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
node-kubelet-serial-crio job is failing/erroring
|
priority/important-soon sig/node kind/failing-test triage/accepted
|
### Which jobs are failing?
node-kubelet-serial-crio: https://testgrid.k8s.io/sig-node-cri-o#node-kubelet-serial-crio&show-stale-tests=
### Which tests are failing?
Tests fail to run.
### Since when has it been failing?
Since 03/23. First failure: https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial-cri-o/1506730409776386048
### Testgrid link
https://testgrid.k8s.io/sig-node-cri-o#node-kubelet-serial-crio&show-stale-tests=
### Reason for failure (if possible)
Have not yet investigated.
### Anything else we need to know?
/cc @saschagrunert
could this be related to the work you're doing injecting SSH keys? IIRC this job runs on the Google cluster, not the prow community cluster
### Relevant SIG(s)
/sig node
|
1.0
|
node-kubelet-serial-crio job is failing/erroring - ### Which jobs are failing?
node-kubelet-serial-crio: https://testgrid.k8s.io/sig-node-cri-o#node-kubelet-serial-crio&show-stale-tests=
### Which tests are failing?
Tests fail to run.
### Since when has it been failing?
Since 03/23. First failure: https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial-cri-o/1506730409776386048
### Testgrid link
https://testgrid.k8s.io/sig-node-cri-o#node-kubelet-serial-crio&show-stale-tests=
### Reason for failure (if possible)
Have not yet investigated.
### Anything else we need to know?
/cc @saschagrunert
could this be related to the work you're doing injecting SSH keys? IIRC this job runs on the Google cluster, not the prow community cluster
### Relevant SIG(s)
/sig node
|
non_process
|
node kubelet serial crio job is failing erroring which jobs are failing node kubelet serial crio which tests are failing tests fail to run since when has it been failing since first failure testgrid link reason for failure if possible have not yet investigated anything else we need to know cc saschagrunert could this be related to the work you re doing injecting ssh keys iirc this job runs on the google not prow community cluster relevant sig s sig node
| 0
|
1,389
| 3,955,855,209
|
IssuesEvent
|
2016-04-29 22:56:07
|
mapbox/mapbox-gl-js
|
https://api.github.com/repos/mapbox/mapbox-gl-js
|
opened
|
Backporting fixes
|
meta testing & release process
|
As `mapbox-gl-js` becomes more widely deployed, stability becomes more important.
Our current release process bundles both bug fixes and new ~~bugs~~ features into each release. Users who don't need the new features might want to upgrade to the latest release for bug fixes; however, the new release may have new features ~~bugs~~ too.
One solution is to backport bug fixes from each release to the previous minor release. Bug fixes in `v0.18.x` would be backported to create new `v0.17.y` releases. This process could be largely automated (`git cherry-pick`).
Thoughts?
cc @scothis @ansis @mourner @tmcw @jfirebaugh @kkaefer @1ec5
|
1.0
|
Backporting fixes - As `mapbox-gl-js` becomes more widely deployed, stability becomes more important.
Our current release process bundles both bug fixes and new ~~bugs~~ features into each release. Users who don't need the new features might want to upgrade to the latest release for bug fixes; however, the new release may have new features ~~bugs~~ too.
One solution is to backport bug fixes from each release to the previous minor release. Bug fixes in `v0.18.x` would be backported to create new `v0.17.y` releases. This process could be largely automated (`git cherry-pick`).
Thoughts?
cc @scothis @ansis @mourner @tmcw @jfirebaugh @kkaefer @1ec5
|
process
|
backporting fixes as mapbox gl js becomes more widely deployed stability becomes more important our current release processes bundles both bug fixes and new bugs features into each release users who don t need the new features might want to upgrade to the latest release for bug fixes however the new release may have new features bugs too one solution is to backport bug fixes from each release the previous minor release bug fixes in x would be backported to create new y releases this process could be largely automated git cherry pick thoughts cc scothis ansis mourner tmcw jfirebaugh kkaefer
| 1
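The backport workflow proposed in the record above lends itself to a small automation sketch. The commit hash and branch name below are hypothetical, and this is only an illustration of the `git cherry-pick` idea, not the project's actual release tooling.
```python
import subprocess

def backport(commit: str, release_branch: str) -> None:
    """Cherry-pick a bug-fix commit onto a previous release branch.

    Both arguments are hypothetical examples; real automation would
    also handle conflicts, run tests, and push/tag the result.
    """
    subprocess.run(["git", "checkout", release_branch], check=True)
    # -x records the original commit hash in the backport's message.
    subprocess.run(["git", "cherry-pick", "-x", commit], check=True)

# Example: backport a v0.18.x fix onto the v0.17 release branch.
backport("abc1234", "release-v0.17")
```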
|
10,308
| 6,668,530,965
|
IssuesEvent
|
2017-10-03 16:05:26
|
numbbo/coco
|
https://api.github.com/repos/numbbo/coco
|
opened
|
return value of COCODataArchive.get
|
Usability
|
When we want to get some archived data by index like
```python
import cocopp
' '.join(cocopp.bbob.get(i, remote=False) for i in [2, 13, 33])
```
this raises an exception when the data have not been downloaded yet. If `get` would return `''` instead of `None`, it would be fine.
```python
' '.join(bbob.get(i, remote=False) or '' for i in [2, 13, 33])
```
works, but it's probably hard to find for any user inexperienced with Python.
|
True
|
return value of COCODataArchive.get - When we want to get some archived data by index like
```python
import cocopp
' '.join(cocopp.bbob.get(i, remote=False) for i in [2, 13, 33])
```
this raises an exception when the data have not been downloaded yet. If `get` would return `''` instead of `None`, it would be fine.
```python
' '.join(bbob.get(i, remote=False) or '' for i in [2, 13, 33])
```
works, but it's probably hard to find for any user inexperienced with Python.
|
non_process
|
return value of cocodataarchive get when we want to get some archived data by index like python import cocopp join cocopp bbob get i remote false for i in this raises an exception when the data have not been downloaded yet if get would return instead of none it would be fine python join bbob get i remote false or for i in works but it s probably hard to find for any user inexperienced with python
| 0
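The fix requested in the record above, a `get` that returns `''` instead of `None`, can be illustrated with a simplified stand-in class; `DataArchive` below is an assumption for illustration, not cocopp's real `COCODataArchive` implementation.
```python
class DataArchive:
    """Simplified stand-in for cocopp's COCODataArchive (assumption)."""

    def __init__(self, local_paths):
        # Maps index -> local file path for data already downloaded.
        self._local = dict(local_paths)

    def get(self, index, remote=False):
        """Return the local path for `index`.

        Returns '' (not None) when the data is not downloaded and
        remote fetching is disabled, so callers can join results
        directly without guarding against None.
        """
        if index in self._local:
            return self._local[index]
        if remote:
            raise NotImplementedError("download not shown in this sketch")
        return ""

archive = DataArchive({2: "data/bbob_2.tgz"})
print(" ".join(archive.get(i, remote=False) for i in [2, 13, 33]))
```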
|
16,475
| 21,409,540,024
|
IssuesEvent
|
2022-04-22 03:17:23
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Generate XYZ Tiles has missing data across the 180th meridian.
|
Feedback stale Processing Bug
|
<!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Generate XYZ Tiles (Directory) fails to render properly across the 180th meridian. I'm trying to create a map of Fiji which happens to intersect this line. The algorithm fails to capture data right on either side of the meridian as can be seen in this screenshot of my tiles in Openlayers.
<img width="583" alt="Screen Shot 2020-07-28 at 3 55 24 PM" src="https://user-images.githubusercontent.com/8892185/88617473-e15b4080-d0ea-11ea-9d05-e03b7049f78e.png">
**How to Reproduce**
Try to save XYZ tiles with an extent that crosses the 180th meridian.
<!-- Steps, sample datasets and qgis project file to reproduce the behavior. Screencasts or screenshots welcome
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error -->
**QGIS and OS versions**
QGIS version | 3.14.1-Pi | QGIS code revision | de08d6b71d
-- | -- | -- | --
Compiled against Qt | 5.12.3 | Running against Qt | 5.12.3
Compiled against GDAL/OGR | 2.4.1 | Running against GDAL/OGR | 2.4.1
Compiled against GEOS | 3.7.2-CAPI-1.11.2 | Running against GEOS | 3.7.2-CAPI-1.11.2 b55d2125
Compiled against SQLite | 3.28.0 | Running against SQLite | 3.28.0
PostgreSQL Client Version | 11.3 | SpatiaLite Version | 4.3.0a
QWT Version | 6.1.4 | QScintilla2 Version | 2.11.1
Compiled against PROJ | 5.2.0 | Running against PROJ | Rel. 5.2.0, September 15th, 2018
OS Version | macOS Mojave (10.14)
Active python plugins | processing; db_manager; MetaSearch
<!-- In the QGIS Help menu -> About, click in the table, Ctrl+A and then Ctrl+C. Finally paste here -->
**Additional context**
My guess is that there is some kind of numeric precision problem with data east of 179.9999 and west of -179.9999.
<!-- Add any other context about the problem here. -->
|
1.0
|
Generate XYZ Tiles has missing data across the 180th meridian. - <!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Generate XYZ Tiles (Directory) fails to render properly across the 180th meridian. I'm trying to create a map of Fiji which happens to intersect this line. The algorithm fails to capture data right on either side of the meridian as can be seen in this screenshot of my tiles in Openlayers.
<img width="583" alt="Screen Shot 2020-07-28 at 3 55 24 PM" src="https://user-images.githubusercontent.com/8892185/88617473-e15b4080-d0ea-11ea-9d05-e03b7049f78e.png">
**How to Reproduce**
Try to save XYZ tiles with an extent that crosses the 180th meridian.
<!-- Steps, sample datasets and qgis project file to reproduce the behavior. Screencasts or screenshots welcome
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error -->
**QGIS and OS versions**
QGIS version | 3.14.1-Pi | QGIS code revision | de08d6b71d
-- | -- | -- | --
Compiled against Qt | 5.12.3 | Running against Qt | 5.12.3
Compiled against GDAL/OGR | 2.4.1 | Running against GDAL/OGR | 2.4.1
Compiled against GEOS | 3.7.2-CAPI-1.11.2 | Running against GEOS | 3.7.2-CAPI-1.11.2 b55d2125
Compiled against SQLite | 3.28.0 | Running against SQLite | 3.28.0
PostgreSQL Client Version | 11.3 | SpatiaLite Version | 4.3.0a
QWT Version | 6.1.4 | QScintilla2 Version | 2.11.1
Compiled against PROJ | 5.2.0 | Running against PROJ | Rel. 5.2.0, September 15th, 2018
OS Version | macOS Mojave (10.14)
Active python plugins | processing; db_manager; MetaSearch
<!-- In the QGIS Help menu -> About, click in the table, Ctrl+A and then Ctrl+C. Finally paste here -->
**Additional context**
My guess is that there is some kind of numeric precision problem with data east of 179.9999 and west of -179.9999.
<!-- Add any other context about the problem here. -->
|
process
|
generate xyz tiles has missing data across the meridian bug fixing and feature development is a community responsibility and not the responsibility of the qgis project alone if this bug report or feature request is high priority for you we suggest engaging a qgis developer or support organisation and financially sponsoring a fix checklist before submitting search through existing issue reports and gis stackexchange com to check whether the issue already exists test with a create a light and self contained sample dataset and project file which demonstrates the issue describe the bug generate xyz tiles directory fails to render properly across the meridian i m trying to create a map of fiji which happens to intersect this line the algorithm fails to capture data right on either side of the meridian as can be seen in this screenshot of my tiles in openlayers img width alt screen shot at pm src how to reproduce try to save xyz tiles with an extent that crosses the meridian steps sample datasets and qgis project file to reproduce the behavior screencasts or screenshots welcome go to click on scroll down to see error qgis and os versions qgis version pi qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel september os version macos mojave active python plugins processing db manager metasearch about click in the table ctrl a and then ctrl c finally paste here additional context my guess is that there is some kind of number precision problem of data to the east of and west of
| 1
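One plausible mitigation for the antimeridian gap reported above is to split any extent that wraps across the 180th meridian into two ordinary extents before tiling. The helper below is a generic sketch of that idea, not QGIS's actual tile-generation code.
```python
def split_extent(xmin, xmax):
    """Split a longitude range that crosses the 180th meridian.

    Returns a list of (xmin, xmax) pieces that each stay within
    [-180, 180], so downstream tiling never sees a wrapped range.
    """
    if xmin <= xmax:          # ordinary extent, nothing to do
        return [(xmin, xmax)]
    return [(xmin, 180.0), (-180.0, xmax)]

# A Fiji-like extent wrapping the 180th meridian:
for piece in split_extent(177.0, -177.0):
    print(piece)              # (177.0, 180.0) then (-180.0, -177.0)
```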
|
283,182
| 8,717,628,393
|
IssuesEvent
|
2018-12-07 17:44:56
|
Stivius/XiboLinuxStack
|
https://api.github.com/repos/Stivius/XiboLinuxStack
|
closed
|
Refactor media and region classes
|
medium priority refactoring
|
The design of these classes is overcomplicated and needs to be refactored. We need an intermediate layer between media and region that should be called RegionContent (the media that is placed in the region).
Also, the event system should simplify the reaction (which can be different) of different media to the same event.
- [x] Event System
- [x] Implement intermediate layer RegionContent
- [x] Fix unit-tests
|
1.0
|
Refactor media and region classes - The design of these classes is overcomplicated and needs to be refactored. We need an intermediate layer between media and region that should be called RegionContent (the media that is placed in the region).
Also, the event system should simplify the reaction (which can be different) of different media to the same event.
- [x] Event System
- [x] Implement intermediate layer RegionContent
- [x] Fix unit-tests
|
non_process
|
refactor media and region classes the design of this classes is overcomplicated and needs to be refactored we need an intermediate layer between media and region that should be called regioncontent the media that is placed in the region also the event system should simplify the reaction which can be different of different media to the same event event system implement intermediate layer regioncontent fix unit tests
| 0
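The intermediate layer proposed in the record above can be sketched as simple composition: a Region dispatches events to RegionContent wrappers, each of which decides how its Media reacts. The project itself is C++; the Python classes below only illustrate the pattern, and every name apart from RegionContent is an assumption.
```python
class Media:
    """Bare media item; knows nothing about regions or events."""
    def render(self):
        print("rendering raw media")

class RegionContent:
    """Media as placed in a region: owns the reaction to events."""
    def __init__(self, media: Media):
        self.media = media

    def handle(self, event: str):
        # Different media types can override how they react here.
        if event == "start":
            self.media.render()

class Region:
    def __init__(self):
        self.contents: list[RegionContent] = []

    def dispatch(self, event: str):
        # The region never talks to Media directly anymore.
        for content in self.contents:
            content.handle(event)

region = Region()
region.contents.append(RegionContent(Media()))
region.dispatch("start")
```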
|
15,535
| 19,703,298,348
|
IssuesEvent
|
2022-01-12 18:54:25
|
googleapis/python-runtimeconfig
|
https://api.github.com/repos/googleapis/python-runtimeconfig
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'runtimeconfig' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'runtimeconfig' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname runtimeconfig invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
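The lint check behind the report above can be imagined as a few lines of validation: load `.repo-metadata.json` and compare `api_shortname` against an allow-list. The snippet below is a hypothetical reconstruction; the allow-list contents and function name are assumptions, not the actual go/github-automation tooling.
```python
import json

# Hypothetical allow-list; the real tool validates against Google's
# published API index, which is not reproduced here.
VALID_SHORTNAMES = {"bigquery", "storage", "pubsub"}

def lint_repo_metadata(path=".repo-metadata.json"):
    """Return True if api_shortname is valid, else print the finding."""
    with open(path) as f:
        meta = json.load(f)
    name = meta.get("api_shortname")
    if name not in VALID_SHORTNAMES:
        print(f"api_shortname '{name}' invalid in .repo-metadata.json")
        return False
    return True
```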
|
365,129
| 10,775,955,472
|
IssuesEvent
|
2019-11-03 17:34:27
|
CN-UPB/tng-sdk-benchmark
|
https://api.github.com/repos/CN-UPB/tng-sdk-benchmark
|
closed
|
Deploy a sample docker container on Openstack
|
priority: low
|
Deploy an example docker container (ex. Suricata container) on Openstack.
Responsible person: Bhuvan, with advice from Avi
|
1.0
|
Deploy a sample docker container on Openstack - Deploy an example docker container (ex. Suricata container) on Openstack.
Responsible person: Bhuvan, with advice from Avi
|
non_process
|
deploy a sample docker container on openstack deploy an example docker container ex suricata container on openstack responsible person bhuvan with advice from avi
| 0
|
145,875
| 13,163,751,345
|
IssuesEvent
|
2020-08-11 01:28:13
|
enzoampil/fastquant
|
https://api.github.com/repos/enzoampil/fastquant
|
opened
|
Add Custom Strategy Notebook / Technical Tutorial
|
documentation
|
### Tutorial notebook outline
**Tutorial title:** Backtest your forecasts with fastquant
**Tutorial summary:** We use the Prophet package to make baseline forecasts and use those as a custom indicator for fastquant.
Please use this checklist as a rough outline of prerequisites when submitting a new tutorial notebook to fastquant!
- [ ] Complete [front matter](https://github.com/fastai/fastpages#customizing-blog-posts-with-front-matter) (title, description, author, etc)
- [ ] Each section has at least some commentary to guide the reader
- [ ] All images, including graphs, and equations are displaying properly
- [ ] Code is expected to work for someone with fastquant [dependencies](https://github.com/enzoampil/fastquant/blob/master/python/requirements.txt) installed; otherwise, indicate the installation on the notebook.
- [ ] Each of the section headers have their first letter capitalized (e.g. *Define the search space*)
|
1.0
|
Add Custom Strategy Notebook / Technical Tutorial - ### Tutorial notebook outline
**Tutorial title:** Backtest your forecasts with fastquant
**Tutorial summary:** We use the Prophet package to make baseline forecasts and use those as a custom indicator for fastquant.
Please use this checklist as a rough outline of prerequisites when submitting a new tutorial notebook to fastquant!
- [ ] Complete [front matter](https://github.com/fastai/fastpages#customizing-blog-posts-with-front-matter) (title, description, author, etc)
- [ ] Each section has at least some commentary to guide the reader
- [ ] All images, including graphs, and equations are displaying properly
- [ ] Code is expected to work for someone with fastquant [dependencies](https://github.com/enzoampil/fastquant/blob/master/python/requirements.txt) installed; otherwise, indicate the installation on the notebook.
- [ ] Each of the section headers have their first letter capitalized (e.g. *Define the search space*)
|
non_process
|
add custom strategy notebook technical tutorial tutorial notebook outline tutorial title backtest your forecasts with fastquant tutorial summary we use the prophet package to make baseline forecasts and use those as a custom indicator for fastquant please use this checklist as a rough outline of prerequisites when submitting a new tutorial notebook to fastquant complete title description author etc each section has at least some commentary to guide the reader all images including graphs and equations are displaying properly code is expected to work for someone with fastquant installed otherwise indicate the installation on the notebook each of the section headers have their first letter capitalized e g define the search space
| 0
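The tutorial outlined above starts from Prophet baseline forecasts; a minimal version of that first step is sketched below. The toy dataframe is an assumption, and the sketch stops short of wiring the forecast into fastquant's custom-indicator interface, which the tutorial itself would cover.
```python
import pandas as pd
from prophet import Prophet  # older releases publish this as fbprophet

# Toy price series; a real tutorial would pull data with fastquant's
# data helpers instead (not shown here).
df = pd.DataFrame({
    "ds": pd.date_range("2020-01-01", periods=100, freq="D"),
    "y": pd.Series(range(100), dtype=float),
})

model = Prophet(daily_seasonality=False)
model.fit(df)
future = model.make_future_dataframe(periods=30)
forecast = model.predict(future)          # "yhat" holds the baseline
print(forecast[["ds", "yhat"]].tail())
```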
|
16,757
| 21,925,771,349
|
IssuesEvent
|
2022-05-23 03:54:17
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
closed
|
Java client fails to describe partitions that are `DEAD`
|
kind/bug scope/clients-java team/process-automation
|
**Describe the bug**
`PartitionInfoImpl` only accepts partitions that are `HEALTHY` or `UNHEALTHY` and throws an exception for partitions which are `DEAD`.
https://github.com/camunda/zeebe/blob/1edfe4f762ef700885859bf87e1370963d3448a1/clients/java/src/main/java/io/camunda/zeebe/client/impl/response/PartitionInfoImpl.java#L46-L56
**Expected behavior**
`PartitionInfoImpl` should also support the `DEAD` status and not throw an exception.
|
1.0
|
Java client fails to describe partitions that are `DEAD` - **Describe the bug**
`PartitionInfoImpl` only accepts partitions that are `HEALTHY` or `UNHEALTHY` and throws an exception for partitions which are `DEAD`.
https://github.com/camunda/zeebe/blob/1edfe4f762ef700885859bf87e1370963d3448a1/clients/java/src/main/java/io/camunda/zeebe/client/impl/response/PartitionInfoImpl.java#L46-L56
**Expected behavior**
`PartitionInfoImpl` should also support the `DEAD` status and not throw an exception.
|
process
|
java client fails to describe partitions that are dead describe the bug partitioninfoimpl only accepts partitions that are healthy or unhealthy and throws an exception for partitions which are dead expected behavior partitioninfoimpl should also support the dead status and not throw an exception
| 1
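The fix requested above amounts to making the status mapping total over all broker-reported values instead of throwing on `DEAD`. The actual client is Java; the Python enum below is only a language-neutral illustration of that idea, with hypothetical names.
```python
from enum import Enum

class PartitionBrokerHealth(Enum):
    HEALTHY = "HEALTHY"
    UNHEALTHY = "UNHEALTHY"
    DEAD = "DEAD"          # previously unhandled, caused the exception

def parse_health(raw: str) -> PartitionBrokerHealth:
    """Map a broker-reported status string without ever raising."""
    try:
        return PartitionBrokerHealth(raw)
    except ValueError:
        # Unknown future statuses should degrade gracefully, not crash.
        return PartitionBrokerHealth.UNHEALTHY

print(parse_health("DEAD"))   # PartitionBrokerHealth.DEAD
```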
|
14,301
| 8,552,812,464
|
IssuesEvent
|
2018-11-07 22:15:13
|
LLK/scratch-gui
|
https://api.github.com/repos/LLK/scratch-gui
|
closed
|
Performance Regression with Geometry Dash
|
bug has-patch needs-triage performance regression
|
The geometry dash project (https://llk.github.io/scratch-gui/develop/#105500895) is running slow again despite performance issues being fixed recently.
The `Player: FPS` monitor reads out `15` on llk.github.io/scratch-gui/develop, but runs at normal speed (30 FPS) on beta. This means that this change was introduced sometime in the last week (Thursday Oct 4th - Thursday Oct 11th).
|
True
|
Performance Regression with Geometry Dash - The geometry dash project (https://llk.github.io/scratch-gui/develop/#105500895) is running slow again despite performance issues being fixed recently.
The `Player: FPS` monitor reads out `15` on llk.github.io/scratch-gui/develop, but runs at normal speed (30 FPS) on beta. This means that this change was introduced sometime in the last week (Thursday Oct 4th - Thursday Oct 11th).
|
non_process
|
performance regression with geometry dash the geometry dash project is running slow again despite performance issues being fixed recently the player fps monitor reads out on llk github io scratch gui develop but runs at normal speed fps on beta this means that this change was introduced sometime in the last week thursday oct thursday oct
| 0
|
19,619
| 25,971,851,557
|
IssuesEvent
|
2022-12-19 11:55:40
|
firebase/firebase-cpp-sdk
|
https://api.github.com/repos/firebase/firebase-cpp-sdk
|
closed
|
[C++] Nightly Integration Testing Report for Firestore
|
type: process nightly-testing
|
<hidden value="integration-test-status-comment"></hidden>
### [build against repo] Integration test with FLAKINESS (succeeded after retry)
Requested by @sunmou99 on commit 54271d844b95b19562060d4bfdf1b6963632dde9
Last updated: Sun Dec 18 04:02 PST 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3724315020)**
| Failures | Configs |
|----------|---------|
| firestore | [TEST] [FLAKINESS] [Android] [1/3 os: macos] [1/4 android_device: android_target]<details><summary>(1 failed tests)</summary> CRASH/TIMEOUT</details> |
Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit 54271d844b95b19562060d4bfdf1b6963632dde9
Last updated: Sun Dec 18 05:46 PST 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3724732671)**
|
1.0
|
[C++] Nightly Integration Testing Report for Firestore -
<hidden value="integration-test-status-comment"></hidden>
### [build against repo] Integration test with FLAKINESS (succeeded after retry)
Requested by @sunmou99 on commit 54271d844b95b19562060d4bfdf1b6963632dde9
Last updated: Sun Dec 18 04:02 PST 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3724315020)**
| Failures | Configs |
|----------|---------|
| firestore | [TEST] [FLAKINESS] [Android] [1/3 os: macos] [1/4 android_device: android_target]<details><summary>(1 failed tests)</summary> CRASH/TIMEOUT</details> |
Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit 54271d844b95b19562060d4bfdf1b6963632dde9
Last updated: Sun Dec 18 05:46 PST 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3724732671)**
|
process
|
nightly integration testing report for firestore integration test with flakiness succeeded after retry requested by on commit last updated sun dec pst failures configs firestore failed tests nbsp nbsp crash timeout add flaky tests to ✅ nbsp integration test succeeded requested by firebase workflow trigger on commit last updated sun dec pst
| 1
|
270,047
| 8,445,844,611
|
IssuesEvent
|
2018-10-18 23:11:56
|
robot-lab/judyst-main-web-service
|
https://api.github.com/repos/robot-lab/judyst-main-web-service
|
opened
|
Create 2 pages for password recovery
|
area/front-end priority/high type/feature type/task
|
# Task request
## Task goal
Create pages for recovering a forgotten password.
## Task solution
The first page should contain a single field for entering an email. The second page contains fields for changing the password. Both have links to the login page and the main page.
## Additional context or links to issues related to this task
|
1.0
|
Create 2 pages for password recovery - # Task request
## Task goal
Create pages for recovering a forgotten password.
## Task solution
The first page should contain a single field for entering an email. The second page contains fields for changing the password. Both have links to the login page and the main page.
## Additional context or links to issues related to this task
|
non_process
|
create pages for password recovery task request task goal create pages for recovering a forgotten password task solution the first page should contain a single field for entering an email the second page contains fields for changing the password both have links to the login page and the main page additional context or links to issues related to this task
| 0
|
500,344
| 14,496,643,537
|
IssuesEvent
|
2020-12-11 13:06:24
|
kubermatic/machine-controller
|
https://api.github.com/repos/kubermatic/machine-controller
|
closed
|
Flatcar support in the machine controller (Packet) for k8s 1.17.3, 1.18.10 and 1.19.0 is broken
|
kind/bug priority/normal team/lifecycle
|
Since k8s 1.17.3, flatcar machines are being created in the packet cloud provider; however, they fail to join the cluster due to some kubelet-related issues.
|
1.0
|
Flatcar support in the machine controller (Packet) for k8s 1.17.3, 1.18.10 and 1.19.0 is broken - Since k8s 1.17.3, flatcar machines are being created in the packet cloud provider; however, they fail to join the cluster due to some kubelet-related issues.
|
non_process
|
flatcar support in the machine controller packet for and is broken since flatcar machines are being created in the packet cloud provider however they fail to join the cluster due to some kubelet related issues
| 0
|
12,458
| 14,935,730,482
|
IssuesEvent
|
2021-01-25 12:23:20
|
threefoldtech/js-sdk
|
https://api.github.com/repos/threefoldtech/js-sdk
|
closed
|
Adding a worker node to an existing cluster resulted in failure while capacity is available in testnet.
|
process_wontfix
|
Adding a worker node failed:

However, in freefarm there is an empty server.

So my guess is that the node selection algorithm does not take into account all available nodes/capacity.
|
1.0
|
Adding a worker node to an existing cluster resulted in failure while capacity is available in testnet. - Adding a worker node failed:

However, in freefarm there is an empty server.

So my guess is that the node selection algorithm does not take into account all available nodes/capacity.
|
process
|
adding a worker node to an existing cluster resulted in failure while capacity is available in testnet adding a worker node failed however in freefarm there is an empty server so my guess is that the node selection algorithm does not take into account all available nodes capacity
| 1
|
6,029
| 8,837,405,082
|
IssuesEvent
|
2019-01-05 04:18:38
|
jwowillo/greenerthumb
|
https://api.github.com/repos/jwowillo/greenerthumb
|
opened
|
Make a Sender Selector
|
process
|
This may involve renaming 'select' in process to something related to selecting message types.
|
1.0
|
Make a Sender Selector - This may involve renaming 'select' in process to something related to selecting message types.
|
process
|
make a sender selector this may involve renaming select in process to something related to selecting messages types
| 1
|
106,550
| 4,274,273,245
|
IssuesEvent
|
2016-07-13 19:59:29
|
VsevolodTrofimov/P-app
|
https://api.github.com/repos/VsevolodTrofimov/P-app
|
opened
|
Smart ajax module
|
low priority
|
Send request
If failed resend
If failed wait and resend
---
If server error send bug report
If no_connection save to local storage and retry while possible
|
1.0
|
Smart ajax module - Send request
If failed resend
If failed wait and resend
---
If server error send bug report
If no_connection save to local storage and retry while possible
|
non_process
|
smart ajax module send request if failed resend if failed wait and resend if server error send bug report if no connection save to local storage and retry while possible
| 0
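The retry policy listed in the record above (resend, then wait and resend, with an offline queue) can be sketched generically; `send_with_retry` and its parameters below are illustrative assumptions, not an actual module API.
```python
import time

def send_with_retry(send, payload, retries=2, delay=1.0, offline_queue=None):
    """Try `send(payload)`; on failure retry, waiting before later attempts.

    `send` should raise ConnectionError on failure. If every attempt
    fails, the payload is queued (e.g. for local storage) so it can be
    retried later while possible. Server errors would instead trigger
    a bug report, which this sketch omits.
    """
    for attempt in range(retries + 1):
        try:
            return send(payload)
        except ConnectionError:
            if attempt < retries:
                time.sleep(delay)      # "wait and resend"
    if offline_queue is not None:
        offline_queue.append(payload)  # persist for a later retry
    return None
```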
|
1,179
| 3,681,568,911
|
IssuesEvent
|
2016-02-24 04:12:09
|
18F/FEC
|
https://api.github.com/repos/18F/FEC
|
closed
|
Positive: Calendar features
|
processed
|
## What were you trying to do and how can we improve it?
I was looking at your new calendar features
## General feedback?
I like the new additions
## Tell us about yourself
I'm a new visitor to the FEC page
## Details
* URL: https://fec-proxy.18f.gov/calendar/
* User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:44.0) Gecko/20100101 Firefox/44.0
|
1.0
|
Positive: Calendar features -
## What were you trying to do and how can we improve it?
I was looking at your new calendar features
## General feedback?
I like the new additions
## Tell us about yourself
I'm a new visitor to the FEC page
## Details
* URL: https://fec-proxy.18f.gov/calendar/
* User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:44.0) Gecko/20100101 Firefox/44.0
|
process
|
positive calendar features what were you trying to do and how can we improve it i was looking at your new calendar features general feedback i like the new additions tell us about yourself i m a new visitor to the fec page details url user agent mozilla macintosh intel mac os x rv gecko firefox
| 1
|
33,250
| 14,019,228,858
|
IssuesEvent
|
2020-10-29 17:53:14
|
Azure/azure-sdk-for-python
|
https://api.github.com/repos/Azure/azure-sdk-for-python
|
closed
|
[ServiceBus] Align stress tests to cross-language min-bar before GA
|
Client Service Bus
|
Towards GA fit-and-finish: Ensure our stress test coverage is on par with other SDK priorities.
**Message lock renewal**
Keep sending messages in a stream, keep receiving them and
○ manually keep renewing the lock for X duration
○ Use auto renew for X duration
Variation - load the queue with a set of messages initially
Snapshot
- Time stamp
- Number of operations performed
- Number of successes in X duration
- Number of failures in X duration
- Errors seen (Dump all the errors seen in a separate file at the end)
- Also include the snapshot from scenario 4
- Memory consumed
More thoughts
- Expectation is that it never fails in the X duration, observe if it fails?
- How many messages can we handle lock renewals for?
- Lock renewals over a long duration - reliability
**Session lock renewal**
Multiple sessions
- manually keep renewing the lock for X duration
- Use auto renew for X duration
Similar to above... but on session lock
**Single sender**
○ Loop over sendMessages for X duration with Y delay in between
○ Loop over Z parallel sendMessages for X duration with Y delay
Large messages
• Array of messages
• Batch message
Snapshot
○ Time stamp
○ Number of messages sent so far
○ Number of messages per sec
○ Number of sends per sec
○ Number of successes in X duration
○ Number of failures in X duration
○ Errors seen (Dump all the errors seen in a separate file at the end)
More Thoughts
○ Client should handle multiple sends in parallel? How many?
○ Client should work for sending for a long duration and see for any failures - reliability
○ Stretch goal - Send latency (requires internal instrumentation - Account only for the time the SDK takes ...to ignore service/network latencies)
**Single Receiver**
(Note the Sequence number and match with the received ones)
Keep sending messages in a stream, keep receiving them with a single receiver
○ ReceiveBatch in a loop with a single receiver for X duration(X=3hs)
§ Peeklock (random settlement method)
§ receiveAndDelete
§ maxMessageCount = 1 and Y
○ Streaming receiver left open for X duration to keep receiving the messages
§ peekLock (random settlement method)
§ receiveAndDelete
§ maxConcurrentCalls = 1 and Y
(As you increase the number, it should scale up)
Validation
Snapshot
○ Time elapsed
○ Number of messages sent so far
○ Number of messages received so far
○ Number of messages sent/received per sec
○ Number of successes in sending/receiving in X duration
○ Number of failures in sending/receiving in X duration
○ Number of messages per sec
○ Number of sends per sec
○ Number of receives per sec
○ Errors seen (Dump all the errors seen in a separate file at the end)
More Thoughts
○ Expectation is that we don't lose messages
○ Receiver is capable of receiving all the messages without breaking in between - reliability
○ Receive latency
**Any of the managementLink operations**
(Validate the sequence numbers)
Keep making peekMessage calls for X duration with Y delay in between
Snapshot
○ Time elapsed
○ Number of messages sent
○ Number of messages peeked so far
○ Number of successes in X duration
○ Number of failures in X duration
○ Errors seen (Dump all the errors seen in a separate file at the end)
More Thoughts
○ Stressing the managementLink
○ Difference b/w scenario-1 is that this deals with the data
○ Use fromSequenceNumber API
○ Client should work for a long duration and see for any failures - reliability
**Relaxed tests - X is relatively longer (1hr/1day)**
Do an operation, wait for X duration, do the operation again, repeat
Operation can be
○ Send
○ Receive
min_duration < X < max_duration
Snapshot
- Same as scenario 4
More Thoughts
- Implementation wise, scenario 3 and 4 would cover this
- Expectation is that the operation doesn't fail even if done after longer idle intervals
**Closes and Opens**
Create, open, close in sequence - repeat for X duration
○ Sender
○ Receiver
○ Session receiver
Variation - add closing the client too
Snapshot
- Include snapshot from scenario 4
- For senders/receivers/session-receivers on a single client
○ Number of close() calls made
○ Number of failures for close()
○ Number of successes for close()
○ Number of create() calls
○ Number of failures for create()
○ Number of successes for create()
- Errors seen (Dump all the errors seen in a separate file at the end)
More Thoughts
- Expectation is that closes and opens are graceful - observe if it fails
**Same as above + make minor calls like send, receive in between**
**Pull receive reconnect**
**- Iterator timeout - python specific**
|
1.0
|
[ServiceBus] Align stress tests to cross-language min-bar before GA - Towards GA fit-and-finish: Ensure our stress test coverage is on par with other SDK priorities.
**Message lock renewal**
Keep sending messages in a stream, keep receiving them and
○ manually keep renewing the lock for X duration
○ Use auto renew for X duration
Variation - load the queue with a set of messages initially
Snapshot
- Time stamp
- Number of operations performed
- Number of successes in X duration
- Number of failures in X duration
- Errors seen (Dump all the errors seen in a separate file at the end)
- Also include the snapshot from scenario 4
- Memory consumed
More thoughts
- Expectation is that it never fails in the X duration, observe if it fails?
- How many messages can we handle lock renewals for?
- Lock renewals over a long duration - reliability
**Session lock renewal**
Multiple sessions
- manually keep renewing the lock for X duration
- Use auto renew for X duration
Similar to above... but on session lock
**Single sender**
○ Loop over sendMessages for X duration with Y delay in between
○ Loop over Z parallel sendMessages for X duration with Y delay
Large messages
• Array of messages
• Batch message
Snapshot
○ Time stamp
○ Number of messages sent so far
○ Number of messages per sec
○ Number of sends per sec
○ Number of successes in X duration
○ Number of failures in X duration
○ Errors seen (Dump all the errors seen in a separate file at the end)
More Thoughts
○ Client should handle multiple sends in parallel? How many?
○ Client should work for sending for a long duration and see for any failures - reliability
○ Stretch goal - Send latency (requires internal instrumentation - Account only for the time the SDK takes ...to ignore service/network latencies)
**Single Receiver**
(Note the Sequence number and match with the received ones)
Keep sending messages in a stream, keep receiving them with a single receiver
○ ReceiveBatch in a loop with a single receiver for X duration(X=3hs)
§ Peeklock (random settlement method)
§ receiveAndDelete
§ maxMessageCount = 1 and Y
○ Streaming receiver left open for X duration to keep receiving the messages
§ peekLock (random settlement method)
§ receiveAndDelete
§ maxConcurrentCalls = 1 and Y
(As you increase the number, it should scale up)
Validation
Snapshot
○ Time elapsed
○ Number of messages sent so far
○ Number of messages received so far
○ Number of messages sent/received per sec
○ Number of successes in sending/receiving in X duration
○ Number of failures in sending/receiving in X duration
○ Number of messages per sec
○ Number of sends per sec
○ Number of receives per sec
○ Errors seen (Dump all the errors seen in a separate file at the end)
More Thoughts
○ Expectation is that we don't lose messages
○ Receiver is capable of receiving all the messages without breaking in between - reliability
○ Receive latency
**Any of the managementLink operations**
(Validate the sequence numbers)
Keep making peekMessage calls for X duration with Y delay in between
Snapshot
○ Time elapsed
○ Number of messages sent
○ Number of messages peeked so far
○ Number of successes in X duration
○ Number of failures in X duration
○ Errors seen (Dump all the errors seen in a separate file at the end)
More Thoughts
○ Stressing the managementLink
○ Difference b/w scenario-1 is that this deals with the data
○ Use fromSequenceNumber API
○ Client should work for a long duration and see for any failures - reliability
**Relaxed tests - X is relatively longer (1hr/1day)**
Do an operation, wait for X duration, do the operation again, repeat
Operation can be
○ Send
○ Receive
min_duration < X < max_duration
Snapshot
- Same as scenario 4
More Thoughts
- Implementation wise, scenario 3 and 4 would cover this
- Expectation is that the operation doesn't fail even if done after longer idle intervals
**Closes and Opens**
Create, open, close in sequence - repeat for X duration
○ Sender
○ Receiver
○ Session receiver
Variation - add closing the client too
Snapshot
- Include snapshot from scenario 4
- For senders/receivers/session-receivers on a single client
○ Number of close() calls made
○ Number of failures for close()
○ Number of successes for close()
○ Number of create() calls
○ Number of failures for create()
○ Number of successes for create()
- Errors seen (Dump all the errors seen in a separate file at the end)
More Thoughts
- Expectation is that closes and opens are graceful - observe if it fails
**Same as above + make minor calls like send, receive in between**
**Pull receive reconnect**
**- Iterator timeout - python specific**
|
non_process
|
align stress tests to cross language min bar before ga towards ga fit and finish ensure our stress test coverage is on par with other sdk priorities message lock renewal keep sending messages in a stream keep receiving them and ○ manually keep renewing the lock for x duration ○ use auto renew for x duration variation load the queue with a set of messages initially snapshot time stamp number of operations performed number of successes in x duration number of failures in x duration errors seen dump all the errors seen in a separate file at the end also include the snapshot from scenario memory consumed more thoughts expectation is that it never fails in the x duration observe if it fails how many messages can we handle lock renewals for lock renewals over a long duration reliability session lock renewal multiple sessions manually keep renewing the lock for x duration use auto renew for x duration similar to above but on session lock single sender ○ loop over sendmessages for x duration with y delay in between ○ loop over z parallel sendmessages for x duration with y delay large messages • array of messages • batch message snapshot ○ time stamp ○ number of messages sent so far ○ number of messages per sec ○ number of sends per sec ○ number of successes in x duration ○ number of failures in x duration ○ errors seen dump all the errors seen in a separate file at the end more thoughts ○ client should handle multiple sends in parallel how many ○ client should work for sending for a long duration and see for any failures reliability ○ stretch goal send latency requires internal instrumentation account only for the time the sdk takes to ignore service network latencies single receiver note the sequence number and match with the received ones keep sending messages in a stream keep receiving them with a single receiver ○ receivebatch in a loop with a single receiver for x duration x § peeklock random settlement method § receiveanddelete § maxmessagecount and y ○ streaming receiver left open for x duration to keep receiving the messages § peeklock random settlement method § receiveanddelete § maxconcurrentcalls and y as you increase the number it should scale up validation snapshot ○ time elapsed ○ number of messages sent so far ○ number of messages received so far ○ number of messages sent received per sec ○ number of successes in sending receiving in x duration ○ number of failures in sending receiving in x duration ○ number of messages per sec ○ number of sends per sec ○ number of receives per sec ○ errors seen dump all the errors seen in a separate file at the end more thoughts ○ expectation is that we don t lose messages ○ receiver is capable of receiving all the messages without breaking in between reliability ○ receive latency any of the managementlink operations validate the sequence numbers keep making peekmessage calls for x duration with y delay in between snapshot ○ time elapsed ○ number of messages sent ○ number of messages peeked so far ○ number of successes in x duration ○ number of failures in x duration ○ errors seen dump all the errors seen in a separate file at the end more thoughts ○ stressing the managementlink ○ difference b w scenario is that this deals with the data ○ use fromsequencenumber api ○ client should work for a long duration and see for any failures reliability relaxed tests x is relatively longer do an operation wait for x duration do the operation again repeat operation can be ○ send ○ receive min duration x max duration snapshot same as scenario more thoughts 
implementation wise scenario and would cover this expectation is that the operation doesn t fail even if done after longer idle intervals closes and opens create open close in sequence repeat for x duration ○ sender ○ receiver ○ session receiver variation add closing the client too snapshot include snapshot from scenario for senders receivers session receivers on a single client ○ number of close calls made ○ number of failures for close ○ number of successes for close ○ number of create calls ○ number of failures for create ○ number of successes for create errors seen dump all the errors seen in a separate file at the end more thoughts expectation is that closes and opens are graceful observe if it fails same as above make minor calls like send receive in between pull receive reconnect iterator timeout python specific
| 0
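Most scenarios above share the same "Snapshot" bookkeeping: timestamp, operation counts, successes and failures per duration, and an error dump to a separate file. The harness below is a minimal generic sketch of that bookkeeping under those assumptions; it is not the Azure SDK's stress framework, and the wrapped operation is a placeholder.
```python
import time
import traceback

class Snapshot:
    """Counters shared by the stress scenarios described above."""

    def __init__(self):
        self.ops = self.successes = self.failures = 0
        self.errors = []

    def run_for(self, seconds, operation):
        """Repeatedly invoke `operation` (a zero-arg callable) for `seconds`."""
        start = time.monotonic()
        while time.monotonic() - start < seconds:
            self.ops += 1
            try:
                operation()
                self.successes += 1
            except Exception:
                self.failures += 1
                self.errors.append(traceback.format_exc())
        return self

    def report(self, error_file="errors.log"):
        # Time stamp, operations performed, successes, failures.
        print(time.strftime("%Y-%m-%d %H:%M:%S"),
              self.ops, self.successes, self.failures)
        with open(error_file, "w") as f:   # dump all errors separately
            f.write("\n".join(self.errors))
```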
|
329,585
| 24,228,137,703
|
IssuesEvent
|
2022-09-26 15:53:14
|
Sage-Bionetworks/challenge-registry
|
https://api.github.com/repos/Sage-Bionetworks/challenge-registry
|
opened
|
Model name suffix "Dto" should not be visible in Swagger UI
|
bug documentation api/spec java
|
I specify the following property in the config of the OpenAPI generator so that the classes generated in the `model.dto` package are suffixed with "Dto".
```json
"modelNameSuffix": "Dto"
```
A side effect is that Springdoc / Swagger UI uses the name of the class for the schemas instead of the schema name defined in the API specification.

|
1.0
|
Model name suffix "Dto" should not be visible in Swagger UI - I specify the following property in the config of the OpenAPI generator so that the classes generated in the `model.dto` package are suffixed with "Dto".
```json
"modelNameSuffix": "Dto"
```
A side effect is that Springdoc / Swagger UI uses the name of the class for the schemas instead of the schema name defined in the API specification.

|
non_process
|
model name suffix dto should not be visible in swagger ui i specify the following property to the config of the openapi generator so that the classes generated in the model dto package are suffixed with dto json modelnamesuffix dto a side effect is that springdoc swagger ui uses the name of the class for the schemas instead of the schema name defined in the api specification
| 0
|
22,662
| 31,895,964,338
|
IssuesEvent
|
2023-09-18 01:44:32
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - disposition
|
Term - change normative Task Group - Material Sample Process - complete Class - MaterialEntity
|
## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Disposition is currently organized in the Occurrence class. Occurrences are not considered to have dispositions; however, the evidence obtained from them does. Organizing this term with MaterialEntity will also provide for its use with any existing classes of material things within Darwin Core, as it would be understood that MaterialEntity would be an informal superclass to `dwc:MaterialSample`, `dwc:PreservedSpecimen`, `dwc:LivingSpecimen`, `dwc:FossilSpecimen`. The examples make clear the intended usage of the term. It is a very specific meaning of the word "disposition" with respect to the availability of an artefact in a collection. The definition change reflects this intent.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): Usage as currently occurs in Global Biodiversity Information Facility (GBIF) Darwin Core Archives would not be affected by these changes. Darwin Core does not include formal class hierarchies, but if we ignore that formality and imagine what the hierarchy would look like for the classes, we have MaterialEntity as the highest for material things. All of the other material-based classes in Darwin Core (`dwc:MaterialSample`, `dwc:PreservedSpecimen`, `dwc:LivingSpecimen`, `dwc:FossilSpecimen`) might be expected to have dispositions. As there are no other classes in between MaterialEntity and those subtypes, disposition is best organized with the MaterialEntity.
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_disposition
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): disposition
* Term label (English, not normative): Disposition
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): **MaterialEntity** ~~Occurrence~~
* Definition of the term (normative): The current state of a **dwc:MaterialEntity**~~specimen~~ with respect to **a**~~the~~ collection~~identified in collectionCode or collectionID~~.
* Usage comments (recommendations regarding content, etc., not normative): Recommended best practice is to use a controlled vocabulary.
* Examples (not normative): in collection, missing, ~~voucher elsewhere, duplicates elsewhere~~**on loan, used up, destroyed, deaccessioned**
* Refines (identifier of the broader term this term refines; normative):
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): DataSets/DataSet/Units/Unit/SpecimenUnit/Disposition
|
1.0
|
Change term - disposition - ## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Disposition is currently organized in the Occurrence class. Occurrences are not considered to have dispositions; however, the evidence obtained from them does. Organizing this term with MaterialEntity will also provide for its use with any existing classes of material things within Darwin Core, as it would be understood that MaterialEntity would be an informal superclass to `dwc:MaterialSample`, `dwc:PreservedSpecimen`, `dwc:LivingSpecimen`, `dwc:FossilSpecimen`. The examples make clear the intended usage of the term. It is a very specific meaning of the word "disposition" with respect to the availability of an artefact in a collection. The definition change reflects this intent.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): Usage as currently occurs in Global Biodiversity Information Facility (GBIF) Darwin Core Archives would not be affected by these changes. Darwin Core does not include formal class hierarchies, but if we ignore that formality and imagine what the hierarchy would look like for the classes, we have MaterialEntity as the highest for material things. All of the other material-based classes in Darwin Core (`dwc:MaterialSample`, `dwc:PreservedSpecimen`, `dwc:LivingSpecimen`, `dwc:FossilSpecimen`) might be expected to have dispositions. As there are no other classes in between MaterialEntity and those subtypes, disposition is best organized with the MaterialEntity.
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_disposition
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): disposition
* Term label (English, not normative): Disposition
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): **MaterialEntity** ~~Occurrence~~
* Definition of the term (normative): The current state of a **dwc:MaterialEntity**~~specimen~~ with respect to **a**~~the~~ collection~~identified in collectionCode or collectionID~~.
* Usage comments (recommendations regarding content, etc., not normative): Recommended best practice is to use a controlled vocabulary.
* Examples (not normative): in collection, missing, ~~voucher elsewhere, duplicates elsewhere~~**on loan, used up, destroyed, deaccessioned**
* Refines (identifier of the broader term this term refines; normative):
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): DataSets/DataSet/Units/Unit/SpecimenUnit/Disposition
|
process
|
change term disposition term change submitter efficacy justification why is this change necessary disposition is currently organized in the occurrence class occurrences are not considered to have dispositions however the evidence obtained from them do organizing this term with materialentity will also provide for its use with any existing classes of material things within darwin core as it would be understood that materialentity would be an informal superclass to dwc materialsample dwc preservedspecimen dwc livingspecimen dwc fossilspecimen the examples make clear the intended usage of the term it is a very specific meaning of the word disposition with respect to the availability of an artefact in a collection the definition change reflects this intent demand justification if the change is semantic in nature name at least two organizations that independently need this term which includes representatives of over organizations stability justification what concerns are there that this might affect existing implementations usage as currently occurs in global biodiversity information facility gbif darwin core archives would not be affected by these changes darwin core does not include formal class hierarchies but if we ignore that formality and imagine what the hierarchy would look like for the classes we have materialentity as the highest for material things all of the other material based classes in darwin core dwc materialsample dwc preservedspecimen dwc livingspecimen dwc fossilspecimen might be expected to have dispositions as there are no other classes in between materialentity and those subtypes disposition is best organized with the materialentity implications for dwciri namespace does this change affect a dwciri term version no current term definition proposed attributes of the new term version please put actual changes to be implemented in bold and strikethrough term name in lowercamelcase for properties uppercamelcase for classes disposition term label english not normative disposition organized in class e g occurrence event location taxon materialentity occurrence definition of the term normative the current state of a dwc materialentity specimen with respect to a the collection identified in collectioncode or collectionid usage comments recommendations regarding content etc not normative recommended best practice is to use a controlled vocabulary examples not normative in collection missing voucher elsewhere duplicates elsewhere on loan used up destroyed deaccessioned refines identifier of the broader term this term refines normative replaces identifier of the existing term that would be deprecated and replaced by this term normative none abcd xpath of the equivalent term in abcd or efg not normative datasets dataset units unit specimenunit disposition
| 1
|
18,036
| 5,556,909,940
|
IssuesEvent
|
2017-03-24 10:29:45
|
OpenRoberta/robertalab
|
https://api.github.com/repos/OpenRoberta/robertalab
|
closed
|
Separate EV3lejos and EV3dev as two different robot plugins
|
code cleanup enhancement ev3dev lejos
|
Treating the two systems as two different robot plugins will make the workflow much easier for developers and users to understand.
|
1.0
|
Separate EV3lejos and EV3dev as two different robot plugins - Treating the two systems as two different robot plugins will make the workflow much easier for developers and users to understand.
|
non_process
|
separate and as two different robot plugins treating the both systems as two different robots plugins will make the workflow for developers and users much easier to understand
| 0
|
12,431
| 14,927,943,905
|
IssuesEvent
|
2021-01-24 17:20:01
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Upcoming > Custom schedule should show first run in the list
|
Bug P2 Process: Fixed Process: Tested dev iOS
|
Steps:
1. Schedule a custom schedule having multiple runs
2. Navigate to upcoming
3. Observe the Start and End date
Actual: Currently showing first run's start date & time and last run's end date & time
Expected: Custom schedule should show first run in the list
iOS:

Android for reference:

|
2.0
|
[iOS] Upcoming > Custom schedule should show first run in the list - Steps:
1. Schedule a custom schedule having multiple runs
2. Navigate to upcoming
3. Observe the Start and End date
Actual: Currently showing first run's start date & time and last run's end date & time
Expected: Custom schedule should show first run in the list
iOS:

Android for reference:

|
process
|
upcoming custom schedule should show first run in the list steps schedule a custom schedule having multiple runs navigate to upcoming observe the start and end date actual currently showing first run s start date time and last run s end date time expected custom schedule should show first run in the list ios android for reference
| 1
|
9,228
| 12,259,957,486
|
IssuesEvent
|
2020-05-06 17:27:55
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Missing information on package resources
|
Pri1 devops-cicd-process/tech devops/prod doc-enhancement
|
Hi, above it is mentioned that there are "package" resources, but I cannot find any documentation about them. Can you fix that?
Thank you
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ee4ec9d0-e0d5-4fb4-7c3e-b84abfa290c2
* Version Independent ID: 3e2b80d9-30e5-0c48-49f0-4fcdfedf5eee
* Content: [Resources - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema)
* Content Source: [docs/pipelines/process/resources.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/resources.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Missing information on package resources - Hi, above it is mentioned that there are "package" resources, but I cannot find any documentation about them. Can you fix that?
Thank you
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ee4ec9d0-e0d5-4fb4-7c3e-b84abfa290c2
* Version Independent ID: 3e2b80d9-30e5-0c48-49f0-4fcdfedf5eee
* Content: [Resources - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema)
* Content Source: [docs/pipelines/process/resources.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/resources.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
missing information on package resources hi above it is mentioned that there are package resources but i cannot find any documentation about it can you fix that thank you document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
7,317
| 10,452,828,621
|
IssuesEvent
|
2019-09-19 15:22:21
|
AnalyticalGraphicsInc/cesium
|
https://api.github.com/repos/AnalyticalGraphicsInc/cesium
|
closed
|
PostProcessStage shader function czm_selected sometimes picks wrong selectedId from czm_selectedIdTexture for certain selection indices
|
category - post-processing type - bug
|
PostProcessStage shader function czm_selected sometimes picks wrong selectedId from czm_selectedIdTexture for certain selection indices.
Expected: With the sandcastle below, every billboard should have a white shadow billboard in front of it. Change the value of 'limit' in the sandcastle to see different sets of selected items being missed.
Seems to be caused by floating-point issues. Changing the for loop in czm_selected to a float index starting from 0.5, or changing 'float(i)' to 'float(i) + 0.5', should resolve the issue.
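To make the failure mode concrete, here is a minimal Python sketch (the `fetch` helper, `WIDTH`, and `EPS` are illustrative assumptions, not Cesium's shader code) of why an edge-aligned coordinate like 'float(i)' misreads a nearest-neighbour ID texture while the texel-centered 'float(i) + 0.5' tolerates small rounding error:

```python
import math

WIDTH = 64   # hypothetical width of the 1-D selected-id texture
EPS = 1e-6   # representative negative rounding error in shader float math

def fetch(u):
    """Nearest-neighbour fetch: normalized coordinate -> texel index."""
    return min(max(math.floor(u * WIDTH), 0), WIDTH - 1)

for i in range(WIDTH):
    u_edge = i / WIDTH - EPS            # 'float(i)': left edge of texel i
    u_center = (i + 0.5) / WIDTH - EPS  # 'float(i) + 0.5': center of texel i
    assert fetch(u_center) == i         # centered lookup still hits texel i
    if i > 0:
        assert fetch(u_edge) == i - 1   # edge lookup reads the wrong texel

print("edge-aligned lookups go off by one under a tiny negative error;")
print("texel-centered lookups do not")
```

The centered coordinate has a half-texel error margin on both sides, which is why starting the loop index at 0.5 (or adding 0.5 to 'float(i)') resolves the wrong selectedId.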
Sandcastle example:
https://cesiumjs.org/Cesium/Build/Apps/Sandcastle/#c=hVVtb5swEP4rVr4EpsisVfulSatt6V46TWq1VNuHUkUOHMStsZFtkiYV/31nIJBkbEUJsu/uubvnfD5WTJMVhzVockkkrMkUDC8y+quSecOo2k6VtIxL0EN/HMpQrhAVM8tmqtARILJBfQX13Sh53aqoUCz2hpQG7e9jnptgxrJcgLMLDHdLYzf4oimoJ3RQhanTol0cQ1kce91+z2irVHavjnS7TBPN0gyknS1ZDLrNeUjCMJSF5InSGTFVRvr0msSQ2+U9vNhCw/gfNpESSvfaJMjYEgEytctDzQqisxpYy90f09twmTrdKVnNbe1xqpSOuWQWzLi1VDwmGR6C51a+E726F8GnQiMWBNJyG++EvicBibbZ3BUoV9rS7Yj0SNf+eOeFJ8RzOgMCIguxV3t8V3s8H537vk/amO5JxfwL1nbqOGHkmtu+gQakI1tRuZfwGVGat9CG+Om1t1/ZUW9JuoyP4rcOK4Ny2LaqsSwF42pTd4uJQALNlbF3WmFfmVllMG6suViqAqx1TVJDq8bbux13R1DvtePsnsOOu+jtwNEhpOkSc0GOfLmnKsrFLnrFkUZCSfAORL+/3dx/9g/hZbctfXcrOnp0d9LI8+GxrZbgGbcoOjk/Q1kTYL0ESZkQ3kN3xx5HJClkZLmSxPN3eTc11iCRquciOml/1NvFEy7pM2yMd3A2EQ4cPHg6z3n0XFuZhhhNuLDouQuNcH+/anXXkTcdPiDwkeba8eUrIFziacsIVLKr9CcuxEIxHTcsyv/lMEIHMbz0pVIpyKQu7bGvjOVvk3l1ed/E2B19vZBolf1MF6xCl2UboVkgeaOEG8ap13MQtJ5Wzrqs5+ZgNJhUE/mqdvABZzROC1Jo4eEIt4Bz0F3FYFFEz2BpZNytlJNgB5rEfEV4fBkOjj4g4YBEghmDmqQQYsa3EA6uJgHaH8DcdwMH4+0KtGAbZ7I8ufpRCymlkwC3f6OsUmLB9J7HPw
Browser: Chrome 64-bit 74.0.3729.131
Operating System: Windows 10
|
1.0
|
PostProcessStage shader function czm_selected sometimes picks wrong selectedId from czm_selectedIdTexture for certain selection indices - PostProcessStage shader function czm_selected sometimes picks wrong selectedId from czm_selectedIdTexture for certain selection indices.
Expected: With the sandcastle below, every billboard should have a white shadow billboard in front of it. Change the value of 'limit' in the sandcastle to see different sets of selected items being missed.
Seems to be caused by floating-point issues. Changing the for loop in czm_selected to a float index starting from 0.5, or changing 'float(i)' to 'float(i) + 0.5', should resolve the issue.
Sandcastle example:
https://cesiumjs.org/Cesium/Build/Apps/Sandcastle/#c=hVVtb5swEP4rVr4EpsisVfulSatt6V46TWq1VNuHUkUOHMStsZFtkiYV/31nIJBkbEUJsu/uubvnfD5WTJMVhzVockkkrMkUDC8y+quSecOo2k6VtIxL0EN/HMpQrhAVM8tmqtARILJBfQX13Sh53aqoUCz2hpQG7e9jnptgxrJcgLMLDHdLYzf4oimoJ3RQhanTol0cQ1kce91+z2irVHavjnS7TBPN0gyknS1ZDLrNeUjCMJSF5InSGTFVRvr0msSQ2+U9vNhCw/gfNpESSvfaJMjYEgEytctDzQqisxpYy90f09twmTrdKVnNbe1xqpSOuWQWzLi1VDwmGR6C51a+E726F8GnQiMWBNJyG++EvicBibbZ3BUoV9rS7Yj0SNf+eOeFJ8RzOgMCIguxV3t8V3s8H537vk/amO5JxfwL1nbqOGHkmtu+gQakI1tRuZfwGVGat9CG+Om1t1/ZUW9JuoyP4rcOK4Ny2LaqsSwF42pTd4uJQALNlbF3WmFfmVllMG6suViqAqx1TVJDq8bbux13R1DvtePsnsOOu+jtwNEhpOkSc0GOfLmnKsrFLnrFkUZCSfAORL+/3dx/9g/hZbctfXcrOnp0d9LI8+GxrZbgGbcoOjk/Q1kTYL0ESZkQ3kN3xx5HJClkZLmSxPN3eTc11iCRquciOml/1NvFEy7pM2yMd3A2EQ4cPHg6z3n0XFuZhhhNuLDouQuNcH+/anXXkTcdPiDwkeba8eUrIFziacsIVLKr9CcuxEIxHTcsyv/lMEIHMbz0pVIpyKQu7bGvjOVvk3l1ed/E2B19vZBolf1MF6xCl2UboVkgeaOEG8ap13MQtJ5Wzrqs5+ZgNJhUE/mqdvABZzROC1Jo4eEIt4Bz0F3FYFFEz2BpZNytlJNgB5rEfEV4fBkOjj4g4YBEghmDmqQQYsa3EA6uJgHaH8DcdwMH4+0KtGAbZ7I8ufpRCymlkwC3f6OsUmLB9J7HPw
Browser: Chrome 64-bit 74.0.3729.131
Operating System: Windows 10
|
process
|
postprocessstage shader function czm selected sometimes picks wrong selectedid from czm selectedidtexture for certain selection indices postprocessstage shader function czm selected sometimes picks wrong selectedid from czm selectedidtexture for certain selection indices expected with the sandcastle below every billboard should have a white shadow billboard in front of it change value of limit in the sandcastle to see different set of selected items being missed seems to be caused by floating point issues changing the for loop in czm selected to a float index starting from or changing float i to float i should resolve the issue sandcastle example browser chrome bit operating system windows
| 1
|
6,906
| 6,657,869,018
|
IssuesEvent
|
2017-09-30 11:52:35
|
php-coder/mystamps
|
https://api.github.com/repos/php-coder/mystamps
|
opened
|
Deploy script should check that application has been started successfully
|
area/infrastructure
|
During the last deploy, I forgot to update a configuration file and the application failed to start. Unfortunately, I didn't know about that, and the site "was down for 62 hours, 53 minutes and 49 seconds" :-(
The deploy script should check that the application has been started successfully, to prevent cases like that.
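As a sketch of what such a check could look like (the URL and timeout here are assumptions, and this is not the project's actual deploy script), a small Python health check that polls the site after deployment and fails the pipeline if the application never comes up:

```python
import sys
import time
import urllib.request

URL = "https://example.org/"   # assumed health-check endpoint; use the real site URL
TIMEOUT_SECONDS = 120          # assumed budget before declaring the deploy failed

deadline = time.monotonic() + TIMEOUT_SECONDS
while time.monotonic() < deadline:
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            if resp.status == 200:
                print("application is up")
                sys.exit(0)
    except OSError:
        pass  # connection refused / timed out: the app is not up yet
    time.sleep(5)

print(f"application failed to start within {TIMEOUT_SECONDS} seconds")
sys.exit(1)  # non-zero exit makes the deploy script fail loudly
```

Exiting non-zero is the important part: it turns a silent multi-day outage into an immediate deploy failure.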
|
1.0
|
Deploy script should check that application has been started successfully - During the last deploy, I forgot to update a configuration file and the application failed to start. Unfortunately, I didn't know about that, and the site "was down for 62 hours, 53 minutes and 49 seconds" :-(
The deploy script should check that the application has been started successfully, to prevent cases like that.
|
non_process
|
deploy script should check that application has been started successfully during the last deploy i forgot to update a configuration file and application failed to start unfortunately i didn t know about that and the site was down for hours minutes and seconds deploy script should check that application has been started successfully to prevent the cases like that
| 0
|
18,228
| 24,294,263,897
|
IssuesEvent
|
2022-09-29 08:49:08
|
ros-acceleration/community
|
https://api.github.com/repos/ros-acceleration/community
|
closed
|
Robotic Processing Unit (RPU) meta-ticket
|
Robotic Processing Unit
|
*This ticket tracks the progress of the **Robotic Processing Unit (`RPU`)** [subproject](https://github.com/ros-acceleration/community) of the ROS 2 Hardware Acceleration Working Group. <ins>Content will be updated over time</ins>. In time, a repository will branch out of this effort containing additional resources. The expectation, however, is that discussion will remain here for organizational purposes. You can send feedback about this subproject via [this form](https://docs.google.com/forms/d/e/1FAIpQLScHWIibgjdmyMd9ZrWitFVsQA8lKU8FQrih6h4Xa3uS_l523w/viewform).*
#### The Robotic Processing Unit (`RPU`)
<ins>Definition</ins>: A robot-specific processing unit that maps ROS and robot computational graphs efficiently to underlying compute resources including CPUs, FPGAs and GPUs to obtain best performance.
#### Vision
The vision is that `RPU`s will empower robots with the ability to react faster (lower latency, higher throughput), consume less power, and deliver additional real-time capabilities with their custom compute architectures that fit best the usual robotics pipelines. This includes tasks across *sensing, perception, mapping, localization, motion control, low-level control and actuation*.
#### Antigoal
The initial objective of this subproject is **not** to design a new physical device. Instead, existing off-the-shelf hardware acceleration development platforms will be used to prototype a robot-specific processing unit that performs best when it comes to ROS 2 and robot computational graphs.
#### Sponsorship
The project is open to sponsorships and collaborations. For sponsoring the Robotic Processing Unit (`RPU`) contact [here](mailto:victor@accelerationrobotics.com).
#### Milestones
**Milestone 1: first demonstrators** - *raise awareness*
- [x] [Robotic Processing Unit (`RPU`) project announcement](https://news.accelerationrobotics.com/hardware-accelerated-ros2-pipelines/#new-subproject-robotic-processing-unit-rpu)
- [x] RFC to receive feedback and interest https://forms.gle/d4rCCoLpx9ciPiau9
- [x] Use cases driving the architecture and the development
- [x] Perception (`image_pipeline` and friends)
- [x] [`perception_2nodes`](https://github.com/ros-acceleration/acceleration_examples/tree/main/graphs/perception/perception_2nodes)
- [x] [`perception_3nodes`](https://github.com/ros-acceleration/acceleration_examples/tree/main/graphs/perception/perception_3nodes)
- [ ] *Maybe consider a more elaborated graph with multi-processing paths involving more complex CV crunching, e.g. HOG (Histogram of Oriented Gradients)*?
- [ ] ~~Navigation~~
- [ ] ~~Still in discussions, open to feedback.~~
- [x] Partition work into demonstrators, prioritize and execute
- [x] Disclose an initial hardware reference design of the Robotic Processing Unit https://github.com/ros-acceleration/robotic_processing_unit
- [ ] Disclose benchmarking results and discuss (connected to https://github.com/ros-acceleration/community/issues/10)
|
1.0
|
Robotic Processing Unit (RPU) meta-ticket - *This ticket tracks the progress of the **Robotic Processing Unit (`RPU`)** [subproject](https://github.com/ros-acceleration/community) of the ROS 2 Hardware Acceleration Working Group. <ins>Content will be updated over time</ins>. In time, a repository will branch out of this effort containing additional resources. The expectation, however, is that discussion will remain here for organizational purposes. You can send feedback about this subproject via [this form](https://docs.google.com/forms/d/e/1FAIpQLScHWIibgjdmyMd9ZrWitFVsQA8lKU8FQrih6h4Xa3uS_l523w/viewform).*
#### The Robotic Processing Unit (`RPU`)
<ins>Definition</ins>: A robot-specific processing unit that maps ROS and robot computational graphs efficiently to underlying compute resources including CPUs, FPGAs and GPUs to obtain best performance.
#### Vision
The vision is that `RPU`s will empower robots with the ability to react faster (lower latency, higher throughput), consume less power, and deliver additional real-time capabilities with their custom compute architectures that fit best the usual robotics pipelines. This includes tasks across *sensing, perception, mapping, localization, motion control, low-level control and actuation*.
#### Antigoal
The initial objective of this subproject is **not** to design a new physical device. Instead, existing off-the-shelf hardware acceleration development platforms will be used to prototype a robot-specific processing unit that performs best when it comes to ROS 2 and robot computational graphs.
#### Sponsorship
The project is open to sponsorships and collaborations. For sponsoring the Robotic Processing Unit (`RPU`) contact [here](mailto:victor@accelerationrobotics.com).
#### Milestones
**Milestone 1: first demonstrators** - *raise awareness*
- [x] [Robotic Processing Unit (`RPU`) project announcement](https://news.accelerationrobotics.com/hardware-accelerated-ros2-pipelines/#new-subproject-robotic-processing-unit-rpu)
- [x] RFC to receive feedback and interest https://forms.gle/d4rCCoLpx9ciPiau9
- [x] Use cases driving the architecture and the development
- [x] Perception (`image_pipeline` and friends)
- [x] [`perception_2nodes`](https://github.com/ros-acceleration/acceleration_examples/tree/main/graphs/perception/perception_2nodes)
- [x] [`perception_3nodes`](https://github.com/ros-acceleration/acceleration_examples/tree/main/graphs/perception/perception_3nodes)
- [ ] *Maybe consider a more elaborated graph with multi-processing paths involving more complex CV crunching, e.g. HOG (Histogram of Oriented Gradients)*?
- [ ] ~~Navigation~~
- [ ] ~~Still in discussions, open to feedback.~~
- [x] Partition work into demonstrators, prioritize and execute
- [x] Disclose an initial hardware reference design of the Robotic Processing Unit https://github.com/ros-acceleration/robotic_processing_unit
- [ ] Disclose benchmarking results and discuss (connected to https://github.com/ros-acceleration/community/issues/10)
|
process
|
robotic processing unit rpu meta ticket this ticket tracks the progress of the robotic processing unit rpu of the ros hardware acceleration working group content will be updated over time in time a repository will branch out of this effort containing additional resources expectation however should be for the discussion to remain in here for organizational purposes you can send feedback about this subproject via the robotic processing unit rpu definition a robot specific processing unit that maps ros and robot computational graphs efficiently to underlying compute resources including cpus fpgas and gpus to obtain best performance vision the vision is that rpu s will empower robots with the ability to react faster lower latency higher throughput consume less power and deliver additional real time capabilities with their custom compute architectures that fit best the usual robotics pipelines this includes tasks across sensing perception mapping localization motion control low level control and actuation antigoal the initial objective of this subproject is not to design a new physical device instead existing off the shelf hardware acceleration development platforms will be used to prototype a robot specific processing unit that performs best when it comes to ros and robot computational graphs sponsorship the project is open to sponsorships and collaborations for sponsoring the robotic processing unit rpu contact mailto victor accelerationrobotics com milestones milestone first demonstrators raise awareness rfc to receive feedback and interest use cases driving the architecture and the development perception image pipeline and friends maybe consider a more elaborated graph with multi processing paths involving more complex cv crunching e g hog histogram of oriented gradients navigation still in dicussions open to feedback partition work into demonstrators prioritize and execute disclose an initial hardware reference design of the robotic processing unit disclose benchmarking results and discuss connected to
| 1
|
962
| 3,421,175,176
|
IssuesEvent
|
2015-12-08 17:39:01
|
davidguerreiro/evscalculator
|
https://api.github.com/repos/davidguerreiro/evscalculator
|
closed
|
GitHub chat
|
process
|
We should have a chat, Álvaro, you (@davidguerreiro) and me, about GitHub, issues, PRs and labels. We need to start using labels on issues, for example, as they're getting big.
**Markdown**
- Explanation: http://www.remarq.io/articles/five-minutes-to-markdown-mastery/
- Practice: http://markdowntutorial.com/
**GitHub**
- How to use GitHub's website and basic concepts: https://guides.github.com/activities/hello-world/
- Process (flow): https://guides.github.com/introduction/flow/
- Issues: https://guides.github.com/features/issues/
- Where to use Markdown on GitHub: https://guides.github.com/features/mastering-markdown/
- How to use GitHub for Windows app: https://www.youtube.com/watch?v=u12zHGRfiKU
|
1.0
|
GitHub chat - We should have a chat, Álvaro, you (@davidguerreiro) and me, about GitHub, issues, PRs and labels. We need to start using labels on issues, for example, as they're getting big.
**Markdown**
- Explanation: http://www.remarq.io/articles/five-minutes-to-markdown-mastery/
- Practice: http://markdowntutorial.com/
**GitHub**
- How to use GitHub's website and basic concepts: https://guides.github.com/activities/hello-world/
- Process (flow): https://guides.github.com/introduction/flow/
- Issues: https://guides.github.com/features/issues/
- Where to use Markdown on GitHub: https://guides.github.com/features/mastering-markdown/
- How to use GitHub for Windows app: https://www.youtube.com/watch?v=u12zHGRfiKU
|
process
|
github chat we should have a chat álvaro you davidguerreiro and me about github issues prs and labels we need to start using labels on issues for example as they re getting big markdown explanation practice github how to use github s website and basic conceptos process flow issues where to use markdown on github how to use github for windows app
| 1
|
80,639
| 7,752,914,153
|
IssuesEvent
|
2018-05-30 21:57:09
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
opened
|
Failures in ML tests in UpgradeClusterClientYamlTestSuiteIT
|
:ml >test-failure
|
Failure: https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+g1gc/5918
This seems to be caused by an error in the C library, so it is most likely different from #30456
```
2> REPRODUCE WITH: ./gradlew :x-pack:qa:rolling-upgrade:with-system-key:v6.4.0-SNAPSHOT#upgradedClusterTestRunner -Dtests.seed=2400057C67EBE734 -Dtests.class=org.elasticsearch.upgrades.UpgradeClusterClientYamlTestSuiteIT -Dtests.method="test {p0=upgraded_cluster/30_ml_jobs_crud/Test job with no model memory limit has established model memory after reopening}" -Dtests.security.manager=true -Dtests.jvm.argline="-XX:-UseConcMarkSweepGC -XX:+UseG1GC" -Dtests.locale=en-IE -Dtests.timezone=EST5EDT -Dtests.rest.suite=upgraded_cluster
FAILURE 1.08s | UpgradeClusterClientYamlTestSuiteIT.test {p0=upgraded_cluster/30_ml_jobs_crud/Test job with no model memory limit has established model memory after reopening} <<< FAILURES!
> Throwable #1: java.lang.AssertionError: Failure at [upgraded_cluster/30_ml_jobs_crud:88]: expected [2xx] status code but api [xpack.ml.close_job] returned [500 Internal Server Error] [{"error":{"root_cause":[{"type":"exception","reason":"Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n","stack_trace":"ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\n"}],"type":"exception","reason":"Exception closing autodetect process","caused_by":{"type":"exception","reason":"java.util.concurrent.ExecutionException: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]","caused_by":{"type":"execution_exception","reason":"execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 
71dc485b6fd7da)\n]","caused_by":{"type":"exception","reason":"Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n","stack_trace":"ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\n"},"stack_trace":"NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat java.util.concurrent.FutureTask.report(FutureTask.java:122)\n\tat java.util.concurrent.FutureTask.get(FutureTask.java:191)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:170)\n\tat 
org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager.closeJob(AutodetectProcessManager.java:571)\n\tat org.elasticsearch.xpack.ml.action.TransportOpenJobAction$JobTask.closeJob(TransportOpenJobAction.java:731)\n\tat org.elasticsearch.xpack.ml.action.TransportCloseJobAction$1.doRun(TransportCloseJobAction.java:270)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:724)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\nCaused by: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\t... 
3 more\n"},"stack_trace":"ElasticsearchException[java.util.concurrent.ExecutionException: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:179)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager.closeJob(AutodetectProcessManager.java:571)\n\tat org.elasticsearch.xpack.ml.action.TransportOpenJobAction$JobTask.closeJob(TransportOpenJobAction.java:731)\n\tat org.elasticsearch.xpack.ml.action.TransportCloseJobAction$1.doRun(TransportCloseJobAction.java:270)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:724)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\nCaused by: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 
71dc485b6fd7da)\n];\n\tat java.util.concurrent.FutureTask.report(FutureTask.java:122)\n\tat java.util.concurrent.FutureTask.get(FutureTask.java:191)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:170)\n\t... 8 more\nCaused by: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\t... 3 more\n"},"stack_trace":"ElasticsearchException[Exception closing autodetect process]; nested: ElasticsearchException[java.util.concurrent.ExecutionException: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:38)\n\tat 
org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager.closeJob(AutodetectProcessManager.java:580)\n\tat org.elasticsearch.xpack.ml.action.TransportOpenJobAction$JobTask.closeJob(TransportOpenJobAction.java:731)\n\tat org.elasticsearch.xpack.ml.action.TransportCloseJobAction$1.doRun(TransportCloseJobAction.java:270)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:724)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\nCaused by: ElasticsearchException[java.util.concurrent.ExecutionException: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:179)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager.closeJob(AutodetectProcessManager.java:571)\n\t... 
7 more\nCaused by: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat java.util.concurrent.FutureTask.report(FutureTask.java:122)\n\tat java.util.concurrent.FutureTask.get(FutureTask.java:191)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:170)\n\t... 8 more\nCaused by: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\t... 3 more\n"},"status":500}]
> at __randomizedtesting.SeedInfo.seed([2400057C67EBE734:AC543AA6C9178ACC]:0)
> at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:365)
> at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:342)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.AssertionError: expected [2xx] status code but api [xpack.ml.close_job] returned [500 Internal Server Error] [{"error":{"root_cause":[{"type":"exception","reason":"Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n","stack_trace":"ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\n"}],"type":"exception","reason":"Exception closing autodetect process","caused_by":{"type":"exception","reason":"java.util.concurrent.ExecutionException: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]","caused_by":{"type":"execution_exception","reason":"execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]","caused_by":{"type":"exception","reason":"Fatal error: 
'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n","stack_trace":"ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\n"},"stack_trace":"NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat java.util.concurrent.FutureTask.report(FutureTask.java:122)\n\tat java.util.concurrent.FutureTask.get(FutureTask.java:191)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:170)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager.closeJob(AutodetectProcessManager.java:571)\n\tat 
org.elasticsearch.xpack.ml.action.TransportOpenJobAction$JobTask.closeJob(TransportOpenJobAction.java:731)\n\tat org.elasticsearch.xpack.ml.action.TransportCloseJobAction$1.doRun(TransportCloseJobAction.java:270)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:724)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\nCaused by: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\t... 
3 more\n"},"stack_trace":"ElasticsearchException[java.util.concurrent.ExecutionException: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:179)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager.closeJob(AutodetectProcessManager.java:571)\n\tat org.elasticsearch.xpack.ml.action.TransportOpenJobAction$JobTask.closeJob(TransportOpenJobAction.java:731)\n\tat org.elasticsearch.xpack.ml.action.TransportCloseJobAction$1.doRun(TransportCloseJobAction.java:270)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:724)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\nCaused by: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 
71dc485b6fd7da)\n];\n\tat java.util.concurrent.FutureTask.report(FutureTask.java:122)\n\tat java.util.concurrent.FutureTask.get(FutureTask.java:191)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:170)\n\t... 8 more\nCaused by: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\t... 3 more\n"},"stack_trace":"ElasticsearchException[Exception closing autodetect process]; nested: ElasticsearchException[java.util.concurrent.ExecutionException: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:38)\n\tat 
org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager.closeJob(AutodetectProcessManager.java:580)\n\tat org.elasticsearch.xpack.ml.action.TransportOpenJobAction$JobTask.closeJob(TransportOpenJobAction.java:731)\n\tat org.elasticsearch.xpack.ml.action.TransportCloseJobAction$1.doRun(TransportCloseJobAction.java:270)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:724)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\nCaused by: ElasticsearchException[java.util.concurrent.ExecutionException: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:179)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager.closeJob(AutodetectProcessManager.java:571)\n\t... 
7 more\nCaused by: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat java.util.concurrent.FutureTask.report(FutureTask.java:122)\n\tat java.util.concurrent.FutureTask.get(FutureTask.java:191)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:170)\n\t... 8 more\nCaused by: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\t... 3 more\n"},"status":500}]
> at org.elasticsearch.test.rest.yaml.section.DoSection.execute(DoSection.java:241)
> at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:358)
> ... 38 more
1> [2018-05-30T16:47:41,100][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] [test {p0=upgraded_cluster/30_ml_jobs_crud/Test open old jobs}]: before test
1> [2018-05-30T16:47:42,726][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] Stash dump on test failure [{
1> "stash" : {
1> "body" : {
1> "count" : 1,
1> "jobs" : [
1> {
1> "job_id" : "mixed-cluster-job",
1> "data_counts" : {
1> "job_id" : "mixed-cluster-job",
1> "processed_record_count" : 2,
1> "processed_field_count" : 4,
1> "input_bytes" : 178,
1> "input_field_count" : 6,
1> "invalid_date_count" : 0,
1> "missing_field_count" : 0,
1> "out_of_order_timestamp_count" : 0,
1> "empty_bucket_count" : 0,
1> "sparse_bucket_count" : 0,
1> "bucket_count" : 1,
1> "earliest_record_timestamp" : 1403481600000,
1> "latest_record_timestamp" : 1403481700000,
1> "last_data_time" : 1527713202621,
1> "input_record_count" : 2
1> },
1> "model_size_stats" : {
1> "job_id" : "mixed-cluster-job",
1> "result_type" : "model_size_stats",
1> "model_bytes" : 125204,
1> "total_by_field_count" : 4,
1> "total_over_field_count" : 0,
1> "total_partition_field_count" : 2,
1> "bucket_allocation_failures_count" : 0,
1> "memory_status" : "ok",
1> "log_time" : 1527713202000,
1> "timestamp" : 1403481600000
1> },
1> "state" : "opened",
1> "assignment_explanation" : ""
1> }
1> ]
1> }
1> }
1> }]
1> [2018-05-30T16:47:42,766][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] There are still tasks running after this test that might break subsequent tests [xpack/ml/job[c]].
1> [2018-05-30T16:47:42,767][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] [test {p0=upgraded_cluster/30_ml_jobs_crud/Test open old jobs}]: after test
2> REPRODUCE WITH: ./gradlew :x-pack:qa:rolling-upgrade:with-system-key:v6.4.0-SNAPSHOT#upgradedClusterTestRunner -Dtests.seed=2400057C67EBE734 -Dtests.class=org.elasticsearch.upgrades.UpgradeClusterClientYamlTestSuiteIT -Dtests.method="test {p0=upgraded_cluster/30_ml_jobs_crud/Test open old jobs}" -Dtests.security.manager=true -Dtests.jvm.argline="-XX:-UseConcMarkSweepGC -XX:+UseG1GC" -Dtests.locale=en-IE -Dtests.timezone=EST5EDT -Dtests.rest.suite=upgraded_cluster
FAILURE 1.68s | UpgradeClusterClientYamlTestSuiteIT.test {p0=upgraded_cluster/30_ml_jobs_crud/Test open old jobs} <<< FAILURES!
> Throwable #1: java.lang.AssertionError: Failure at [upgraded_cluster/30_ml_jobs_crud:35]: field [jobs.0.node] doesn't have a true value
> Expected: not null
> but: was null
> at __randomizedtesting.SeedInfo.seed([2400057C67EBE734:AC543AA6C9178ACC]:0)
> at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:365)
> at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:342)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.AssertionError: field [jobs.0.node] doesn't have a true value
> Expected: not null
> but: was null
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> at org.elasticsearch.test.rest.yaml.section.IsTrueAssertion.doAssert(IsTrueAssertion.java:55)
> at org.elasticsearch.test.rest.yaml.section.Assertion.execute(Assertion.java:76)
> at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:358)
> ... 38 more
1> [2018-05-30T16:47:42,777][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] [test {p0=upgraded_cluster/30_ml_jobs_crud/Test get job with function shortcut should expand}]: before test
1> [2018-05-30T16:47:43,185][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] There are still tasks running after this test that might break subsequent tests [xpack/ml/job[c]].
1> [2018-05-30T16:47:43,185][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] [test {p0=upgraded_cluster/30_ml_jobs_crud/Test get job with function shortcut should expand}]: after test
1> [2018-05-30T16:47:43,208][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] [test {p0=upgraded_cluster/10_basic/Index data and search on the upgraded cluster}]: before test
1> [2018-05-30T16:47:44,132][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] There are still tasks running after this test that might break subsequent tests [xpack/ml/job[c]].
1> [2018-05-30T16:47:44,133][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] [test {p0=upgraded_cluster/10_basic/Index data and search on the upgraded cluster}]: after test
2> NOTE: leaving temporary files on disk at: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/testrun/v6.4.0-SNAPSHOT#upgradedClusterTestRunner/J0/temp/org.elasticsearch.upgrades.UpgradeClusterClientYamlTestSuiteIT_2400057C67EBE734-001
2> NOTE: test params are: codec=Lucene70, sim=RandomSimilarity(queryNorm=false): {}, locale=en-IE, timezone=EST5EDT
2> NOTE: Linux 3.16.0-4-amd64 amd64/Oracle Corporation 1.8.0_172 (64-bit)/cpus=4,threads=1,free=435702720,total=536870912
2> NOTE: All tests run in this JVM: [UpgradeClusterClientYamlTestSuiteIT]
Completed [1/3] in 22.18s, 9 tests, 2 failures <<< FAILURES!
```
|
1.0
|
Failures in ML tests in UpgradeClusterClientYamlTestSuiteIT - Failure: https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+g1gc/5918
This seems to be caused by an error in the C library, so it is most likely different from #30456
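For context on the fatal error quoted in the traces: on Linux, `si_signo 11` is SIGSEGV and, for SIGSEGV, `si_code 1` is SEGV_MAPERR (address not mapped to object). The following is a minimal sketch to confirm that mapping, assuming a Linux/glibc toolchain; it is illustrative only and not part of the original report:

```
/* decode_siginfo.c - decode the si_signo/si_code pair quoted in the log.
 * Assumes Linux/glibc (_GNU_SOURCE exposes strsignal and SEGV_MAPERR). */
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    int si_signo = 11; /* from the log: 'si_signo 11' */
    int si_code  = 1;  /* from the log: 'si_code: 1'  */

    /* strsignal(11) prints "Segmentation fault" on glibc. */
    printf("si_signo %d -> %s (SIGSEGV == %d)\n",
           si_signo, strsignal(si_signo), SIGSEGV);

    /* For SIGSEGV, si_code 1 is SEGV_MAPERR: address not mapped to object. */
    printf("si_code %d matches SEGV_MAPERR: %s\n",
           si_code, si_code == SEGV_MAPERR ? "yes" : "no");
    return 0;
}
```

Read that way, the native autodetect process appears to have segfaulted on an unmapped address inside libMlMaths.so, which would explain why `xpack.ml.close_job` surfaced a 500 from the Java side rather than a test-level assertion failure.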
```
2> REPRODUCE WITH: ./gradlew :x-pack:qa:rolling-upgrade:with-system-key:v6.4.0-SNAPSHOT#upgradedClusterTestRunner -Dtests.seed=2400057C67EBE734 -Dtests.class=org.elasticsearch.upgrades.UpgradeClusterClientYamlTestSuiteIT -Dtests.method="test {p0=upgraded_cluster/30_ml_jobs_crud/Test job with no model memory limit has established model memory after reopening}" -Dtests.security.manager=true -Dtests.jvm.argline="-XX:-UseConcMarkSweepGC -XX:+UseG1GC" -Dtests.locale=en-IE -Dtests.timezone=EST5EDT -Dtests.rest.suite=upgraded_cluster
FAILURE 1.08s | UpgradeClusterClientYamlTestSuiteIT.test {p0=upgraded_cluster/30_ml_jobs_crud/Test job with no model memory limit has established model memory after reopening} <<< FAILURES!
> Throwable #1: java.lang.AssertionError: Failure at [upgraded_cluster/30_ml_jobs_crud:88]: expected [2xx] status code but api [xpack.ml.close_job] returned [500 Internal Server Error] [{"error":{"root_cause":[{"type":"exception","reason":"Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n","stack_trace":"ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\n"}],"type":"exception","reason":"Exception closing autodetect process","caused_by":{"type":"exception","reason":"java.util.concurrent.ExecutionException: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]","caused_by":{"type":"execution_exception","reason":"execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 
71dc485b6fd7da)\n]","caused_by":{"type":"exception","reason":"Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n","stack_trace":"ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\n"},"stack_trace":"NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat java.util.concurrent.FutureTask.report(FutureTask.java:122)\n\tat java.util.concurrent.FutureTask.get(FutureTask.java:191)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:170)\n\tat 
org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager.closeJob(AutodetectProcessManager.java:571)\n\tat org.elasticsearch.xpack.ml.action.TransportOpenJobAction$JobTask.closeJob(TransportOpenJobAction.java:731)\n\tat org.elasticsearch.xpack.ml.action.TransportCloseJobAction$1.doRun(TransportCloseJobAction.java:270)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:724)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\nCaused by: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\t... 
3 more\n"},"stack_trace":"ElasticsearchException[java.util.concurrent.ExecutionException: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:179)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager.closeJob(AutodetectProcessManager.java:571)\n\tat org.elasticsearch.xpack.ml.action.TransportOpenJobAction$JobTask.closeJob(TransportOpenJobAction.java:731)\n\tat org.elasticsearch.xpack.ml.action.TransportCloseJobAction$1.doRun(TransportCloseJobAction.java:270)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:724)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\nCaused by: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 
71dc485b6fd7da)\n];\n\tat java.util.concurrent.FutureTask.report(FutureTask.java:122)\n\tat java.util.concurrent.FutureTask.get(FutureTask.java:191)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:170)\n\t... 8 more\nCaused by: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\t... 3 more\n"},"stack_trace":"ElasticsearchException[Exception closing autodetect process]; nested: ElasticsearchException[java.util.concurrent.ExecutionException: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:38)\n\tat 
org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager.closeJob(AutodetectProcessManager.java:580)\n\tat org.elasticsearch.xpack.ml.action.TransportOpenJobAction$JobTask.closeJob(TransportOpenJobAction.java:731)\n\tat org.elasticsearch.xpack.ml.action.TransportCloseJobAction$1.doRun(TransportCloseJobAction.java:270)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:724)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\nCaused by: ElasticsearchException[java.util.concurrent.ExecutionException: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:179)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager.closeJob(AutodetectProcessManager.java:571)\n\t... 
7 more\nCaused by: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat java.util.concurrent.FutureTask.report(FutureTask.java:122)\n\tat java.util.concurrent.FutureTask.get(FutureTask.java:191)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:170)\n\t... 8 more\nCaused by: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\t... 3 more\n"},"status":500}]
> at __randomizedtesting.SeedInfo.seed([2400057C67EBE734:AC543AA6C9178ACC]:0)
> at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:365)
> at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:342)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.AssertionError: expected [2xx] status code but api [xpack.ml.close_job] returned [500 Internal Server Error] [{"error":{"root_cause":[{"type":"exception","reason":"Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n","stack_trace":"ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\n"}],"type":"exception","reason":"Exception closing autodetect process","caused_by":{"type":"exception","reason":"java.util.concurrent.ExecutionException: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]","caused_by":{"type":"execution_exception","reason":"execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]","caused_by":{"type":"exception","reason":"Fatal error: 
'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n","stack_trace":"ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\n"},"stack_trace":"NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat java.util.concurrent.FutureTask.report(FutureTask.java:122)\n\tat java.util.concurrent.FutureTask.get(FutureTask.java:191)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:170)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager.closeJob(AutodetectProcessManager.java:571)\n\tat 
org.elasticsearch.xpack.ml.action.TransportOpenJobAction$JobTask.closeJob(TransportOpenJobAction.java:731)\n\tat org.elasticsearch.xpack.ml.action.TransportCloseJobAction$1.doRun(TransportCloseJobAction.java:270)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:724)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\nCaused by: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\t... 
3 more\n"},"stack_trace":"ElasticsearchException[java.util.concurrent.ExecutionException: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:179)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager.closeJob(AutodetectProcessManager.java:571)\n\tat org.elasticsearch.xpack.ml.action.TransportOpenJobAction$JobTask.closeJob(TransportOpenJobAction.java:731)\n\tat org.elasticsearch.xpack.ml.action.TransportCloseJobAction$1.doRun(TransportCloseJobAction.java:270)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:724)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\nCaused by: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 
71dc485b6fd7da)\n];\n\tat java.util.concurrent.FutureTask.report(FutureTask.java:122)\n\tat java.util.concurrent.FutureTask.get(FutureTask.java:191)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:170)\n\t... 8 more\nCaused by: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\t... 3 more\n"},"stack_trace":"ElasticsearchException[Exception closing autodetect process]; nested: ElasticsearchException[java.util.concurrent.ExecutionException: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:38)\n\tat 
org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager.closeJob(AutodetectProcessManager.java:580)\n\tat org.elasticsearch.xpack.ml.action.TransportOpenJobAction$JobTask.closeJob(TransportOpenJobAction.java:731)\n\tat org.elasticsearch.xpack.ml.action.TransportCloseJobAction$1.doRun(TransportCloseJobAction.java:270)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:724)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.lang.Thread.run(Thread.java:844)\nCaused by: ElasticsearchException[java.util.concurrent.ExecutionException: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:179)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager.closeJob(AutodetectProcessManager.java:571)\n\t... 
7 more\nCaused by: NotSerializableExceptionWrapper[execution_exception: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]]; nested: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n];\n\tat java.util.concurrent.FutureTask.report(FutureTask.java:122)\n\tat java.util.concurrent.FutureTask.get(FutureTask.java:191)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.close(AutodetectCommunicator.java:170)\n\t... 8 more\nCaused by: ElasticsearchException[Fatal error: 'si_signo 11, si_code: 1, si_errno: 0, address: 0x7f692bfcd56c, library: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/cluster/v6.4.0-SNAPSHOT#mixedClusterTestCluster node0/elasticsearch-7.0.0-alpha1-SNAPSHOT/modules/x-pack-ml/platform/linux-x86_64/bin/../lib/libMlMaths.so, base: 0x7f692bd6c000, normalized address: 0x26156c', version: 7.0.0-alpha1-SNAPSHOT (build 71dc485b6fd7da)\n]\n\tat org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper.serverError(ExceptionsHelper.java:34)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.NativeAutodetectProcess.close(NativeAutodetectProcess.java:221)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectCommunicator.lambda$close$2(AutodetectCommunicator.java:157)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\tat org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager$AutodetectWorkerExecutorService.start(AutodetectProcessManager.java:740)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625)\n\t... 3 more\n"},"status":500}]
> at org.elasticsearch.test.rest.yaml.section.DoSection.execute(DoSection.java:241)
> at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:358)
> ... 38 more
1> [2018-05-30T16:47:41,100][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] [test {p0=upgraded_cluster/30_ml_jobs_crud/Test open old jobs}]: before test
1> [2018-05-30T16:47:42,726][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] Stash dump on test failure [{
1> "stash" : {
1> "body" : {
1> "count" : 1,
1> "jobs" : [
1> {
1> "job_id" : "mixed-cluster-job",
1> "data_counts" : {
1> "job_id" : "mixed-cluster-job",
1> "processed_record_count" : 2,
1> "processed_field_count" : 4,
1> "input_bytes" : 178,
1> "input_field_count" : 6,
1> "invalid_date_count" : 0,
1> "missing_field_count" : 0,
1> "out_of_order_timestamp_count" : 0,
1> "empty_bucket_count" : 0,
1> "sparse_bucket_count" : 0,
1> "bucket_count" : 1,
1> "earliest_record_timestamp" : 1403481600000,
1> "latest_record_timestamp" : 1403481700000,
1> "last_data_time" : 1527713202621,
1> "input_record_count" : 2
1> },
1> "model_size_stats" : {
1> "job_id" : "mixed-cluster-job",
1> "result_type" : "model_size_stats",
1> "model_bytes" : 125204,
1> "total_by_field_count" : 4,
1> "total_over_field_count" : 0,
1> "total_partition_field_count" : 2,
1> "bucket_allocation_failures_count" : 0,
1> "memory_status" : "ok",
1> "log_time" : 1527713202000,
1> "timestamp" : 1403481600000
1> },
1> "state" : "opened",
1> "assignment_explanation" : ""
1> }
1> ]
1> }
1> }
1> }]
1> [2018-05-30T16:47:42,766][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] There are still tasks running after this test that might break subsequent tests [xpack/ml/job[c]].
1> [2018-05-30T16:47:42,767][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] [test {p0=upgraded_cluster/30_ml_jobs_crud/Test open old jobs}]: after test
2> REPRODUCE WITH: ./gradlew :x-pack:qa:rolling-upgrade:with-system-key:v6.4.0-SNAPSHOT#upgradedClusterTestRunner -Dtests.seed=2400057C67EBE734 -Dtests.class=org.elasticsearch.upgrades.UpgradeClusterClientYamlTestSuiteIT -Dtests.method="test {p0=upgraded_cluster/30_ml_jobs_crud/Test open old jobs}" -Dtests.security.manager=true -Dtests.jvm.argline="-XX:-UseConcMarkSweepGC -XX:+UseG1GC" -Dtests.locale=en-IE -Dtests.timezone=EST5EDT -Dtests.rest.suite=upgraded_cluster
FAILURE 1.68s | UpgradeClusterClientYamlTestSuiteIT.test {p0=upgraded_cluster/30_ml_jobs_crud/Test open old jobs} <<< FAILURES!
> Throwable #1: java.lang.AssertionError: Failure at [upgraded_cluster/30_ml_jobs_crud:35]: field [jobs.0.node] doesn't have a true value
> Expected: not null
> but: was null
> at __randomizedtesting.SeedInfo.seed([2400057C67EBE734:AC543AA6C9178ACC]:0)
> at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:365)
> at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:342)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.AssertionError: field [jobs.0.node] doesn't have a true value
> Expected: not null
> but: was null
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> at org.elasticsearch.test.rest.yaml.section.IsTrueAssertion.doAssert(IsTrueAssertion.java:55)
> at org.elasticsearch.test.rest.yaml.section.Assertion.execute(Assertion.java:76)
> at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:358)
> ... 38 more
1> [2018-05-30T16:47:42,777][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] [test {p0=upgraded_cluster/30_ml_jobs_crud/Test get job with function shortcut should expand}]: before test
1> [2018-05-30T16:47:43,185][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] There are still tasks running after this test that might break subsequent tests [xpack/ml/job[c]].
1> [2018-05-30T16:47:43,185][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] [test {p0=upgraded_cluster/30_ml_jobs_crud/Test get job with function shortcut should expand}]: after test
1> [2018-05-30T16:47:43,208][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] [test {p0=upgraded_cluster/10_basic/Index data and search on the upgraded cluster}]: before test
1> [2018-05-30T16:47:44,132][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] There are still tasks running after this test that might break subsequent tests [xpack/ml/job[c]].
1> [2018-05-30T16:47:44,133][INFO ][o.e.u.UpgradeClusterClientYamlTestSuiteIT] [test {p0=upgraded_cluster/10_basic/Index data and search on the upgraded cluster}]: after test
2> NOTE: leaving temporary files on disk at: /var/lib/jenkins/workspace/elastic+elasticsearch+master+g1gc/x-pack/qa/rolling-upgrade/with-system-key/build/testrun/v6.4.0-SNAPSHOT#upgradedClusterTestRunner/J0/temp/org.elasticsearch.upgrades.UpgradeClusterClientYamlTestSuiteIT_2400057C67EBE734-001
2> NOTE: test params are: codec=Lucene70, sim=RandomSimilarity(queryNorm=false): {}, locale=en-IE, timezone=EST5EDT
2> NOTE: Linux 3.16.0-4-amd64 amd64/Oracle Corporation 1.8.0_172 (64-bit)/cpus=4,threads=1,free=435702720,total=536870912
2> NOTE: All tests run in this JVM: [UpgradeClusterClientYamlTestSuiteIT]
Completed [1/3] in 22.18s, 9 tests, 2 failures <<< FAILURES!
```
|
non_process
|
failures in ml tests in upgradeclusterclientyamltestsuiteit failure this seems to be caused by the error in c library so it is most likely different form reproduce with gradlew x pack qa rolling upgrade with system key snapshot upgradedclustertestrunner dtests seed dtests class org elasticsearch upgrades upgradeclusterclientyamltestsuiteit dtests method test upgraded cluster ml jobs crud test job with no model memory limit has established model memory after reopening dtests security manager true dtests jvm argline xx useconcmarksweepgc xx dtests locale en ie dtests timezone dtests rest suite upgraded cluster failure upgradeclusterclientyamltestsuiteit test upgraded cluster ml jobs crud test job with no model memory limit has established model memory after reopening failures throwable java lang assertionerror failure at expected status code but api returned n tat org elasticsearch xpack core ml utils exceptionshelper servererror exceptionshelper java n tat org elasticsearch xpack ml job process autodetect nativeautodetectprocess close nativeautodetectprocess java n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator lambda close autodetectcommunicator java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n tat org elasticsearch xpack ml job process autodetect autodetectprocessmanager autodetectworkerexecutorservice start autodetectprocessmanager java n tat java util concurrent executors runnableadapter call executors java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n tat java util concurrent threadpoolexecutor runworker threadpoolexecutor java n tat java util concurrent threadpoolexecutor worker run threadpoolexecutor java n tat java lang thread run thread java n type exception reason exception closing autodetect process caused by type exception reason java util concurrent executionexception elasticsearchexception caused by type execution exception reason execution exception elasticsearchexception caused by type exception reason fatal error si signo si code si errno address library var lib jenkins workspace elastic elasticsearch master x pack qa rolling upgrade with system key build cluster snapshot mixedclustertestcluster elasticsearch snapshot modules x pack ml platform linux bin lib libmlmaths so base normalized address version snapshot build n stack trace elasticsearchexception n tat org elasticsearch xpack core ml utils exceptionshelper servererror exceptionshelper java n tat org elasticsearch xpack ml job process autodetect nativeautodetectprocess close nativeautodetectprocess java n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator lambda close autodetectcommunicator java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n tat org elasticsearch xpack ml job process autodetect autodetectprocessmanager autodetectworkerexecutorservice start autodetectprocessmanager java n tat java util concurrent executors runnableadapter call executors java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n tat java util concurrent threadpoolexecutor runworker threadpoolexecutor 
java n tat java util concurrent threadpoolexecutor worker run threadpoolexecutor java n tat java lang thread run thread java n stack trace notserializableexceptionwrapper nested elasticsearchexception n tat java util concurrent futuretask report futuretask java n tat java util concurrent futuretask get futuretask java n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator close autodetectcommunicator java n tat org elasticsearch xpack ml job process autodetect autodetectprocessmanager closejob autodetectprocessmanager java n tat org elasticsearch xpack ml action transportopenjobaction jobtask closejob transportopenjobaction java n tat org elasticsearch xpack ml action transportclosejobaction dorun transportclosejobaction java n tat org elasticsearch common util concurrent threadcontext contextpreservingabstractrunnable dorun threadcontext java n tat org elasticsearch common util concurrent abstractrunnable run abstractrunnable java n tat java util concurrent threadpoolexecutor runworker threadpoolexecutor java n tat java util concurrent threadpoolexecutor worker run threadpoolexecutor java n tat java lang thread run thread java ncaused by elasticsearchexception n tat org elasticsearch xpack core ml utils exceptionshelper servererror exceptionshelper java n tat org elasticsearch xpack ml job process autodetect nativeautodetectprocess close nativeautodetectprocess java n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator lambda close autodetectcommunicator java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n tat org elasticsearch xpack ml job process autodetect autodetectprocessmanager autodetectworkerexecutorservice start autodetectprocessmanager java n tat java util concurrent executors runnableadapter call executors java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n t more n stack trace elasticsearchexception nested notserializableexceptionwrapper nested elasticsearchexception n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator close autodetectcommunicator java n tat org elasticsearch xpack ml job process autodetect autodetectprocessmanager closejob autodetectprocessmanager java n tat org elasticsearch xpack ml action transportopenjobaction jobtask closejob transportopenjobaction java n tat org elasticsearch xpack ml action transportclosejobaction dorun transportclosejobaction java n tat org elasticsearch common util concurrent threadcontext contextpreservingabstractrunnable dorun threadcontext java n tat org elasticsearch common util concurrent abstractrunnable run abstractrunnable java n tat java util concurrent threadpoolexecutor runworker threadpoolexecutor java n tat java util concurrent threadpoolexecutor worker run threadpoolexecutor java n tat java lang thread run thread java ncaused by notserializableexceptionwrapper nested elasticsearchexception n tat java util concurrent futuretask report futuretask java n tat java util concurrent futuretask get futuretask java n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator close autodetectcommunicator java n t more ncaused by elasticsearchexception n tat org elasticsearch xpack core ml utils exceptionshelper servererror exceptionshelper java n tat org elasticsearch xpack ml job process 
autodetect nativeautodetectprocess close nativeautodetectprocess java n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator lambda close autodetectcommunicator java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n tat org elasticsearch xpack ml job process autodetect autodetectprocessmanager autodetectworkerexecutorservice start autodetectprocessmanager java n tat java util concurrent executors runnableadapter call executors java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n t more n stack trace elasticsearchexception nested elasticsearchexception nested notserializableexceptionwrapper nested elasticsearchexception n tat org elasticsearch xpack core ml utils exceptionshelper servererror exceptionshelper java n tat org elasticsearch xpack ml job process autodetect autodetectprocessmanager closejob autodetectprocessmanager java n tat org elasticsearch xpack ml action transportopenjobaction jobtask closejob transportopenjobaction java n tat org elasticsearch xpack ml action transportclosejobaction dorun transportclosejobaction java n tat org elasticsearch common util concurrent threadcontext contextpreservingabstractrunnable dorun threadcontext java n tat org elasticsearch common util concurrent abstractrunnable run abstractrunnable java n tat java util concurrent threadpoolexecutor runworker threadpoolexecutor java n tat java util concurrent threadpoolexecutor worker run threadpoolexecutor java n tat java lang thread run thread java ncaused by elasticsearchexception nested notserializableexceptionwrapper nested elasticsearchexception n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator close autodetectcommunicator java n tat org elasticsearch xpack ml job process autodetect autodetectprocessmanager closejob autodetectprocessmanager java n t more ncaused by notserializableexceptionwrapper nested elasticsearchexception n tat java util concurrent futuretask report futuretask java n tat java util concurrent futuretask get futuretask java n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator close autodetectcommunicator java n t more ncaused by elasticsearchexception n tat org elasticsearch xpack core ml utils exceptionshelper servererror exceptionshelper java n tat org elasticsearch xpack ml job process autodetect nativeautodetectprocess close nativeautodetectprocess java n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator lambda close autodetectcommunicator java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n tat org elasticsearch xpack ml job process autodetect autodetectprocessmanager autodetectworkerexecutorservice start autodetectprocessmanager java n tat java util concurrent executors runnableadapter call executors java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n t more n status at randomizedtesting seedinfo seed at org elasticsearch test rest yaml esclientyamlsuitetestcase executesection esclientyamlsuitetestcase java at org elasticsearch test rest yaml esclientyamlsuitetestcase test 
esclientyamlsuitetestcase java at java lang thread run thread java caused by java lang assertionerror expected status code but api returned n tat org elasticsearch xpack core ml utils exceptionshelper servererror exceptionshelper java n tat org elasticsearch xpack ml job process autodetect nativeautodetectprocess close nativeautodetectprocess java n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator lambda close autodetectcommunicator java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n tat org elasticsearch xpack ml job process autodetect autodetectprocessmanager autodetectworkerexecutorservice start autodetectprocessmanager java n tat java util concurrent executors runnableadapter call executors java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n tat java util concurrent threadpoolexecutor runworker threadpoolexecutor java n tat java util concurrent threadpoolexecutor worker run threadpoolexecutor java n tat java lang thread run thread java n type exception reason exception closing autodetect process caused by type exception reason java util concurrent executionexception elasticsearchexception caused by type execution exception reason execution exception elasticsearchexception caused by type exception reason fatal error si signo si code si errno address library var lib jenkins workspace elastic elasticsearch master x pack qa rolling upgrade with system key build cluster snapshot mixedclustertestcluster elasticsearch snapshot modules x pack ml platform linux bin lib libmlmaths so base normalized address version snapshot build n stack trace elasticsearchexception n tat org elasticsearch xpack core ml utils exceptionshelper servererror exceptionshelper java n tat org elasticsearch xpack ml job process autodetect nativeautodetectprocess close nativeautodetectprocess java n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator lambda close autodetectcommunicator java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n tat org elasticsearch xpack ml job process autodetect autodetectprocessmanager autodetectworkerexecutorservice start autodetectprocessmanager java n tat java util concurrent executors runnableadapter call executors java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n tat java util concurrent threadpoolexecutor runworker threadpoolexecutor java n tat java util concurrent threadpoolexecutor worker run threadpoolexecutor java n tat java lang thread run thread java n stack trace notserializableexceptionwrapper nested elasticsearchexception n tat java util concurrent futuretask report futuretask java n tat java util concurrent futuretask get futuretask java n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator close autodetectcommunicator java n tat org elasticsearch xpack ml job process autodetect autodetectprocessmanager closejob autodetectprocessmanager java n tat org elasticsearch xpack ml action transportopenjobaction jobtask closejob transportopenjobaction java n tat org elasticsearch xpack ml action transportclosejobaction dorun 
transportclosejobaction java n tat org elasticsearch common util concurrent threadcontext contextpreservingabstractrunnable dorun threadcontext java n tat org elasticsearch common util concurrent abstractrunnable run abstractrunnable java n tat java util concurrent threadpoolexecutor runworker threadpoolexecutor java n tat java util concurrent threadpoolexecutor worker run threadpoolexecutor java n tat java lang thread run thread java ncaused by elasticsearchexception n tat org elasticsearch xpack core ml utils exceptionshelper servererror exceptionshelper java n tat org elasticsearch xpack ml job process autodetect nativeautodetectprocess close nativeautodetectprocess java n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator lambda close autodetectcommunicator java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n tat org elasticsearch xpack ml job process autodetect autodetectprocessmanager autodetectworkerexecutorservice start autodetectprocessmanager java n tat java util concurrent executors runnableadapter call executors java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n t more n stack trace elasticsearchexception nested notserializableexceptionwrapper nested elasticsearchexception n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator close autodetectcommunicator java n tat org elasticsearch xpack ml job process autodetect autodetectprocessmanager closejob autodetectprocessmanager java n tat org elasticsearch xpack ml action transportopenjobaction jobtask closejob transportopenjobaction java n tat org elasticsearch xpack ml action transportclosejobaction dorun transportclosejobaction java n tat org elasticsearch common util concurrent threadcontext contextpreservingabstractrunnable dorun threadcontext java n tat org elasticsearch common util concurrent abstractrunnable run abstractrunnable java n tat java util concurrent threadpoolexecutor runworker threadpoolexecutor java n tat java util concurrent threadpoolexecutor worker run threadpoolexecutor java n tat java lang thread run thread java ncaused by notserializableexceptionwrapper nested elasticsearchexception n tat java util concurrent futuretask report futuretask java n tat java util concurrent futuretask get futuretask java n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator close autodetectcommunicator java n t more ncaused by elasticsearchexception n tat org elasticsearch xpack core ml utils exceptionshelper servererror exceptionshelper java n tat org elasticsearch xpack ml job process autodetect nativeautodetectprocess close nativeautodetectprocess java n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator lambda close autodetectcommunicator java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n tat org elasticsearch xpack ml job process autodetect autodetectprocessmanager autodetectworkerexecutorservice start autodetectprocessmanager java n tat java util concurrent executors runnableadapter call executors java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run 
threadcontext java n t more n stack trace elasticsearchexception nested elasticsearchexception nested notserializableexceptionwrapper nested elasticsearchexception n tat org elasticsearch xpack core ml utils exceptionshelper servererror exceptionshelper java n tat org elasticsearch xpack ml job process autodetect autodetectprocessmanager closejob autodetectprocessmanager java n tat org elasticsearch xpack ml action transportopenjobaction jobtask closejob transportopenjobaction java n tat org elasticsearch xpack ml action transportclosejobaction dorun transportclosejobaction java n tat org elasticsearch common util concurrent threadcontext contextpreservingabstractrunnable dorun threadcontext java n tat org elasticsearch common util concurrent abstractrunnable run abstractrunnable java n tat java util concurrent threadpoolexecutor runworker threadpoolexecutor java n tat java util concurrent threadpoolexecutor worker run threadpoolexecutor java n tat java lang thread run thread java ncaused by elasticsearchexception nested notserializableexceptionwrapper nested elasticsearchexception n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator close autodetectcommunicator java n tat org elasticsearch xpack ml job process autodetect autodetectprocessmanager closejob autodetectprocessmanager java n t more ncaused by notserializableexceptionwrapper nested elasticsearchexception n tat java util concurrent futuretask report futuretask java n tat java util concurrent futuretask get futuretask java n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator close autodetectcommunicator java n t more ncaused by elasticsearchexception n tat org elasticsearch xpack core ml utils exceptionshelper servererror exceptionshelper java n tat org elasticsearch xpack ml job process autodetect nativeautodetectprocess close nativeautodetectprocess java n tat org elasticsearch xpack ml job process autodetect autodetectcommunicator lambda close autodetectcommunicator java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n tat org elasticsearch xpack ml job process autodetect autodetectprocessmanager autodetectworkerexecutorservice start autodetectprocessmanager java n tat java util concurrent executors runnableadapter call executors java n tat java util concurrent futuretask run futuretask java n tat org elasticsearch common util concurrent threadcontext contextpreservingrunnable run threadcontext java n t more n status at org elasticsearch test rest yaml section dosection execute dosection java at org elasticsearch test rest yaml esclientyamlsuitetestcase executesection esclientyamlsuitetestcase java more before test stash dump on test failure stash body count jobs job id mixed cluster job data counts job id mixed cluster job processed record count processed field count input bytes input field count invalid date count missing field count out of order timestamp count empty bucket count sparse bucket count bucket count earliest record timestamp latest record timestamp last data time input record count model size stats job id mixed cluster job result type model size stats model bytes total by field count total over field count total partition field count bucket allocation failures count memory status ok log time timestamp state opened assignment explanation there are still tasks running after this test that might break subsequent tests after test reproduce with 
gradlew x pack qa rolling upgrade with system key snapshot upgradedclustertestrunner dtests seed dtests class org elasticsearch upgrades upgradeclusterclientyamltestsuiteit dtests method test upgraded cluster ml jobs crud test open old jobs dtests security manager true dtests jvm argline xx useconcmarksweepgc xx dtests locale en ie dtests timezone dtests rest suite upgraded cluster failure upgradeclusterclientyamltestsuiteit test upgraded cluster ml jobs crud test open old jobs failures throwable java lang assertionerror failure at field doesn t have a true value expected not null but was null at randomizedtesting seedinfo seed at org elasticsearch test rest yaml esclientyamlsuitetestcase executesection esclientyamlsuitetestcase java at org elasticsearch test rest yaml esclientyamlsuitetestcase test esclientyamlsuitetestcase java at java lang thread run thread java caused by java lang assertionerror field doesn t have a true value expected not null but was null at org hamcrest matcherassert assertthat matcherassert java at org elasticsearch test rest yaml section istrueassertion doassert istrueassertion java at org elasticsearch test rest yaml section assertion execute assertion java at org elasticsearch test rest yaml esclientyamlsuitetestcase executesection esclientyamlsuitetestcase java more before test there are still tasks running after this test that might break subsequent tests after test before test there are still tasks running after this test that might break subsequent tests after test note leaving temporary files on disk at var lib jenkins workspace elastic elasticsearch master x pack qa rolling upgrade with system key build testrun snapshot upgradedclustertestrunner temp org elasticsearch upgrades upgradeclusterclientyamltestsuiteit note test params are codec sim randomsimilarity querynorm false locale en ie timezone note linux oracle corporation bit cpus threads free total note all tests run in this jvm completed in tests failures failures
| 0
|
120,935
| 25,896,829,521
|
IssuesEvent
|
2022-12-14 23:32:45
|
apigee/registry
|
https://api.github.com/repos/apigee/registry
|
closed
|
Move v1alpha1 protos to registry-experimental
|
enhancement code quality
|
To avoid confusion and over-indexing on them, I think we should move the application/v1alpha1 protos to the registry-experimental repo. Anything in the registry tool that uses them should go to the registry-experimental tool, but without the protos, `registry get` won't be able to print their contents. Previously that seemed to me to be a blocker, but as we build out our supported application protos, I think these are needed less and less.
|
1.0
|
Move v1alpha1 protos to registry-experimental - To avoid confusion and over-indexing on them, I think we should move the application/v1alpha1 protos to the registry-experimental repo. Anything in the registry tool that uses them should go to the registry-experimental tool, but without the protos, `registry get` won't be able to print their contents. Previously that seemed to me to be a blocker, but as we build out our supported application protos, I think these are needed less and less.
|
non_process
|
move protos to registry experimental to avoid confusion and over indexing on them i think we should move the application protos to the registry experimental repo anything in the registry tool that uses them should go to the registry experimental tool but without the protos registry get won t be able to print their contents previously that seemed to me to be a blocker but as we build out our supported application protos i think these are needed less and less
| 0
|
109,597
| 11,646,204,704
|
IssuesEvent
|
2020-03-01 07:28:43
|
ErnstThalmann/title-comong-soon
|
https://api.github.com/repos/ErnstThalmann/title-comong-soon
|
closed
|
Introduction to the principles of how neural networks work.
|
documentation good first issue
|
You are required to:
- Form an understanding of the principles of how a neural network works.
- Know and understand the terms "activation function", "synapse", "training set", "epoch" - in the context of a neural network.
- Be able to recognize a neural network in code (in the simplest variants).
Here is material you can use to get familiar with the topic:
- https://habr.com/ru/post/312450/
- https://habr.com/ru/post/440162/
- https://www.youtube.com/watch?v=Q_TqexVPNkg - a tutorial from YouTube
|
1.0
|
Introduction to the principles of how neural networks work. - You are required to:
- Form an understanding of the principles of how a neural network works.
- Know and understand the terms "activation function", "synapse", "training set", "epoch" - in the context of a neural network.
- Be able to recognize a neural network in code (in the simplest variants).
Here is material you can use to get familiar with the topic:
- https://habr.com/ru/post/312450/
- https://habr.com/ru/post/440162/
- https://www.youtube.com/watch?v=Q_TqexVPNkg - a tutorial from YouTube
|
non_process
|
introduction to the principles of how neural networks work you are required to form an understanding of the principles of how a neural network works know and understand the terms activation function synapse training set epoch in the context of a neural network be able to recognize a neural network in code in the simplest variants here is material you can use to get familiar with the topic a tutorial from youtube
| 0
|
11,652
| 14,516,079,163
|
IssuesEvent
|
2020-12-13 14:43:42
|
threefoldfoundation/tft-stellar
|
https://api.github.com/repos/threefoldfoundation/tft-stellar
|
closed
|
Remove the activate account method from the conversion service
|
priority_major process_wontfix type_feature
|
Check with Jimber first if this method is called
|
1.0
|
Remove the activate account method from the conversion service - Check with Jimber first if this method is called
|
process
|
remove the activate account method from the conversion service check with jimber first if this method is called
| 1
|
16,050
| 20,192,886,613
|
IssuesEvent
|
2022-02-11 07:51:03
|
soederpop/active-mdx-software-project-test-repo
|
https://api.github.com/repos/soederpop/active-mdx-software-project-test-repo
|
closed
|
A customer should be able to pay with paypal
|
story-created epic-payment-processing
|
# A customer should be able to pay with paypal
As a customer I want to be able to pay with paypal so I can complete my order
|
1.0
|
A customer should be able to pay with paypal - # A customer should be able to pay with paypal
As a customer I want to be able to pay with paypal so I can complete my order
|
process
|
a customer should be able to pay with paypal a customer should be able to pay with paypal as a customer i want to be able to pay with paypal so i can complete my order
| 1
|
18,529
| 24,552,209,616
|
IssuesEvent
|
2022-10-12 13:26:09
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Mobile app] Not able to enroll in the study > getting 500 error
|
Bug P0 iOS Android Process: Fixed Process: Tested dev
|
Not able to enroll in the study > getting 500 error
Note:
1. Issue observed in the My account screen
2. Enrollment flow
3. Previously enrolled studies are displayed in the "Yet to enroll" status

|
2.0
|
[Mobile app] Not able to enroll in the study > getting 500 error - Not able to enroll in the study > getting 500 error
Note:
1. Issue observed in the My account screen
2. Enrollment flow
3. Previously enrolled studies are displayed in the "Yet to enroll" status

|
process
|
not able to enroll in the study getting error not able to enroll in the study getting error note issue observed in the my account screen enrollment flow previously enrolled studies are displayed in the yet to enroll status
| 1
|
10,384
| 13,195,316,917
|
IssuesEvent
|
2020-08-13 18:26:23
|
googleapis/python-storage
|
https://api.github.com/repos/googleapis/python-storage
|
closed
|
Update the 'google-cloud-core' dependency version
|
api: storage type: process
|
Found from another library's PR comment https://github.com/googleapis/python-spanner/pull/132#discussion_r469011820 that `google-cloud-core==1.4.0` breaks the dependency.
|
1.0
|
Update the 'google-cloud-core' dependency version - Found from another library's PR comment https://github.com/googleapis/python-spanner/pull/132#discussion_r469011820 that `google-cloud-core==1.4.0` breaks the dependency.
|
process
|
update the google cloud core dependency version found from another library s pr comment that google cloud core breaks the dependency
| 1
|
21,477
| 29,511,568,908
|
IssuesEvent
|
2023-06-04 01:24:50
|
goravel/goravel
|
https://api.github.com/repos/goravel/goravel
|
closed
|
Support Package Development
|
enhancement processing
|
Just like: https://laravel.com/docs/10.x/packages
```
go get github.com/goravel/sms
go run . artisan vendor:publish --provider="sms"
```
|
1.0
|
Support Package Development - Just like: https://laravel.com/docs/10.x/packages
```
go get github.com/goravel/sms
go run . artisan vendor:publish --provider="sms"
```
|
process
|
support package development just like go get github com goravel sms go run artisan vendor publish provider sms
| 1
|
428,629
| 30,003,565,175
|
IssuesEvent
|
2023-06-26 10:55:44
|
apecloud/kubeblocks
|
https://api.github.com/repos/apecloud/kubeblocks
|
closed
|
[Features] add prometheus cluster into addons
|
kind/feature area/user-interaction feature documentation
|
Motivations:
Add the prometheus cluster to the addon list, so that we can deploy a prometheus cluster with `kbcli addon enable`
|
1.0
|
[Features] add prometheus cluster into addons - Motivations:
Add the prometheus cluster to the addon list, so that we can deploy a prometheus cluster with `kbcli addon enable`
|
non_process
|
add prometheus cluster into addons motivations add the prometheus cluster to the addon list so that we can deploy a prometheus cluster with kbcli addon enable
| 0
|
2,238
| 5,088,623,022
|
IssuesEvent
|
2016-12-31 23:27:27
|
sw4j-org/tool-jpa-processor
|
https://api.github.com/repos/sw4j-org/tool-jpa-processor
|
opened
|
Handle @JoinColumns Annotation
|
annotation processor task
|
Handle the `@JoinColumns` annotation for a property or field.
See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf)
- 11.1.26 JoinColumns Annotation
|
1.0
|
Handle @JoinColumns Annotation - Handle the `@JoinColumns` annotation for a property or field.
See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf)
- 11.1.26 JoinColumns Annotation
|
process
|
handle joincolumns annotation handle the joincolumns annotation for a property or field see joincolumns annotation
| 1
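As context for the record above: a minimal, hypothetical JPA sketch of the composite-foreign-key case that `@JoinColumns` exists for and that the annotation processor has to handle. The entity and column names are invented for illustration and are not taken from the sw4j-org codebase.
```java
import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.IdClass;
import javax.persistence.JoinColumn;
import javax.persistence.JoinColumns;
import javax.persistence.ManyToOne;

// Target entity with a composite primary key (id, region).
@Entity
@IdClass(CustomerOrder.Pk.class)
class CustomerOrder {
    @Id long id;
    @Id String region;

    // Id class; equals/hashCode omitted in this sketch.
    static class Pk implements Serializable {
        long id;
        String region;
    }
}

// Referencing entity: both join columns together form the foreign key
// to CustomerOrder's composite primary key, which is exactly the case
// @JoinColumns (JSR 338, section 11.1.26) covers.
@Entity
class OrderLine {
    @Id long id;

    @ManyToOne
    @JoinColumns({
        @JoinColumn(name = "ORDER_ID", referencedColumnName = "ID"),
        @JoinColumn(name = "ORDER_REGION", referencedColumnName = "REGION")
    })
    CustomerOrder order;
}
```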
|
139,346
| 18,850,341,129
|
IssuesEvent
|
2021-11-11 19:58:35
|
snowdensb/sonar-xanitizer
|
https://api.github.com/repos/snowdensb/sonar-xanitizer
|
opened
|
CVE-2020-10968 (High) detected in jackson-databind-2.6.3.jar
|
security vulnerability
|
## CVE-2020-10968 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.6.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /src/test/resources/webgoat/WEB-INF/lib/jackson-databind-2.6.3.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.6.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowdensb/sonar-xanitizer/commit/e2144e84b1fdbf18c01c24e6ab9ade7b45b25283">e2144e84b1fdbf18c01c24e6ab9ade7b45b25283</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.aoju.bus.proxy.provider.remoting.RmiProvider (aka bus-proxy).
<p>Publish Date: 2020-03-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10968>CVE-2020-10968</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-10968">https://nvd.nist.gov/vuln/detail/CVE-2020-10968</a></p>
<p>Release Date: 2020-03-26</p>
<p>Fix Resolution: jackson-databind-2.9.10.4</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.6.3","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.6.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jackson-databind-2.9.10.4"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-10968","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.aoju.bus.proxy.provider.remoting.RmiProvider (aka bus-proxy).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10968","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-10968 (High) detected in jackson-databind-2.6.3.jar - ## CVE-2020-10968 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.6.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /src/test/resources/webgoat/WEB-INF/lib/jackson-databind-2.6.3.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.6.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowdensb/sonar-xanitizer/commit/e2144e84b1fdbf18c01c24e6ab9ade7b45b25283">e2144e84b1fdbf18c01c24e6ab9ade7b45b25283</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.aoju.bus.proxy.provider.remoting.RmiProvider (aka bus-proxy).
<p>Publish Date: 2020-03-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10968>CVE-2020-10968</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-10968">https://nvd.nist.gov/vuln/detail/CVE-2020-10968</a></p>
<p>Release Date: 2020-03-26</p>
<p>Fix Resolution: jackson-databind-2.9.10.4</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.6.3","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.6.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jackson-databind-2.9.10.4"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-10968","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.aoju.bus.proxy.provider.remoting.RmiProvider (aka bus-proxy).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10968","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library src test resources webgoat web inf lib jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org aoju bus proxy provider remoting rmiprovider aka bus proxy publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jackson databind rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion jackson databind basebranches vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org aoju bus proxy provider remoting rmiprovider aka bus proxy vulnerabilityurl
| 0
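The suggested fix in the record above is the dependency upgrade to jackson-databind 2.9.10.4. As a hedged aside on why the gadget matters at all: this family of serialization-gadget CVEs only fires when polymorphic default typing is active, so a mapper that never enables it does not instantiate attacker-chosen classes from JSON. The `Payload` class below is hypothetical.
```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonTypingExample {

    // Plain DTO; Jackson binds fields by name, no type metadata involved.
    static class Payload {
        public String name;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // Defensive posture: do not call enableDefaultTyping() /
        // activateDefaultTyping() for untrusted input, and upgrade to
        // >= 2.9.10.4, where the bus-proxy gadget class is blocked.
        Payload p = mapper.readValue("{\"name\":\"ok\"}", Payload.class);
        System.out.println(p.name);
    }
}
```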
|
52,737
| 13,042,741,533
|
IssuesEvent
|
2020-07-28 23:21:29
|
MagmaDNN/magmadnn
|
https://api.github.com/repos/MagmaDNN/magmadnn
|
closed
|
[BUILD/INSTALL]
|
build/install
|
**Explain your build/install issue or feature request:**
Dev branch build error: error: ‘memset’ is not a member of ‘std’
**Environment:**
- OS: [Ubuntu + CentOS + Manjaro]
- CUDA Version: [10.2]
- CBLAS: [OpenBLAS]
- Magma Version: [2.5.3]
- CuDNN Version: [7.6]
- MagmaDNN Version [dev branch]
Fix: add the `<cstring>` include to memorymanager.cpp
|
1.0
|
[BUILD/INSTALL] - **Explain your build/install issue or feature request:**
Dev branch build error: error: ‘memset’ is not a member of ‘std’
**Environment:**
- OS: [Ubuntu + CentOS + Manjaro]
- CUDA Version: [10.2]
- CBLAS: [OpenBLAS]
- Magma Version: [2.5.3]
- CuDNN Version: [7.6]
- MagmaDNN Version [dev branch]
Fix: add the `<cstring>` include to memorymanager.cpp
|
non_process
|
explain your build install issue or feature request dev branch build error error ‘memset’ is not a member of ‘std’ environment os cuda version cblas magma version cudnn version magmadnn version fix add the cstring include to memorymanager cpp
| 0
|
69
| 2,523,501,580
|
IssuesEvent
|
2015-01-20 11:03:53
|
Graylog2/graylog2-server
|
https://api.github.com/repos/Graylog2/graylog2-server
|
opened
|
Return proper errors for invalid grok patterns
|
bug processing
|
If the user enters an invalid grok pattern or an invalid name for a pattern, the server responds with a 500 instead of a 400 with a proper JSON error response.
Also there is no error indicator in the web interface. It just silently fails.
```
2015-01-20 11:41:37,950 WARN : org.graylog2.grok.GrokPatternServiceImpl - Invalid regular expression syntax for '%{RANDOMHTTP}' with pattern %{NOTSPACE:mymethod}
java.util.regex.PatternSyntaxException: Illegal repetition near index 0
%{(null)}
^
at java.util.regex.Pattern.error(Pattern.java:1924)
at java.util.regex.Pattern.closure(Pattern.java:3104)
at java.util.regex.Pattern.sequence(Pattern.java:2101)
at java.util.regex.Pattern.expr(Pattern.java:1964)
at java.util.regex.Pattern.compile(Pattern.java:1665)
at java.util.regex.Pattern.<init>(Pattern.java:1337)
at java.util.regex.Pattern.compile(Pattern.java:1047)
at com.google.code.regexp.Pattern.buildStandardPattern(Unknown Source)
at com.google.code.regexp.Pattern.<init>(Unknown Source)
at com.google.code.regexp.Pattern.compile(Unknown Source)
at oi.thekraken.grok.api.Grok.compile(Grok.java:376)
at org.graylog2.grok.GrokPatternServiceImpl.validate(GrokPatternServiceImpl.java:87)
at org.graylog2.grok.GrokPatternServiceImpl.save(GrokPatternServiceImpl.java:74)
at org.graylog2.rest.resources.system.GrokResource.createPattern(GrokResource.java:89)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:151)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:172)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:152)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:104)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:384)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:342)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:101)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:271)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:297)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:254)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1030)
at org.graylog2.jersey.container.netty.NettyContainer.messageReceived(NettyContainer.java:356)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
at org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at org.jboss.netty.handler.execution.MemoryAwareThreadPoolExecutor$MemoryAwareRunnable.run(MemoryAwareThreadPoolExecutor.java:622)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
```
|
1.0
|
Return proper errors for invalid grok patterns - If the user enters an invalid grok pattern or an invalid name for a pattern, the server responds with a 500 instead of a 400 with a proper JSON error response.
Also there is no error indicator in the web interface. It just silently fails.
```
2015-01-20 11:41:37,950 WARN : org.graylog2.grok.GrokPatternServiceImpl - Invalid regular expression syntax for '%{RANDOMHTTP}' with pattern %{NOTSPACE:mymethod}
java.util.regex.PatternSyntaxException: Illegal repetition near index 0
%{(null)}
^
at java.util.regex.Pattern.error(Pattern.java:1924)
at java.util.regex.Pattern.closure(Pattern.java:3104)
at java.util.regex.Pattern.sequence(Pattern.java:2101)
at java.util.regex.Pattern.expr(Pattern.java:1964)
at java.util.regex.Pattern.compile(Pattern.java:1665)
at java.util.regex.Pattern.<init>(Pattern.java:1337)
at java.util.regex.Pattern.compile(Pattern.java:1047)
at com.google.code.regexp.Pattern.buildStandardPattern(Unknown Source)
at com.google.code.regexp.Pattern.<init>(Unknown Source)
at com.google.code.regexp.Pattern.compile(Unknown Source)
at oi.thekraken.grok.api.Grok.compile(Grok.java:376)
at org.graylog2.grok.GrokPatternServiceImpl.validate(GrokPatternServiceImpl.java:87)
at org.graylog2.grok.GrokPatternServiceImpl.save(GrokPatternServiceImpl.java:74)
at org.graylog2.rest.resources.system.GrokResource.createPattern(GrokResource.java:89)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:151)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:172)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:152)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:104)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:384)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:342)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:101)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:271)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:297)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:254)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1030)
at org.graylog2.jersey.container.netty.NettyContainer.messageReceived(NettyContainer.java:356)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
at org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at org.jboss.netty.handler.execution.MemoryAwareThreadPoolExecutor$MemoryAwareRunnable.run(MemoryAwareThreadPoolExecutor.java:622)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
```
|
process
|
return proper errors for invalid grok patterns if the users enters an invalid grok pattern or an invalid name for a pattern the server responds with a instead of a with a proper json error response also there is no error indicator in the web interface it just silently fails warn org grok grokpatternserviceimpl invalid regular expression syntax for randomhttp with pattern notspace mymethod java util regex patternsyntaxexception illegal repetition near index null at java util regex pattern error pattern java at java util regex pattern closure pattern java at java util regex pattern sequence pattern java at java util regex pattern expr pattern java at java util regex pattern compile pattern java at java util regex pattern pattern java at java util regex pattern compile pattern java at com google code regexp pattern buildstandardpattern unknown source at com google code regexp pattern unknown source at com google code regexp pattern compile unknown source at oi thekraken grok api grok compile grok java at org grok grokpatternserviceimpl validate grokpatternserviceimpl java at org grok grokpatternserviceimpl save grokpatternserviceimpl java at org rest resources system grokresource createpattern grokresource java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org glassfish jersey server model internal resourcemethodinvocationhandlerfactory invoke resourcemethodinvocationhandlerfactory java at org glassfish jersey server model internal abstractjavaresourcemethoddispatcher run abstractjavaresourcemethoddispatcher java at org glassfish jersey server model internal abstractjavaresourcemethoddispatcher invoke abstractjavaresourcemethoddispatcher java at org glassfish jersey server model internal javaresourcemethoddispatcherprovider responseoutinvoker dodispatch javaresourcemethoddispatcherprovider java at org glassfish jersey server model internal abstractjavaresourcemethoddispatcher dispatch abstractjavaresourcemethoddispatcher java at org glassfish jersey server model resourcemethodinvoker invoke resourcemethodinvoker java at org glassfish jersey server model resourcemethodinvoker apply resourcemethodinvoker java at org glassfish jersey server model resourcemethodinvoker apply resourcemethodinvoker java at org glassfish jersey server serverruntime run serverruntime java at org glassfish jersey internal errors call errors java at org glassfish jersey internal errors call errors java at org glassfish jersey internal errors process errors java at org glassfish jersey internal errors process errors java at org glassfish jersey internal errors process errors java at org glassfish jersey process internal requestscope runinscope requestscope java at org glassfish jersey server serverruntime process serverruntime java at org glassfish jersey server applicationhandler handle applicationhandler java at org jersey container netty nettycontainer messagereceived nettycontainer java at org jboss netty channel simplechannelupstreamhandler handleupstream simplechannelupstreamhandler java at org jboss netty channel defaultchannelpipeline sendupstream defaultchannelpipeline java at org jboss netty channel defaultchannelpipeline defaultchannelhandlercontext sendupstream defaultchannelpipeline java at org jboss netty handler execution channelupstreameventrunnable dorun channelupstreameventrunnable java 
at org jboss netty handler execution channeleventrunnable run channeleventrunnable java at com codahale metrics instrumentedexecutorservice instrumentedrunnable run instrumentedexecutorservice java at org jboss netty handler execution memoryawarethreadpoolexecutor memoryawarerunnable run memoryawarethreadpoolexecutor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java
| 1
|
916
| 3,374,199,123
|
IssuesEvent
|
2015-11-24 11:47:35
|
kerubistan/kerub
|
https://api.github.com/repos/kerubistan/kerub
|
opened
|
printable virtual machine details
|
component:data processing component:ui enhancement priority: low
|
Let the user have a feature that downloads a PDF with nicely rendered details of the VM, attached storage, expectations, and a summary of history
|
1.0
|
printable virtual machine details - Let the user have a feature that downloads a PDF with nicely rendered details of the VM, attached storage, expectations, and a summary of history
|
process
|
printable virtual machine details let the user have a feature that downloads a pdf with nicely rendered details of the vm attached storage expectations and a summary of history
| 1
|
1,289
| 3,828,384,146
|
IssuesEvent
|
2016-03-31 05:18:54
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Use title as link text when target is title element
|
enhancement P3 preprocess
|
When a link targets a title element, use the title contents as the link text.
|
1.0
|
Use title as link text when target is title element - When a link targets a title element, use the title contents as the link text.
|
process
|
use title as link text when target is title element when a link targets a title element use the title contents as the link text
| 1
|
20,703
| 27,390,914,813
|
IssuesEvent
|
2023-02-28 16:11:59
|
google/ground-android
|
https://api.github.com/repos/google/ground-android
|
closed
|
[Process] Robust automated testing in place
|
type: process priority: p1
|
- [x] Automated testing run via CI
- [x] Automated UI testing run via CI
- [x] Acceptable unit test coverage
- [x] Acceptable UI test coverage
|
1.0
|
[Process] Robust automated testing in place - - [x] Automated testing run via CI
- [x] Automated UI testing run via CI
- [x] Acceptable unit test coverage
- [x] Acceptable UI test coverage
|
process
|
robust automated testing in place automated testing run via ci automated ui testing run via ci acceptable unit test coverage acceptable ui test coverage
| 1
|
24,269
| 4,074,401,365
|
IssuesEvent
|
2016-05-28 12:20:49
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
stress: failed test in cockroach/gossip/gossip.test: TestGossipNoForwardSelf
|
Robot test-failure
|
Binary: cockroach/static-tests.tar.gz sha: https://github.com/cockroachdb/cockroach/commits/2bb99ef73064de323277b965dba0c74e4c834413
Stress build found a failed test:
```
=== RUN TestGossipNoForwardSelf
W160528 03:49:32.816174 gossip/gossip.go:897 not connected to cluster; use --join to specify a connected node
W160528 03:49:32.818144 gossip/gossip.go:897 not connected to cluster; use --join to specify a connected node
W160528 03:49:32.819333 gossip/gossip.go:897 not connected to cluster; use --join to specify a connected node
W160528 03:49:32.820711 gossip/gossip.go:897 not connected to cluster; use --join to specify a connected node
W160528 03:49:32.824784 gossip/gossip.go:897 not connected to cluster; use --join to specify a connected node
I160528 03:49:32.828531 gossip/server.go:185 refusing gossip from node 4 (max 3 conns); forwarding to 5 ({tcp 127.0.0.1:43757})
SIGABRT: abort
PC=0x4612a1 m=0
goroutine 0 [idle]:
runtime.futex(0x1248c08, 0x0, 0x0, 0x0, 0x0, 0x12481b0, 0x0, 0x0, 0x40fc14, 0x1248c08, ...)
/usr/local/go/src/runtime/sys_linux_amd64.s:306 +0x21
runtime.futexsleep(0x1248c08, 0x0, 0xffffffffffffffff)
/usr/local/go/src/runtime/os1_linux.go:40 +0x53
runtime.notesleep(0x1248c08)
/usr/local/go/src/runtime/lock_futex.go:145 +0xa4
runtime.stopm()
/usr/local/go/src/runtime/proc.go:1538 +0x10b
runtime.findrunnable(0xc820015500, 0x0)
/usr/local/go/src/runtime/proc.go:1976 +0x739
runtime.schedule()
/usr/local/go/src/runtime/proc.go:2075 +0x24f
runtime.park_m(0xc8202d4a80)
/usr/local/go/src/runtime/proc.go:2140 +0x18b
runtime.mcall(0x7ffecfcf1940)
/usr/local/go/src/runtime/asm_amd64.s:233 +0x5b
goroutine 1 [chan receive, 9 minutes]:
testing.RunTests(0xdafea8, 0x1227d60, 0x21, 0x21, 0xc820186b01)
/usr/local/go/src/testing/testing.go:583 +0x8d2
testing.(*M).Run(0xc820042f08, 0x4098a3)
/usr/local/go/src/testing/testing.go:515 +0x81
main.main()
github.com/cockroachdb/cockroach/gossip/_test/_testmain.go:120 +0x117
goroutine 17 [syscall, 9 minutes, locked to thread]:
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1998 +0x1
goroutine 5 [chan receive]:
github.com/cockroachdb/cockroach/util/log.(*loggingT).flushDaemon(0x12480c0)
/go/src/github.com/cockroachdb/cockroach/util/log/clog.go:1011 +0x64
created by github.com/cockroachdb/cockroach/util/log.init.1
/go/src/github.com/cockroachdb/cockroach/util/log/clog.go:598 +0x8a
goroutine 308 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.TestGossipNoForwardSelf(0xc8202222d0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip_test.go:148 +0x809
testing.tRunner(0xc8202222d0, 0x1227e50)
/usr/local/go/src/testing/testing.go:473 +0x98
created by testing.RunTests
/usr/local/go/src/testing/testing.go:582 +0x892
goroutine 382 [select]:
github.com/cockroachdb/cockroach/rpc.(*Context).runHeartbeat(0xc8200102a0, 0xc8201b0fd0, 0xc8201f61b0, 0xf, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:231 +0x649
github.com/cockroachdb/cockroach/rpc.(*Context).GRPCDial.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:171 +0x66
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc82048f020)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 317 [IO wait, 9 minutes]:
net.runtime_pollWait(0x7f45289b1420, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc8200e2a00, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8200e2a00, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).accept(0xc8200e29a0, 0x0, 0x7f45289b18b8, 0xc8201727a0)
/usr/local/go/src/net/fd_unix.go:426 +0x27c
net.(*TCPListener).AcceptTCP(0xc8200f0230, 0x454730, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:254 +0x4d
net.(*TCPListener).Accept(0xc8200f0230, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:264 +0x3d
google.golang.org/grpc.(*Server).Serve(0xc8202d01b0, 0x7f45289b0538, 0xc8200f0230, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:279 +0x1cf
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func2()
/go/src/github.com/cockroachdb/cockroach/util/net.go:52 +0x3f
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc820172ee0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 380 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Client).controller(0xc82012e000)
/go/src/google.golang.org/grpc/transport/http2_client.go:869 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:201 +0x15c2
goroutine 379 [IO wait]:
net.runtime_pollWait(0x7f45289b1360, 0x72, 0xc820456000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc8200e3410, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8200e3410, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc8200e33b0, 0xc820456000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc8201580f0, 0xc820456000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc8202d2ba0)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc8202d2ba0, 0xc820afc0f8, 0x9, 0x9, 0xc81ffd6f48, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc8202d2ba0, 0xc820afc0f8, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc8202d2ba0, 0xc820afc0f8, 0x9, 0x9, 0xc820396e18, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc820afc0f8, 0x9, 0x9, 0x7f4528966238, 0xc8202d2ba0, 0x0, 0xc800000000, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc820afc0c0, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc82048ef60, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Client).reader(0xc82012e000)
/go/src/google.golang.org/grpc/transport/http2_client.go:791 +0x109
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:174 +0xd21
goroutine 314 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*server).start.func3()
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:298 +0x5c
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201724a0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 315 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/rpc.NewContext.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:104 +0x57
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc820172560)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 378 [select, 9 minutes]:
google.golang.org/grpc/transport.(*recvBufferReader).Read(0xc82048ec60, 0xc8201ae790, 0x5, 0x5, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:141 +0x7e6
google.golang.org/grpc/transport.(*Stream).Read(0xc8202a80e0, 0xc8201ae790, 0x5, 0x5, 0x1b, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:294 +0x71
io.ReadAtLeast(0x7f4528966420, 0xc8202a80e0, 0xc8201ae790, 0x5, 0x5, 0x5, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966420, 0xc8202a80e0, 0xc8201ae790, 0x5, 0x5, 0xc820299a18, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
google.golang.org/grpc.(*parser).recvMsg(0xc8201ae780, 0x7, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:216 +0xb9
google.golang.org/grpc.recv(0xc8201ae780, 0x7f45289b0478, 0x1268c48, 0xc8202a80e0, 0x0, 0x0, 0xbf67c0, 0xc8202aadc0, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:297 +0x45
google.golang.org/grpc.(*serverStream).RecvMsg(0xc820164000, 0xbf67c0, 0xc8202aadc0, 0x0, 0x0)
/go/src/google.golang.org/grpc/stream.go:413 +0xe4
github.com/cockroachdb/cockroach/gossip.(*gossipGossipServer).Recv(0xc8201f6460, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip.pb.go:228 +0x7e
github.com/cockroachdb/cockroach/gossip.(Gossip_GossipServer).Recv-fm(0xc8202d23c8, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:101 +0x40
github.com/cockroachdb/cockroach/gossip.(*server).gossipReceiver(0xc8202d23c0, 0xc8201580e8, 0xc82048ede0, 0xc820299f50, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:227 +0x747
github.com/cockroachdb/cockroach/gossip.(*server).Gossip.func3.1()
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:101 +0x8b
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8202aad80)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 390 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*server).start.func3()
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:298 +0x5c
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8200cef40)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 377 [select, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*server).Gossip(0xc8202d23c0, 0x7f45289b1ce0, 0xc8201f6460, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:134 +0xa0f
github.com/cockroachdb/cockroach/gossip._Gossip_Gossip_Handler(0xbc9f60, 0xc8202d23c0, 0x7f45289b1c98, 0xc820164000, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip.pb.go:209 +0xd8
google.golang.org/grpc.(*Server).processStreamingRPC(0xc820222000, 0x7f45289b1b38, 0xc820222120, 0xc8202a80e0, 0xc82000a220, 0x1221740, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:604 +0x47a
google.golang.org/grpc.(*Server).handleStream(0xc820222000, 0x7f45289b1b38, 0xc820222120, 0xc8202a80e0, 0x0)
/go/src/google.golang.org/grpc/server.go:688 +0x114e
google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc8201f6390, 0xc820222000, 0x7f45289b1b38, 0xc820222120, 0xc8202a80e0)
/go/src/google.golang.org/grpc/server.go:350 +0xa0
created by google.golang.org/grpc.(*Server).serveStreams.func1
/go/src/google.golang.org/grpc/server.go:351 +0x9a
goroutine 353 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/rpc.NewContext.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:104 +0x57
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8200ce7a0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 310 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func1()
/go/src/github.com/cockroachdb/cockroach/util/net.go:47 +0x47
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201720e0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 311 [IO wait, 9 minutes]:
net.runtime_pollWait(0x7f45289b1060, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc8200e2060, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8200e2060, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).accept(0xc8200e2000, 0x0, 0x7f45289b18b8, 0xc820172520)
/usr/local/go/src/net/fd_unix.go:426 +0x27c
net.(*TCPListener).AcceptTCP(0xc8200f0000, 0x454730, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:254 +0x4d
net.(*TCPListener).Accept(0xc8200f0000, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:264 +0x3d
google.golang.org/grpc.(*Server).Serve(0xc8202d0000, 0x7f45289b0538, 0xc8200f0000, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:279 +0x1cf
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func2()
/go/src/github.com/cockroachdb/cockroach/util/net.go:52 +0x3f
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201721e0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 391 [IO wait]:
net.runtime_pollWait(0x7f45289b11e0, 0x72, 0xc820108000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc82020a840, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc82020a840, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc82020a7e0, 0xc820108000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc820034050, 0xc820108000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc820010840)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc820010840, 0xc82043a278, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc820010840, 0xc82043a278, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc820010840, 0xc82043a278, 0x9, 0x9, 0xc82043de00, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc82043a278, 0x9, 0x9, 0x7f4528966238, 0xc820010840, 0x20000000, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc82043a240, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc8201d16e0, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Server).HandleStreams(0xc82048a240, 0xc8201d1800)
/go/src/google.golang.org/grpc/transport/http2_server.go:243 +0x646
google.golang.org/grpc.(*Server).serveStreams(0xc820222000, 0x7f45289b1b38, 0xc82048a240)
/go/src/google.golang.org/grpc/server.go:352 +0x159
google.golang.org/grpc.(*Server).serveNewHTTP2Transport(0xc820222000, 0x7f4528928800, 0xc820034050, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:339 +0x49d
google.golang.org/grpc.(*Server).handleRawConn(0xc820222000, 0x7f4528928800, 0xc820034050)
/go/src/google.golang.org/grpc/server.go:316 +0x4ee
created by google.golang.org/grpc.(*Server).Serve
/go/src/google.golang.org/grpc/server.go:288 +0x38c
goroutine 434 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func1()
/go/src/github.com/cockroachdb/cockroach/util/net.go:47 +0x47
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201af020)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 393 [select, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*server).Gossip(0xc8202d23c0, 0x7f45289b1ce0, 0xc82028c410, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:134 +0xa0f
github.com/cockroachdb/cockroach/gossip._Gossip_Gossip_Handler(0xbc9f60, 0xc8202d23c0, 0x7f45289b1c98, 0xc82016e180, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip.pb.go:209 +0xd8
google.golang.org/grpc.(*Server).processStreamingRPC(0xc820222000, 0x7f45289b1b38, 0xc82048a240, 0xc82013a380, 0xc82000a220, 0x1221740, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:604 +0x47a
google.golang.org/grpc.(*Server).handleStream(0xc820222000, 0x7f45289b1b38, 0xc82048a240, 0xc82013a380, 0x0)
/go/src/google.golang.org/grpc/server.go:688 +0x114e
google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc82028c370, 0xc820222000, 0x7f45289b1b38, 0xc82048a240, 0xc82013a380)
/go/src/google.golang.org/grpc/server.go:350 +0xa0
created by google.golang.org/grpc.(*Server).serveStreams.func1
/go/src/google.golang.org/grpc/server.go:351 +0x9a
goroutine 386 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func1()
/go/src/github.com/cockroachdb/cockroach/util/net.go:47 +0x47
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8200ce7e0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 385 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/rpc.NewContext.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:104 +0x57
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201aeee0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 370 [IO wait, 9 minutes]:
net.runtime_pollWait(0x7f45289b14e0, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc82006a1b0, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc82006a1b0, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).accept(0xc82006a150, 0x0, 0x7f45289b18b8, 0xc8200ed7e0)
/usr/local/go/src/net/fd_unix.go:426 +0x27c
net.(*TCPListener).AcceptTCP(0xc820158000, 0xc820040ea8, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:254 +0x4d
net.(*TCPListener).Accept(0xc820158000, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:264 +0x3d
google.golang.org/grpc.(*Server).Serve(0xc820222000, 0x7f45289b0538, 0xc820158000, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:279 +0x1cf
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func2()
/go/src/github.com/cockroachdb/cockroach/util/net.go:52 +0x3f
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201502e0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 375 [IO wait]:
net.runtime_pollWait(0x7f45289b0ee0, 0x72, 0xc820aec000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc82006a920, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc82006a920, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc82006a8c0, 0xc820aec000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc8201580e0, 0xc820aec000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc8202d2600)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc8202d2600, 0xc820afc038, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc8202d2600, 0xc820afc038, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc8202d2600, 0xc820afc038, 0x9, 0x9, 0xc8202b0900, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc820afc038, 0x9, 0x9, 0x7f4528966238, 0xc8202d2600, 0x20000000, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc820afc000, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc82048e9f0, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Server).HandleStreams(0xc820222120, 0xc82048ea80)
/go/src/google.golang.org/grpc/transport/http2_server.go:243 +0x646
google.golang.org/grpc.(*Server).serveStreams(0xc820222000, 0x7f45289b1b38, 0xc820222120)
/go/src/google.golang.org/grpc/server.go:352 +0x159
google.golang.org/grpc.(*Server).serveNewHTTP2Transport(0xc820222000, 0x7f4528928800, 0xc8201580e0, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:339 +0x49d
google.golang.org/grpc.(*Server).handleRawConn(0xc820222000, 0x7f4528928800, 0xc8201580e0)
/go/src/google.golang.org/grpc/server.go:316 +0x4ee
created by google.golang.org/grpc.(*Server).Serve
/go/src/google.golang.org/grpc/server.go:288 +0x38c
goroutine 381 [select, 9 minutes]:
google.golang.org/grpc.(*Conn).transportMonitor(0xc820afe000)
/go/src/google.golang.org/grpc/clientconn.go:547 +0x1d3
created by google.golang.org/grpc.NewConn
/go/src/google.golang.org/grpc/clientconn.go:346 +0x49f
goroutine 304 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/rpc.NewContext.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:104 +0x57
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc820150180)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 376 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Server).controller(0xc820222120)
/go/src/google.golang.org/grpc/transport/http2_server.go:652 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Server
/go/src/google.golang.org/grpc/transport/http2_server.go:134 +0x84f
goroutine 305 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func1()
/go/src/github.com/cockroachdb/cockroach/util/net.go:47 +0x47
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201502c0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 387 [IO wait, 9 minutes]:
net.runtime_pollWait(0x7f45289b0fa0, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc82020a060, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc82020a060, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).accept(0xc82020a000, 0x0, 0x7f45289b18b8, 0xc8200cef60)
/usr/local/go/src/net/fd_unix.go:426 +0x27c
net.(*TCPListener).AcceptTCP(0xc820034000, 0x454730, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:254 +0x4d
net.(*TCPListener).Accept(0xc820034000, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:264 +0x3d
google.golang.org/grpc.(*Server).Serve(0xc82048a000, 0x7f45289b0538, 0xc820034000, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:279 +0x1cf
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func2()
/go/src/github.com/cockroachdb/cockroach/util/net.go:52 +0x3f
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8200ce800)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 383 [select, 9 minutes]:
google.golang.org/grpc.NewClientStream.func1(0x7f4528966288, 0xc82012e000, 0xc8202a81c0, 0xc820098f00)
/go/src/google.golang.org/grpc/stream.go:151 +0x258
created by google.golang.org/grpc.NewClientStream
/go/src/google.golang.org/grpc/stream.go:159 +0xab2
goroutine 392 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Server).controller(0xc82048a240)
/go/src/google.golang.org/grpc/transport/http2_server.go:652 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Server
/go/src/google.golang.org/grpc/transport/http2_server.go:134 +0x84f
goroutine 373 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*server).start.func3()
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:298 +0x5c
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc82000a3c0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 394 [select, 9 minutes]:
google.golang.org/grpc/transport.(*recvBufferReader).Read(0xc8201d1cb0, 0xc8200cf1f0, 0x5, 0x5, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:141 +0x7e6
google.golang.org/grpc/transport.(*Stream).Read(0xc82013a380, 0xc8200cf1f0, 0x5, 0x5, 0x1b, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:294 +0x71
io.ReadAtLeast(0x7f4528966420, 0xc82013a380, 0xc8200cf1f0, 0x5, 0x5, 0x5, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966420, 0xc82013a380, 0xc8200cf1f0, 0x5, 0x5, 0xc82029fa18, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
google.golang.org/grpc.(*parser).recvMsg(0xc8200cf1e0, 0x7, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:216 +0xb9
google.golang.org/grpc.recv(0xc8200cf1e0, 0x7f45289b0478, 0x1268c48, 0xc82013a380, 0x0, 0x0, 0xbf67c0, 0xc8202646c0, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:297 +0x45
google.golang.org/grpc.(*serverStream).RecvMsg(0xc82016e180, 0xbf67c0, 0xc8202646c0, 0x0, 0x0)
/go/src/google.golang.org/grpc/stream.go:413 +0xe4
github.com/cockroachdb/cockroach/gossip.(*gossipGossipServer).Recv(0xc82028c410, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip.pb.go:228 +0x7e
github.com/cockroachdb/cockroach/gossip.(Gossip_GossipServer).Recv-fm(0xc8202d23c8, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:101 +0x40
github.com/cockroachdb/cockroach/gossip.(*server).gossipReceiver(0xc8202d23c0, 0xc820034058, 0xc8201d1ef0, 0xc82029ff50, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:227 +0x747
github.com/cockroachdb/cockroach/gossip.(*server).Gossip.func3.1()
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:101 +0x8b
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc820264640)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 309 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/rpc.NewContext.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:104 +0x57
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201721c0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 363 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Client).controller(0xc8201e6000)
/go/src/google.golang.org/grpc/transport/http2_client.go:869 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:201 +0x15c2
goroutine 362 [IO wait]:
net.runtime_pollWait(0x7f45289b12a0, 0x72, 0xc820268000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc82020a7d0, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc82020a7d0, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc82020a770, 0xc820268000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc820496000, 0xc820268000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc8201ba300)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc8201ba300, 0xc8200d60f8, 0x9, 0x9, 0xc81ffe298d, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc8201ba300, 0xc8200d60f8, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc8201ba300, 0xc8200d60f8, 0x9, 0x9, 0xc820396ab8, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc8200d60f8, 0x9, 0x9, 0x7f4528966238, 0xc8201ba300, 0x0, 0xc800000000, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc8200d60c0, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc8200e0090, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Client).reader(0xc8201e6000)
/go/src/google.golang.org/grpc/transport/http2_client.go:791 +0x109
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:174 +0xd21
goroutine 316 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func1()
/go/src/github.com/cockroachdb/cockroach/util/net.go:47 +0x47
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc820172ec0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 418 [IO wait]:
net.runtime_pollWait(0x7f45289b0ca0, 0x72, 0xc82025c000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc8200e34f0, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8200e34f0, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc8200e3490, 0xc82025c000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc8200f0280, 0xc82025c000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc8200e8d80)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc8200e8d80, 0xc820468038, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc8200e8d80, 0xc820468038, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc8200e8d80, 0xc820468038, 0x9, 0x9, 0xc820232900, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc820468038, 0x9, 0x9, 0x7f4528966238, 0xc8200e8d80, 0x20000000, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc820468000, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc8201f0060, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Server).HandleStreams(0xc8202d03f0, 0xc8201f0180)
/go/src/google.golang.org/grpc/transport/http2_server.go:243 +0x646
google.golang.org/grpc.(*Server).serveStreams(0xc820222000, 0x7f45289b1b38, 0xc8202d03f0)
/go/src/google.golang.org/grpc/server.go:352 +0x159
google.golang.org/grpc.(*Server).serveNewHTTP2Transport(0xc820222000, 0x7f4528928800, 0xc8200f0280, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:339 +0x49d
google.golang.org/grpc.(*Server).handleRawConn(0xc820222000, 0x7f4528928800, 0xc8200f0280)
/go/src/google.golang.org/grpc/server.go:316 +0x4ee
created by google.golang.org/grpc.(*Server).Serve
/go/src/google.golang.org/grpc/server.go:288 +0x38c
goroutine 419 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Server).controller(0xc8202d03f0)
/go/src/google.golang.org/grpc/transport/http2_server.go:652 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Server
/go/src/google.golang.org/grpc/transport/http2_server.go:134 +0x84f
goroutine 320 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*server).start.func3()
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:298 +0x5c
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc820172780)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 364 [select, 9 minutes]:
google.golang.org/grpc.(*Conn).transportMonitor(0xc8201fa000)
/go/src/google.golang.org/grpc/clientconn.go:547 +0x1d3
created by google.golang.org/grpc.NewConn
/go/src/google.golang.org/grpc/clientconn.go:346 +0x49f
goroutine 365 [select]:
github.com/cockroachdb/cockroach/rpc.(*Context).runHeartbeat(0xc8202d2240, 0xc8200d2210, 0xc8201f61b0, 0xf, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:231 +0x649
github.com/cockroachdb/cockroach/rpc.(*Context).GRPCDial.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:171 +0x66
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8200e0750)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 366 [select, 9 minutes]:
google.golang.org/grpc.NewClientStream.func1(0x7f4528966288, 0xc8201e6000, 0xc8201801c0, 0xc8201d2000)
/go/src/google.golang.org/grpc/stream.go:151 +0x258
created by google.golang.org/grpc.NewClientStream
/go/src/google.golang.org/grpc/stream.go:159 +0xab2
goroutine 367 [IO wait]:
net.runtime_pollWait(0x7f45289b1120, 0x72, 0xc820ae4000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc8204244c0, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8204244c0, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc820424460, 0xc820ae4000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc820496018, 0xc820ae4000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc8201ba360)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc8201ba360, 0xc8200d61b8, 0x9, 0x9, 0xc81ffd6f33, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc8201ba360, 0xc8200d61b8, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc8201ba360, 0xc8200d61b8, 0x9, 0x9, 0xc820396f08, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc8200d61b8, 0x9, 0x9, 0x7f4528966238, 0xc8201ba360, 0x0, 0xc800000000, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc8200d6180, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc8201f5650, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Client).reader(0xc8201e61e0)
/go/src/google.golang.org/grpc/transport/http2_client.go:791 +0x109
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:174 +0xd21
goroutine 368 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Client).controller(0xc8201e61e0)
/go/src/google.golang.org/grpc/transport/http2_client.go:869 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:201 +0x15c2
goroutine 369 [select, 9 minutes]:
google.golang.org/grpc.(*Conn).transportMonitor(0xc8201e60f0)
/go/src/google.golang.org/grpc/clientconn.go:547 +0x1d3
created by google.golang.org/grpc.NewConn
/go/src/google.golang.org/grpc/clientconn.go:346 +0x49f
goroutine 402 [select]:
github.com/cockroachdb/cockroach/rpc.(*Context).runHeartbeat(0xc8200e8180, 0xc82012a0b0, 0xc8201f61b0, 0xf, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:231 +0x649
github.com/cockroachdb/cockroach/rpc.(*Context).GRPCDial.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:171 +0x66
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201f5710)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 403 [select, 9 minutes]:
google.golang.org/grpc.NewClientStream.func1(0x7f4528966288, 0xc8201e61e0, 0xc820180380, 0xc8201d2280)
/go/src/google.golang.org/grpc/stream.go:151 +0x258
created by google.golang.org/grpc.NewClientStream
/go/src/google.golang.org/grpc/stream.go:159 +0xab2
goroutine 404 [IO wait]:
net.runtime_pollWait(0x7f45289b0e20, 0x72, 0xc820234000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc820424840, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc820424840, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc8204247e0, 0xc820234000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc820496030, 0xc820234000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc8201baa80)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc8201baa80, 0xc8200d6278, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc8201baa80, 0xc8200d6278, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc8201baa80, 0xc8200d6278, 0x9, 0x9, 0xc8202d4d80, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc8200d6278, 0x9, 0x9, 0x7f4528966238, 0xc8201baa80, 0x20000000, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc8200d6240, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc8201f5b00, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Server).HandleStreams(0xc8201341b0, 0xc8201f5b60)
/go/src/google.golang.org/grpc/transport/http2_server.go:243 +0x646
google.golang.org/grpc.(*Server).serveStreams(0xc820222000, 0x7f45289b1b38, 0xc8201341b0)
/go/src/google.golang.org/grpc/server.go:352 +0x159
google.golang.org/grpc.(*Server).serveNewHTTP2Transport(0xc820222000, 0x7f4528928800, 0xc820496030, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:339 +0x49d
google.golang.org/grpc.(*Server).handleRawConn(0xc820222000, 0x7f4528928800, 0xc820496030)
/go/src/google.golang.org/grpc/server.go:316 +0x4ee
created by google.golang.org/grpc.(*Server).Serve
/go/src/google.golang.org/grpc/server.go:288 +0x38c
goroutine 405 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Server).controller(0xc8201341b0)
/go/src/google.golang.org/grpc/transport/http2_server.go:652 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Server
/go/src/google.golang.org/grpc/transport/http2_server.go:134 +0x84f
goroutine 406 [select, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*server).Gossip(0xc8202d23c0, 0x7f45289b1ce0, 0xc820110380, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:134 +0xa0f
github.com/cockroachdb/cockroach/gossip._Gossip_Gossip_Handler(0xbc9f60, 0xc8202d23c0, 0x7f45289b1c98, 0xc820466000, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip.pb.go:209 +0xd8
google.golang.org/grpc.(*Server).processStreamingRPC(0xc820222000, 0x7f45289b1b38, 0xc8201341b0, 0xc820180540, 0xc82000a220, 0x1221740, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:604 +0x47a
google.golang.org/grpc.(*Server).handleStream(0xc820222000, 0x7f45289b1b38, 0xc8201341b0, 0xc820180540, 0x0)
/go/src/google.golang.org/grpc/server.go:688 +0x114e
google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc8201102f0, 0xc820222000, 0x7f45289b1b38, 0xc8201341b0, 0xc820180540)
/go/src/google.golang.org/grpc/server.go:350 +0xa0
created by google.golang.org/grpc.(*Server).serveStreams.func1
/go/src/google.golang.org/grpc/server.go:351 +0x9a
goroutine 407 [select, 9 minutes]:
google.golang.org/grpc/transport.(*recvBufferReader).Read(0xc8201f5dd0, 0xc8200ed030, 0x5, 0x5, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:141 +0x7e6
google.golang.org/grpc/transport.(*Stream).Read(0xc820180540, 0xc8200ed030, 0x5, 0x5, 0x1b, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:294 +0x71
io.ReadAtLeast(0x7f4528966420, 0xc820180540, 0xc8200ed030, 0x5, 0x5, 0x5, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966420, 0xc820180540, 0xc8200ed030, 0x5, 0x5, 0xc82024ba18, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
google.golang.org/grpc.(*parser).recvMsg(0xc8200ed020, 0x7, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:216 +0xb9
google.golang.org/grpc.recv(0xc8200ed020, 0x7f45289b0478, 0x1268c48, 0xc820180540, 0x0, 0x0, 0xbf67c0, 0xc8202ab0c0, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:297 +0x45
google.golang.org/grpc.(*serverStream).RecvMsg(0xc820466000, 0xbf67c0, 0xc8202ab0c0, 0x0, 0x0)
/go/src/google.golang.org/grpc/stream.go:413 +0xe4
github.com/cockroachdb/cockroach/gossip.(*gossipGossipServer).Recv(0xc820110380, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip.pb.go:228 +0x7e
github.com/cockroachdb/cockroach/gossip.(Gossip_GossipServer).Recv-fm(0xc8202d23c8, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:101 +0x40
github.com/cockroachdb/cockroach/gossip.(*server).gossipReceiver(0xc8202d23c0, 0xc820496038, 0xc8201f43c0, 0xc82024bf50, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:227 +0x747
github.com/cockroachdb/cockroach/gossip.(*server).Gossip.func3.1()
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:101 +0x8b
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc820153ec0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 408 [IO wait]:
net.runtime_pollWait(0x7f45289b0d60, 0x72, 0xc82024c000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc820424f40, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc820424f40, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc820424ee0, 0xc82024c000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc820496040, 0xc82024c000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc8201bb140)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc8201bb140, 0xc8200d6338, 0x9, 0x9, 0xc81ffe2b5f, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc8201bb140, 0xc8200d6338, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc8201bb140, 0xc8200d6338, 0x9, 0x9, 0xc820393598, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc8200d6338, 0x9, 0x9, 0x7f4528966238, 0xc8201bb140, 0x0, 0xc800000000, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc8200d6300, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc8201f4960, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Client).reader(0xc8201e63c0)
/go/src/google.golang.org/grpc/transport/http2_client.go:791 +0x109
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:174 +0xd21
goroutine 409 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Client).controller(0xc8201e63c0)
/go/src/google.golang.org/grpc/transport/http2_client.go:869 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:201 +0x15c2
goroutine 410 [select, 9 minutes]:
google.golang.org/grpc.(*Conn).transportMonitor(0xc8201e62d0)
/go/src/google.golang.org/grpc/clientconn.go:547 +0x1d3
created by google.golang.org/grpc.NewConn
/go/src/google.golang.org/grpc/clientconn.go:346 +0x49f
goroutine 411 [select]:
github.com/cockroachdb/cockroach/rpc.(*Context).runHeartbeat(0xc8200e8300, 0xc82012a160, 0xc8201f61b0, 0xf, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:231 +0x649
github.com/cockroachdb/cockroach/rpc.(*Context).GRPCDial.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:171 +0x66
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201f5020)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 412 [select, 9 minutes]:
google.golang.org/grpc.NewClientStream.func1(0x7f4528966288, 0xc8201e63c0, 0xc820180620, 0xc8201d2640)
/go/src/google.golang.org/grpc/stream.go:151 +0x258
created by google.golang.org/grpc.NewClientStream
/go/src/google.golang.org/grpc/stream.go:159 +0xab2
goroutine 435 [IO wait, 9 minutes]:
net.runtime_pollWait(0x7f45289b0be0, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc82006b870, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc82006b870, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).accept(0xc82006b810, 0x0, 0x7f45289b18b8, 0xc8201af1e0)
/usr/local/go/src/net/fd_unix.go:426 +0x27c
net.(*TCPListener).AcceptTCP(0xc820158118, 0x454730, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:254 +0x4d
net.(*TCPListener).Accept(0xc820158118, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:264 +0x3d
google.golang.org/grpc.(*Server).Serve(0xc820222240, 0x7f45289b0538, 0xc820158118, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:279 +0x1cf
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func2()
/go/src/github.com/cockroachdb/cockroach/util/net.go:52 +0x3f
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201af040)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 414 [IO wait]:
net.runtime_pollWait(0x7f45289b0b20, 0x72, 0xc82030c000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc8200e3a30, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8200e3a30, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc8200e39d0, 0xc82030c000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc820496050, 0xc82030c000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc8201bb500)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc8201bb500, 0xc8200d63f8, 0x9, 0x9, 0xc81ffe2b4a, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc8201bb500, 0xc8200d63f8, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc8201bb500, 0xc8200d63f8, 0x9, 0x9, 0xc8203acf68, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc8200d63f8, 0x9, 0x9, 0x7f4528966238, 0xc8201bb500, 0x0, 0xc800000000, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc8200d63c0, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc820177380, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Client).reader(0xc8201e64b0)
/go/src/google.golang.org/grpc/transport/http2_client.go:791 +0x109
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:174 +0xd21
goroutine 438 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*server).start.func3()
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:298 +0x5c
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201af1c0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 422 [select, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*client).gossip(0xc8201f08d0, 0xc8201b2000, 0x7f45289b19c8, 0xc8200f0298, 0xc82006a0e0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/client.go:239 +0x619
github.com/cockroachdb/cockroach/gossip.(*client).start.func1()
/go/src/github.com/cockroachdb/cockroach/gossip/client.go:80 +0x2c7
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201f09c0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 423 [select, 9 minutes]:
google.golang.org/grpc.(*Conn).transportMonitor(0xc820afe0f0)
/go/src/google.golang.org/grpc/clientconn.go:547 +0x1d3
google.golang.org/grpc.NewConn.func1(0xc820afe0f0)
/go/src/google.golang.org/grpc/clientconn.go:355 +0x1b5
created by google.golang.org/grpc.NewConn
/go/src/google.golang.org/grpc/clientconn.go:356 +0x4e3
goroutine 424 [select]:
github.com/cockroachdb/cockroach/rpc.(*Context).runHeartbeat(0xc8202d3080, 0xc8201b1130, 0xc8201f61b0, 0xf, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:231 +0x649
github.com/cockroachdb/cockroach/rpc.(*Context).GRPCDial.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:171 +0x66
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201f0a50)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 415 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Client).controller(0xc8201e64b0)
/go/src/google.golang.org/grpc/transport/http2_client.go:869 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:201 +0x15c2
goroutine 416 [select, 9 minutes]:
google.golang.org/grpc.NewClientStream.func1(0x7f4528966288, 0xc8201e64b0, 0xc8201808c0, 0xc8201d2a00)
/go/src/google.golang.org/grpc/stream.go:151 +0x258
created by google.golang.org/grpc.NewClientStream
/go/src/google.golang.org/grpc/stream.go:159 +0xab2
goroutine 451 [IO wait]:
net.runtime_pollWait(0x7f45289b0a60, 0x72, 0xc820324000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc820425640, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc820425640, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc8204255e0, 0xc820324000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc820496068, 0xc820324000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc8201bb9e0)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc8201bb9e0, 0xc8200d6578, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc8201bb9e0, 0xc8200d6578, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc8201bb9e0, 0xc8200d6578, 0x9, 0x9, 0xc8202ae000, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc8200d6578, 0x9, 0x9, 0x7f4528966238, 0xc8201bb9e0, 0x20000000, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc8200d6540, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc8201777d0, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Server).HandleStreams(0xc820134480, 0xc820177860)
/go/src/google.golang.org/grpc/transport/http2_server.go:243 +0x646
google.golang.org/grpc.(*Server).serveStreams(0xc820222000, 0x7f45289b1b38, 0xc820134480)
/go/src/google.golang.org/grpc/server.go:352 +0x159
google.golang.org/grpc.(*Server).serveNewHTTP2Transport(0xc820222000, 0x7f4528928800, 0xc820496068, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:339 +0x49d
google.golang.org/grpc.(*Server).handleRawConn(0xc820222000, 0x7f4528928800, 0xc820496068)
/go/src/google.golang.org/grpc/server.go:316 +0x4ee
created by google.golang.org/grpc.(*Server).Serve
/go/src/google.golang.org/grpc/server.go:288 +0x38c
goroutine 450 [select, 9 minutes]:
google.golang.org/grpc/transport.(*recvBufferReader).Read(0xc8201775c0, 0xc8200ed670, 0x5, 0x5, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:141 +0x7e6
google.golang.org/grpc/transport.(*Stream).Read(0xc8201808c0, 0xc8200ed670, 0x5, 0x5, 0x49d424, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:294 +0x71
io.ReadAtLeast(0x7f4528966420, 0xc8201808c0, 0xc8200ed670, 0x5, 0x5, 0x5, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966420, 0xc8201808c0, 0xc8200ed670, 0x5, 0x5, 0xc8200eed80, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
google.golang.org/grpc.(*parser).recvMsg(0xc8200ed660, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:216 +0xb9
google.golang.org/grpc.recv(0xc8200ed660, 0x7f45289b0478, 0x1268c48, 0xc8201808c0, 0x0, 0x0, 0xbedd80, 0xc8201f9180, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:297 +0x45
google.golang.org/grpc.(*clientStream).RecvMsg(0xc8201d2a00, 0xbedd80, 0xc8201f9180, 0x0, 0x0)
/go/src/google.golang.org/grpc/stream.go:234 +0xac
github.com/cockroachdb/cockroach/gossip.(*gossipGossipClient).Recv(0xc820110690, 0xc8201b2000, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip.pb.go:192 +0x7e
github.com/cockroachdb/cockroach/gossip.(*client).gossip.func2.1(0x7f4528966498, 0xc820110690, 0xc8201f08d0, 0xc8201b2000, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/client.go:227 +0x37
github.com/cockroachdb/cockroach/gossip.(*client).gossip.func2()
/go/src/github.com/cockroachdb/cockroach/gossip/client.go:235 +0x51
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc820177680)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 452 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Server).controller(0xc820134480)
/go/src/google.golang.org/grpc/transport/http2_server.go:652 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Server
/go/src/google.golang.org/grpc/transport/http2_server.go:134 +0x84f
goroutine 454 [select, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*server).Gossip(0xc8202d23c0, 0x7f45289b1ce0, 0xc820110860, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:134 +0xa0f
github.com/cockroachdb/cockroach/gossip._Gossip_Gossip_Handler(0xbc9f60, 0xc8202d23c0, 0x7f45289b1c98, 0xc820466100, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip.pb.go:209 +0xd8
google.golang.org/grpc.(*Server).processStreamingRPC(0xc820222000, 0x7f45289b1b38, 0xc820134480, 0xc820180a80, 0xc82000a220, 0x1221740, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:604 +0x47a
google.golang.org/grpc.(*Server).handleStream(0xc820222000, 0x7f45289b1b38, 0xc820134480, 0xc820180a80, 0x0)
/go/src/google.golang.org/grpc/server.go:688 +0x114e
google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc820110750, 0xc820222000, 0x7f45289b1b38, 0xc820134480, 0xc820180a80)
/go/src/google.golang.org/grpc/server.go:350 +0xa0
created by google.golang.org/grpc.(*Server).serveStreams.func1
/go/src/google.golang.org/grpc/server.go:351 +0x9a
goroutine 455 [select, 9 minutes]:
google.golang.org/grpc/transport.(*recvBufferReader).Read(0xc820177c20, 0xc8200ed9f0, 0x5, 0x5, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:141 +0x7e6
google.golang.org/grpc/transport.(*Stream).Read(0xc820180a80, 0xc8200ed9f0, 0x5, 0x5, 0x29, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:294 +0x71
io.ReadAtLeast(0x7f4528966420, 0xc820180a80, 0xc8200ed9f0, 0x5, 0x5, 0x5, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966420, 0xc820180a80, 0xc8200ed9f0, 0x5, 0x5, 0xc82031da18, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
google.golang.org/grpc.(*parser).recvMsg(0xc8200ed9e0, 0x7, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:216 +0xb9
google.golang.org/grpc.recv(0xc8200ed9e0, 0x7f45289b0478, 0x1268c48, 0xc820180a80, 0x0, 0x0, 0xbf67c0, 0xc8200ef940, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:297 +0x45
google.golang.org/grpc.(*serverStream).RecvMsg(0xc820466100, 0xbf67c0, 0xc8200ef940, 0x0, 0x0)
/go/src/google.golang.org/grpc/stream.go:413 +0xe4
github.com/cockroachdb/cockroach/gossip.(*gossipGossipServer).Recv(0xc820110860, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip.pb.go:228 +0x7e
github.com/cockroachdb/cockroach/gossip.(Gossip_GossipServer).Recv-fm(0xc8202d23c8, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:101 +0x40
github.com/cockroachdb/cockroach/gossip.(*server).gossipReceiver(0xc8202d23c0, 0xc820496070, 0xc820177e30, 0xc82031df50, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:227 +0x747
github.com/cockroachdb/cockroach/gossip.(*server).Gossip.func3.1()
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:101 +0x8b
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8200ef700)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
rax 0xca
rbx 0x0
rcx 0x4612a3
rdx 0x0
rdi 0x1248c08
rsi 0x0
rbp 0x1
rsp 0x7ffecfcf17a0
r8 0x0
r9 0x0
r10 0x0
r11 0x286
r12 0xc8201f83c0
r13 0xc
r14 0xc69cd0
r15 0x8
rip 0x4612a1
rflags 0x286
cs 0x33
fs 0x0
gs 0x0
ERROR: exit status 2
```
Run Details:
```
81 runs so far, 0 failures, over 5s
176 runs so far, 0 failures, over 10s
268 runs so far, 0 failures, over 15s
360 runs so far, 0 failures, over 20s
449 runs so far, 0 failures, over 25s
544 runs so far, 0 failures, over 30s
636 runs so far, 0 failures, over 35s
730 runs so far, 0 failures, over 40s
828 runs so far, 0 failures, over 45s
922 runs so far, 0 failures, over 50s
1019 runs so far, 0 failures, over 55s
1109 runs so far, 0 failures, over 1m0s
1206 runs so far, 0 failures, over 1m5s
1296 runs so far, 0 failures, over 1m10s
1389 runs so far, 0 failures, over 1m15s
1483 runs so far, 0 failures, over 1m20s
1576 runs so far, 0 failures, over 1m25s
1673 runs so far, 0 failures, over 1m30s
1761 runs so far, 0 failures, over 1m35s
1853 runs so far, 0 failures, over 1m40s
1951 runs so far, 0 failures, over 1m45s
2040 runs so far, 0 failures, over 1m50s
2127 runs so far, 0 failures, over 1m55s
2221 runs so far, 0 failures, over 2m0s
2318 runs so far, 0 failures, over 2m5s
2410 runs so far, 0 failures, over 2m10s
2501 runs so far, 0 failures, over 2m15s
2594 runs so far, 0 failures, over 2m20s
2688 runs so far, 0 failures, over 2m25s
2787 runs so far, 0 failures, over 2m30s
2880 runs so far, 0 failures, over 2m35s
2973 runs so far, 0 failures, over 2m40s
3068 runs so far, 0 failures, over 2m45s
3159 runs so far, 0 failures, over 2m50s
3253 runs so far, 0 failures, over 2m55s
3345 runs so far, 0 failures, over 3m0s
3437 runs so far, 0 failures, over 3m5s
3532 runs so far, 0 failures, over 3m10s
3626 runs so far, 0 failures, over 3m15s
3719 runs so far, 0 failures, over 3m20s
3813 runs so far, 0 failures, over 3m25s
3904 runs so far, 0 failures, over 3m30s
3994 runs so far, 0 failures, over 3m35s
4086 runs so far, 0 failures, over 3m40s
4178 runs so far, 0 failures, over 3m45s
4270 runs so far, 0 failures, over 3m50s
4361 runs so far, 0 failures, over 3m55s
4451 runs so far, 0 failures, over 4m0s
4538 runs so far, 0 failures, over 4m5s
4630 runs so far, 0 failures, over 4m10s
4726 runs so far, 0 failures, over 4m15s
4815 runs so far, 0 failures, over 4m20s
4906 runs so far, 0 failures, over 4m25s
4999 runs so far, 0 failures, over 4m30s
5091 runs so far, 0 failures, over 4m35s
5183 runs so far, 0 failures, over 4m40s
5273 runs so far, 0 failures, over 4m45s
5364 runs so far, 0 failures, over 4m50s
5455 runs so far, 0 failures, over 4m55s
5547 runs so far, 0 failures, over 5m0s
5638 runs so far, 0 failures, over 5m5s
5730 runs so far, 0 failures, over 5m10s
5818 runs so far, 0 failures, over 5m15s
5912 runs so far, 0 failures, over 5m20s
6004 runs so far, 0 failures, over 5m25s
6095 runs so far, 0 failures, over 5m30s
6185 runs so far, 0 failures, over 5m35s
6280 runs so far, 0 failures, over 5m40s
6369 runs so far, 0 failures, over 5m45s
6461 runs so far, 0 failures, over 5m50s
6548 runs so far, 0 failures, over 5m55s
6642 runs so far, 0 failures, over 6m0s
6735 runs so far, 0 failures, over 6m5s
6821 runs so far, 0 failures, over 6m10s
6913 runs so far, 0 failures, over 6m15s
7005 runs so far, 0 failures, over 6m20s
7098 runs so far, 0 failures, over 6m25s
7192 runs so far, 0 failures, over 6m30s
7281 runs so far, 0 failures, over 6m35s
7373 runs so far, 0 failures, over 6m40s
7465 runs so far, 0 failures, over 6m45s
7557 runs so far, 0 failures, over 6m50s
7652 runs so far, 0 failures, over 6m55s
7743 runs so far, 0 failures, over 7m0s
7835 runs so far, 0 failures, over 7m5s
7930 runs so far, 0 failures, over 7m10s
8019 runs so far, 0 failures, over 7m15s
8111 runs so far, 0 failures, over 7m20s
8200 runs so far, 0 failures, over 7m25s
8293 runs so far, 0 failures, over 7m30s
8384 runs so far, 0 failures, over 7m35s
8474 runs so far, 0 failures, over 7m40s
8565 runs so far, 0 failures, over 7m45s
8654 runs so far, 0 failures, over 7m50s
8747 runs so far, 0 failures, over 7m55s
8837 runs so far, 0 failures, over 8m0s
8928 runs so far, 0 failures, over 8m5s
9022 runs so far, 0 failures, over 8m10s
9110 runs so far, 0 failures, over 8m15s
9199 runs so far, 0 failures, over 8m20s
9295 runs so far, 0 failures, over 8m25s
9387 runs so far, 0 failures, over 8m30s
9476 runs so far, 0 failures, over 8m35s
9565 runs so far, 0 failures, over 8m40s
9655 runs so far, 0 failures, over 8m45s
9750 runs so far, 0 failures, over 8m50s
9843 runs so far, 0 failures, over 8m55s
9939 runs so far, 0 failures, over 9m0s
10028 runs so far, 0 failures, over 9m5s
10118 runs so far, 0 failures, over 9m10s
10210 runs so far, 0 failures, over 9m15s
10304 runs so far, 0 failures, over 9m20s
10394 runs so far, 0 failures, over 9m25s
10484 runs so far, 0 failures, over 9m30s
10574 runs so far, 0 failures, over 9m35s
10667 runs so far, 0 failures, over 9m40s
10758 runs so far, 0 failures, over 9m45s
10852 runs so far, 0 failures, over 9m50s
10947 runs so far, 0 failures, over 9m55s
11037 runs so far, 0 failures, over 10m0s
11128 runs so far, 0 failures, over 10m5s
11221 runs so far, 0 failures, over 10m10s
11316 runs so far, 0 failures, over 10m15s
11405 runs so far, 0 failures, over 10m20s
11498 runs so far, 0 failures, over 10m25s
11589 runs so far, 0 failures, over 10m30s
11685 runs so far, 0 failures, over 10m35s
11781 runs so far, 0 failures, over 10m40s
11870 runs so far, 0 failures, over 10m45s
11965 runs so far, 0 failures, over 10m50s
12060 runs so far, 0 failures, over 10m55s
12150 runs so far, 0 failures, over 11m0s
12240 runs so far, 0 failures, over 11m5s
12333 runs so far, 0 failures, over 11m10s
12422 runs so far, 0 failures, over 11m15s
12514 runs so far, 0 failures, over 11m20s
12606 runs so far, 0 failures, over 11m25s
12697 runs so far, 0 failures, over 11m30s
12792 runs so far, 0 failures, over 11m35s
12883 runs so far, 0 failures, over 11m40s
12975 runs so far, 0 failures, over 11m45s
13071 runs so far, 0 failures, over 11m50s
13165 runs so far, 0 failures, over 11m55s
13256 runs so far, 0 failures, over 12m0s
13345 runs so far, 0 failures, over 12m5s
13437 runs so far, 0 failures, over 12m10s
13528 runs so far, 0 failures, over 12m15s
13615 runs so far, 0 failures, over 12m20s
13704 runs so far, 0 failures, over 12m25s
13797 runs so far, 0 failures, over 12m30s
13891 runs so far, 0 failures, over 12m35s
13983 runs so far, 0 failures, over 12m40s
14074 runs so far, 0 failures, over 12m45s
14167 runs so far, 0 failures, over 12m50s
14254 runs so far, 0 failures, over 12m55s
14348 runs so far, 0 failures, over 13m0s
14444 runs so far, 0 failures, over 13m5s
14533 runs so far, 0 failures, over 13m10s
14626 runs completed, 1 failures, over 13m15s
FAIL
```
Please assign, take a look, and update the issue accordingly.
|
1.0
|
stress: failed test in cockroach/gossip/gossip.test: TestGossipNoForwardSelf - Binary: cockroach/static-tests.tar.gz sha: https://github.com/cockroachdb/cockroach/commits/2bb99ef73064de323277b965dba0c74e4c834413
Stress build found a failed test:
```
=== RUN TestGossipNoForwardSelf
W160528 03:49:32.816174 gossip/gossip.go:897 not connected to cluster; use --join to specify a connected node
W160528 03:49:32.818144 gossip/gossip.go:897 not connected to cluster; use --join to specify a connected node
W160528 03:49:32.819333 gossip/gossip.go:897 not connected to cluster; use --join to specify a connected node
W160528 03:49:32.820711 gossip/gossip.go:897 not connected to cluster; use --join to specify a connected node
W160528 03:49:32.824784 gossip/gossip.go:897 not connected to cluster; use --join to specify a connected node
I160528 03:49:32.828531 gossip/server.go:185 refusing gossip from node 4 (max 3 conns); forwarding to 5 ({tcp 127.0.0.1:43757})
SIGABRT: abort
PC=0x4612a1 m=0
goroutine 0 [idle]:
runtime.futex(0x1248c08, 0x0, 0x0, 0x0, 0x0, 0x12481b0, 0x0, 0x0, 0x40fc14, 0x1248c08, ...)
/usr/local/go/src/runtime/sys_linux_amd64.s:306 +0x21
runtime.futexsleep(0x1248c08, 0x0, 0xffffffffffffffff)
/usr/local/go/src/runtime/os1_linux.go:40 +0x53
runtime.notesleep(0x1248c08)
/usr/local/go/src/runtime/lock_futex.go:145 +0xa4
runtime.stopm()
/usr/local/go/src/runtime/proc.go:1538 +0x10b
runtime.findrunnable(0xc820015500, 0x0)
/usr/local/go/src/runtime/proc.go:1976 +0x739
runtime.schedule()
/usr/local/go/src/runtime/proc.go:2075 +0x24f
runtime.park_m(0xc8202d4a80)
/usr/local/go/src/runtime/proc.go:2140 +0x18b
runtime.mcall(0x7ffecfcf1940)
/usr/local/go/src/runtime/asm_amd64.s:233 +0x5b
goroutine 1 [chan receive, 9 minutes]:
testing.RunTests(0xdafea8, 0x1227d60, 0x21, 0x21, 0xc820186b01)
/usr/local/go/src/testing/testing.go:583 +0x8d2
testing.(*M).Run(0xc820042f08, 0x4098a3)
/usr/local/go/src/testing/testing.go:515 +0x81
main.main()
github.com/cockroachdb/cockroach/gossip/_test/_testmain.go:120 +0x117
goroutine 17 [syscall, 9 minutes, locked to thread]:
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1998 +0x1
goroutine 5 [chan receive]:
github.com/cockroachdb/cockroach/util/log.(*loggingT).flushDaemon(0x12480c0)
/go/src/github.com/cockroachdb/cockroach/util/log/clog.go:1011 +0x64
created by github.com/cockroachdb/cockroach/util/log.init.1
/go/src/github.com/cockroachdb/cockroach/util/log/clog.go:598 +0x8a
goroutine 308 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.TestGossipNoForwardSelf(0xc8202222d0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip_test.go:148 +0x809
testing.tRunner(0xc8202222d0, 0x1227e50)
/usr/local/go/src/testing/testing.go:473 +0x98
created by testing.RunTests
/usr/local/go/src/testing/testing.go:582 +0x892
goroutine 382 [select]:
github.com/cockroachdb/cockroach/rpc.(*Context).runHeartbeat(0xc8200102a0, 0xc8201b0fd0, 0xc8201f61b0, 0xf, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:231 +0x649
github.com/cockroachdb/cockroach/rpc.(*Context).GRPCDial.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:171 +0x66
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc82048f020)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 317 [IO wait, 9 minutes]:
net.runtime_pollWait(0x7f45289b1420, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc8200e2a00, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8200e2a00, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).accept(0xc8200e29a0, 0x0, 0x7f45289b18b8, 0xc8201727a0)
/usr/local/go/src/net/fd_unix.go:426 +0x27c
net.(*TCPListener).AcceptTCP(0xc8200f0230, 0x454730, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:254 +0x4d
net.(*TCPListener).Accept(0xc8200f0230, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:264 +0x3d
google.golang.org/grpc.(*Server).Serve(0xc8202d01b0, 0x7f45289b0538, 0xc8200f0230, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:279 +0x1cf
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func2()
/go/src/github.com/cockroachdb/cockroach/util/net.go:52 +0x3f
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc820172ee0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 380 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Client).controller(0xc82012e000)
/go/src/google.golang.org/grpc/transport/http2_client.go:869 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:201 +0x15c2
goroutine 379 [IO wait]:
net.runtime_pollWait(0x7f45289b1360, 0x72, 0xc820456000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc8200e3410, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8200e3410, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc8200e33b0, 0xc820456000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc8201580f0, 0xc820456000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc8202d2ba0)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc8202d2ba0, 0xc820afc0f8, 0x9, 0x9, 0xc81ffd6f48, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc8202d2ba0, 0xc820afc0f8, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc8202d2ba0, 0xc820afc0f8, 0x9, 0x9, 0xc820396e18, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc820afc0f8, 0x9, 0x9, 0x7f4528966238, 0xc8202d2ba0, 0x0, 0xc800000000, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc820afc0c0, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc82048ef60, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Client).reader(0xc82012e000)
/go/src/google.golang.org/grpc/transport/http2_client.go:791 +0x109
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:174 +0xd21
goroutine 314 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*server).start.func3()
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:298 +0x5c
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201724a0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 315 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/rpc.NewContext.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:104 +0x57
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc820172560)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 378 [select, 9 minutes]:
google.golang.org/grpc/transport.(*recvBufferReader).Read(0xc82048ec60, 0xc8201ae790, 0x5, 0x5, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:141 +0x7e6
google.golang.org/grpc/transport.(*Stream).Read(0xc8202a80e0, 0xc8201ae790, 0x5, 0x5, 0x1b, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:294 +0x71
io.ReadAtLeast(0x7f4528966420, 0xc8202a80e0, 0xc8201ae790, 0x5, 0x5, 0x5, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966420, 0xc8202a80e0, 0xc8201ae790, 0x5, 0x5, 0xc820299a18, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
google.golang.org/grpc.(*parser).recvMsg(0xc8201ae780, 0x7, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:216 +0xb9
google.golang.org/grpc.recv(0xc8201ae780, 0x7f45289b0478, 0x1268c48, 0xc8202a80e0, 0x0, 0x0, 0xbf67c0, 0xc8202aadc0, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:297 +0x45
google.golang.org/grpc.(*serverStream).RecvMsg(0xc820164000, 0xbf67c0, 0xc8202aadc0, 0x0, 0x0)
/go/src/google.golang.org/grpc/stream.go:413 +0xe4
github.com/cockroachdb/cockroach/gossip.(*gossipGossipServer).Recv(0xc8201f6460, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip.pb.go:228 +0x7e
github.com/cockroachdb/cockroach/gossip.(Gossip_GossipServer).Recv-fm(0xc8202d23c8, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:101 +0x40
github.com/cockroachdb/cockroach/gossip.(*server).gossipReceiver(0xc8202d23c0, 0xc8201580e8, 0xc82048ede0, 0xc820299f50, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:227 +0x747
github.com/cockroachdb/cockroach/gossip.(*server).Gossip.func3.1()
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:101 +0x8b
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8202aad80)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 390 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*server).start.func3()
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:298 +0x5c
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8200cef40)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 377 [select, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*server).Gossip(0xc8202d23c0, 0x7f45289b1ce0, 0xc8201f6460, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:134 +0xa0f
github.com/cockroachdb/cockroach/gossip._Gossip_Gossip_Handler(0xbc9f60, 0xc8202d23c0, 0x7f45289b1c98, 0xc820164000, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip.pb.go:209 +0xd8
google.golang.org/grpc.(*Server).processStreamingRPC(0xc820222000, 0x7f45289b1b38, 0xc820222120, 0xc8202a80e0, 0xc82000a220, 0x1221740, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:604 +0x47a
google.golang.org/grpc.(*Server).handleStream(0xc820222000, 0x7f45289b1b38, 0xc820222120, 0xc8202a80e0, 0x0)
/go/src/google.golang.org/grpc/server.go:688 +0x114e
google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc8201f6390, 0xc820222000, 0x7f45289b1b38, 0xc820222120, 0xc8202a80e0)
/go/src/google.golang.org/grpc/server.go:350 +0xa0
created by google.golang.org/grpc.(*Server).serveStreams.func1
/go/src/google.golang.org/grpc/server.go:351 +0x9a
goroutine 353 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/rpc.NewContext.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:104 +0x57
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8200ce7a0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 310 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func1()
/go/src/github.com/cockroachdb/cockroach/util/net.go:47 +0x47
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201720e0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 311 [IO wait, 9 minutes]:
net.runtime_pollWait(0x7f45289b1060, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc8200e2060, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8200e2060, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).accept(0xc8200e2000, 0x0, 0x7f45289b18b8, 0xc820172520)
/usr/local/go/src/net/fd_unix.go:426 +0x27c
net.(*TCPListener).AcceptTCP(0xc8200f0000, 0x454730, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:254 +0x4d
net.(*TCPListener).Accept(0xc8200f0000, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:264 +0x3d
google.golang.org/grpc.(*Server).Serve(0xc8202d0000, 0x7f45289b0538, 0xc8200f0000, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:279 +0x1cf
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func2()
/go/src/github.com/cockroachdb/cockroach/util/net.go:52 +0x3f
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201721e0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 391 [IO wait]:
net.runtime_pollWait(0x7f45289b11e0, 0x72, 0xc820108000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc82020a840, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc82020a840, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc82020a7e0, 0xc820108000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc820034050, 0xc820108000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc820010840)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc820010840, 0xc82043a278, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc820010840, 0xc82043a278, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc820010840, 0xc82043a278, 0x9, 0x9, 0xc82043de00, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc82043a278, 0x9, 0x9, 0x7f4528966238, 0xc820010840, 0x20000000, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc82043a240, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc8201d16e0, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Server).HandleStreams(0xc82048a240, 0xc8201d1800)
/go/src/google.golang.org/grpc/transport/http2_server.go:243 +0x646
google.golang.org/grpc.(*Server).serveStreams(0xc820222000, 0x7f45289b1b38, 0xc82048a240)
/go/src/google.golang.org/grpc/server.go:352 +0x159
google.golang.org/grpc.(*Server).serveNewHTTP2Transport(0xc820222000, 0x7f4528928800, 0xc820034050, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:339 +0x49d
google.golang.org/grpc.(*Server).handleRawConn(0xc820222000, 0x7f4528928800, 0xc820034050)
/go/src/google.golang.org/grpc/server.go:316 +0x4ee
created by google.golang.org/grpc.(*Server).Serve
/go/src/google.golang.org/grpc/server.go:288 +0x38c
goroutine 434 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func1()
/go/src/github.com/cockroachdb/cockroach/util/net.go:47 +0x47
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201af020)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 393 [select, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*server).Gossip(0xc8202d23c0, 0x7f45289b1ce0, 0xc82028c410, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:134 +0xa0f
github.com/cockroachdb/cockroach/gossip._Gossip_Gossip_Handler(0xbc9f60, 0xc8202d23c0, 0x7f45289b1c98, 0xc82016e180, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip.pb.go:209 +0xd8
google.golang.org/grpc.(*Server).processStreamingRPC(0xc820222000, 0x7f45289b1b38, 0xc82048a240, 0xc82013a380, 0xc82000a220, 0x1221740, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:604 +0x47a
google.golang.org/grpc.(*Server).handleStream(0xc820222000, 0x7f45289b1b38, 0xc82048a240, 0xc82013a380, 0x0)
/go/src/google.golang.org/grpc/server.go:688 +0x114e
google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc82028c370, 0xc820222000, 0x7f45289b1b38, 0xc82048a240, 0xc82013a380)
/go/src/google.golang.org/grpc/server.go:350 +0xa0
created by google.golang.org/grpc.(*Server).serveStreams.func1
/go/src/google.golang.org/grpc/server.go:351 +0x9a
goroutine 386 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func1()
/go/src/github.com/cockroachdb/cockroach/util/net.go:47 +0x47
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8200ce7e0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 385 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/rpc.NewContext.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:104 +0x57
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201aeee0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 370 [IO wait, 9 minutes]:
net.runtime_pollWait(0x7f45289b14e0, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc82006a1b0, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc82006a1b0, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).accept(0xc82006a150, 0x0, 0x7f45289b18b8, 0xc8200ed7e0)
/usr/local/go/src/net/fd_unix.go:426 +0x27c
net.(*TCPListener).AcceptTCP(0xc820158000, 0xc820040ea8, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:254 +0x4d
net.(*TCPListener).Accept(0xc820158000, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:264 +0x3d
google.golang.org/grpc.(*Server).Serve(0xc820222000, 0x7f45289b0538, 0xc820158000, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:279 +0x1cf
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func2()
/go/src/github.com/cockroachdb/cockroach/util/net.go:52 +0x3f
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201502e0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 375 [IO wait]:
net.runtime_pollWait(0x7f45289b0ee0, 0x72, 0xc820aec000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc82006a920, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc82006a920, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc82006a8c0, 0xc820aec000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc8201580e0, 0xc820aec000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc8202d2600)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc8202d2600, 0xc820afc038, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc8202d2600, 0xc820afc038, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc8202d2600, 0xc820afc038, 0x9, 0x9, 0xc8202b0900, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc820afc038, 0x9, 0x9, 0x7f4528966238, 0xc8202d2600, 0x20000000, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc820afc000, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc82048e9f0, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Server).HandleStreams(0xc820222120, 0xc82048ea80)
/go/src/google.golang.org/grpc/transport/http2_server.go:243 +0x646
google.golang.org/grpc.(*Server).serveStreams(0xc820222000, 0x7f45289b1b38, 0xc820222120)
/go/src/google.golang.org/grpc/server.go:352 +0x159
google.golang.org/grpc.(*Server).serveNewHTTP2Transport(0xc820222000, 0x7f4528928800, 0xc8201580e0, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:339 +0x49d
google.golang.org/grpc.(*Server).handleRawConn(0xc820222000, 0x7f4528928800, 0xc8201580e0)
/go/src/google.golang.org/grpc/server.go:316 +0x4ee
created by google.golang.org/grpc.(*Server).Serve
/go/src/google.golang.org/grpc/server.go:288 +0x38c
goroutine 381 [select, 9 minutes]:
google.golang.org/grpc.(*Conn).transportMonitor(0xc820afe000)
/go/src/google.golang.org/grpc/clientconn.go:547 +0x1d3
created by google.golang.org/grpc.NewConn
/go/src/google.golang.org/grpc/clientconn.go:346 +0x49f
goroutine 304 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/rpc.NewContext.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:104 +0x57
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc820150180)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 376 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Server).controller(0xc820222120)
/go/src/google.golang.org/grpc/transport/http2_server.go:652 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Server
/go/src/google.golang.org/grpc/transport/http2_server.go:134 +0x84f
goroutine 305 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func1()
/go/src/github.com/cockroachdb/cockroach/util/net.go:47 +0x47
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201502c0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 387 [IO wait, 9 minutes]:
net.runtime_pollWait(0x7f45289b0fa0, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc82020a060, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc82020a060, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).accept(0xc82020a000, 0x0, 0x7f45289b18b8, 0xc8200cef60)
/usr/local/go/src/net/fd_unix.go:426 +0x27c
net.(*TCPListener).AcceptTCP(0xc820034000, 0x454730, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:254 +0x4d
net.(*TCPListener).Accept(0xc820034000, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:264 +0x3d
google.golang.org/grpc.(*Server).Serve(0xc82048a000, 0x7f45289b0538, 0xc820034000, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:279 +0x1cf
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func2()
/go/src/github.com/cockroachdb/cockroach/util/net.go:52 +0x3f
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8200ce800)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 383 [select, 9 minutes]:
google.golang.org/grpc.NewClientStream.func1(0x7f4528966288, 0xc82012e000, 0xc8202a81c0, 0xc820098f00)
/go/src/google.golang.org/grpc/stream.go:151 +0x258
created by google.golang.org/grpc.NewClientStream
/go/src/google.golang.org/grpc/stream.go:159 +0xab2
goroutine 392 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Server).controller(0xc82048a240)
/go/src/google.golang.org/grpc/transport/http2_server.go:652 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Server
/go/src/google.golang.org/grpc/transport/http2_server.go:134 +0x84f
goroutine 373 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*server).start.func3()
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:298 +0x5c
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc82000a3c0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 394 [select, 9 minutes]:
google.golang.org/grpc/transport.(*recvBufferReader).Read(0xc8201d1cb0, 0xc8200cf1f0, 0x5, 0x5, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:141 +0x7e6
google.golang.org/grpc/transport.(*Stream).Read(0xc82013a380, 0xc8200cf1f0, 0x5, 0x5, 0x1b, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:294 +0x71
io.ReadAtLeast(0x7f4528966420, 0xc82013a380, 0xc8200cf1f0, 0x5, 0x5, 0x5, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966420, 0xc82013a380, 0xc8200cf1f0, 0x5, 0x5, 0xc82029fa18, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
google.golang.org/grpc.(*parser).recvMsg(0xc8200cf1e0, 0x7, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:216 +0xb9
google.golang.org/grpc.recv(0xc8200cf1e0, 0x7f45289b0478, 0x1268c48, 0xc82013a380, 0x0, 0x0, 0xbf67c0, 0xc8202646c0, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:297 +0x45
google.golang.org/grpc.(*serverStream).RecvMsg(0xc82016e180, 0xbf67c0, 0xc8202646c0, 0x0, 0x0)
/go/src/google.golang.org/grpc/stream.go:413 +0xe4
github.com/cockroachdb/cockroach/gossip.(*gossipGossipServer).Recv(0xc82028c410, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip.pb.go:228 +0x7e
github.com/cockroachdb/cockroach/gossip.(Gossip_GossipServer).Recv-fm(0xc8202d23c8, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:101 +0x40
github.com/cockroachdb/cockroach/gossip.(*server).gossipReceiver(0xc8202d23c0, 0xc820034058, 0xc8201d1ef0, 0xc82029ff50, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:227 +0x747
github.com/cockroachdb/cockroach/gossip.(*server).Gossip.func3.1()
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:101 +0x8b
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc820264640)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 309 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/rpc.NewContext.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:104 +0x57
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201721c0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 363 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Client).controller(0xc8201e6000)
/go/src/google.golang.org/grpc/transport/http2_client.go:869 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:201 +0x15c2
goroutine 362 [IO wait]:
net.runtime_pollWait(0x7f45289b12a0, 0x72, 0xc820268000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc82020a7d0, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc82020a7d0, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc82020a770, 0xc820268000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc820496000, 0xc820268000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc8201ba300)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc8201ba300, 0xc8200d60f8, 0x9, 0x9, 0xc81ffe298d, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc8201ba300, 0xc8200d60f8, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc8201ba300, 0xc8200d60f8, 0x9, 0x9, 0xc820396ab8, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc8200d60f8, 0x9, 0x9, 0x7f4528966238, 0xc8201ba300, 0x0, 0xc800000000, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc8200d60c0, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc8200e0090, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Client).reader(0xc8201e6000)
/go/src/google.golang.org/grpc/transport/http2_client.go:791 +0x109
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:174 +0xd21
goroutine 316 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func1()
/go/src/github.com/cockroachdb/cockroach/util/net.go:47 +0x47
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc820172ec0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 418 [IO wait]:
net.runtime_pollWait(0x7f45289b0ca0, 0x72, 0xc82025c000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc8200e34f0, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8200e34f0, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc8200e3490, 0xc82025c000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc8200f0280, 0xc82025c000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc8200e8d80)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc8200e8d80, 0xc820468038, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc8200e8d80, 0xc820468038, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc8200e8d80, 0xc820468038, 0x9, 0x9, 0xc820232900, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc820468038, 0x9, 0x9, 0x7f4528966238, 0xc8200e8d80, 0x20000000, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc820468000, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc8201f0060, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Server).HandleStreams(0xc8202d03f0, 0xc8201f0180)
/go/src/google.golang.org/grpc/transport/http2_server.go:243 +0x646
google.golang.org/grpc.(*Server).serveStreams(0xc820222000, 0x7f45289b1b38, 0xc8202d03f0)
/go/src/google.golang.org/grpc/server.go:352 +0x159
google.golang.org/grpc.(*Server).serveNewHTTP2Transport(0xc820222000, 0x7f4528928800, 0xc8200f0280, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:339 +0x49d
google.golang.org/grpc.(*Server).handleRawConn(0xc820222000, 0x7f4528928800, 0xc8200f0280)
/go/src/google.golang.org/grpc/server.go:316 +0x4ee
created by google.golang.org/grpc.(*Server).Serve
/go/src/google.golang.org/grpc/server.go:288 +0x38c
goroutine 419 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Server).controller(0xc8202d03f0)
/go/src/google.golang.org/grpc/transport/http2_server.go:652 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Server
/go/src/google.golang.org/grpc/transport/http2_server.go:134 +0x84f
goroutine 320 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*server).start.func3()
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:298 +0x5c
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc820172780)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 364 [select, 9 minutes]:
google.golang.org/grpc.(*Conn).transportMonitor(0xc8201fa000)
/go/src/google.golang.org/grpc/clientconn.go:547 +0x1d3
created by google.golang.org/grpc.NewConn
/go/src/google.golang.org/grpc/clientconn.go:346 +0x49f
goroutine 365 [select]:
github.com/cockroachdb/cockroach/rpc.(*Context).runHeartbeat(0xc8202d2240, 0xc8200d2210, 0xc8201f61b0, 0xf, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:231 +0x649
github.com/cockroachdb/cockroach/rpc.(*Context).GRPCDial.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:171 +0x66
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8200e0750)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 366 [select, 9 minutes]:
google.golang.org/grpc.NewClientStream.func1(0x7f4528966288, 0xc8201e6000, 0xc8201801c0, 0xc8201d2000)
/go/src/google.golang.org/grpc/stream.go:151 +0x258
created by google.golang.org/grpc.NewClientStream
/go/src/google.golang.org/grpc/stream.go:159 +0xab2
goroutine 367 [IO wait]:
net.runtime_pollWait(0x7f45289b1120, 0x72, 0xc820ae4000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc8204244c0, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8204244c0, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc820424460, 0xc820ae4000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc820496018, 0xc820ae4000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc8201ba360)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc8201ba360, 0xc8200d61b8, 0x9, 0x9, 0xc81ffd6f33, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc8201ba360, 0xc8200d61b8, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc8201ba360, 0xc8200d61b8, 0x9, 0x9, 0xc820396f08, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc8200d61b8, 0x9, 0x9, 0x7f4528966238, 0xc8201ba360, 0x0, 0xc800000000, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc8200d6180, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc8201f5650, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Client).reader(0xc8201e61e0)
/go/src/google.golang.org/grpc/transport/http2_client.go:791 +0x109
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:174 +0xd21
goroutine 368 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Client).controller(0xc8201e61e0)
/go/src/google.golang.org/grpc/transport/http2_client.go:869 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:201 +0x15c2
goroutine 369 [select, 9 minutes]:
google.golang.org/grpc.(*Conn).transportMonitor(0xc8201e60f0)
/go/src/google.golang.org/grpc/clientconn.go:547 +0x1d3
created by google.golang.org/grpc.NewConn
/go/src/google.golang.org/grpc/clientconn.go:346 +0x49f
goroutine 402 [select]:
github.com/cockroachdb/cockroach/rpc.(*Context).runHeartbeat(0xc8200e8180, 0xc82012a0b0, 0xc8201f61b0, 0xf, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:231 +0x649
github.com/cockroachdb/cockroach/rpc.(*Context).GRPCDial.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:171 +0x66
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201f5710)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 403 [select, 9 minutes]:
google.golang.org/grpc.NewClientStream.func1(0x7f4528966288, 0xc8201e61e0, 0xc820180380, 0xc8201d2280)
/go/src/google.golang.org/grpc/stream.go:151 +0x258
created by google.golang.org/grpc.NewClientStream
/go/src/google.golang.org/grpc/stream.go:159 +0xab2
goroutine 404 [IO wait]:
net.runtime_pollWait(0x7f45289b0e20, 0x72, 0xc820234000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc820424840, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc820424840, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc8204247e0, 0xc820234000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc820496030, 0xc820234000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc8201baa80)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc8201baa80, 0xc8200d6278, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc8201baa80, 0xc8200d6278, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc8201baa80, 0xc8200d6278, 0x9, 0x9, 0xc8202d4d80, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc8200d6278, 0x9, 0x9, 0x7f4528966238, 0xc8201baa80, 0x20000000, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc8200d6240, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc8201f5b00, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Server).HandleStreams(0xc8201341b0, 0xc8201f5b60)
/go/src/google.golang.org/grpc/transport/http2_server.go:243 +0x646
google.golang.org/grpc.(*Server).serveStreams(0xc820222000, 0x7f45289b1b38, 0xc8201341b0)
/go/src/google.golang.org/grpc/server.go:352 +0x159
google.golang.org/grpc.(*Server).serveNewHTTP2Transport(0xc820222000, 0x7f4528928800, 0xc820496030, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:339 +0x49d
google.golang.org/grpc.(*Server).handleRawConn(0xc820222000, 0x7f4528928800, 0xc820496030)
/go/src/google.golang.org/grpc/server.go:316 +0x4ee
created by google.golang.org/grpc.(*Server).Serve
/go/src/google.golang.org/grpc/server.go:288 +0x38c
goroutine 405 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Server).controller(0xc8201341b0)
/go/src/google.golang.org/grpc/transport/http2_server.go:652 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Server
/go/src/google.golang.org/grpc/transport/http2_server.go:134 +0x84f
goroutine 406 [select, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*server).Gossip(0xc8202d23c0, 0x7f45289b1ce0, 0xc820110380, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:134 +0xa0f
github.com/cockroachdb/cockroach/gossip._Gossip_Gossip_Handler(0xbc9f60, 0xc8202d23c0, 0x7f45289b1c98, 0xc820466000, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip.pb.go:209 +0xd8
google.golang.org/grpc.(*Server).processStreamingRPC(0xc820222000, 0x7f45289b1b38, 0xc8201341b0, 0xc820180540, 0xc82000a220, 0x1221740, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:604 +0x47a
google.golang.org/grpc.(*Server).handleStream(0xc820222000, 0x7f45289b1b38, 0xc8201341b0, 0xc820180540, 0x0)
/go/src/google.golang.org/grpc/server.go:688 +0x114e
google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc8201102f0, 0xc820222000, 0x7f45289b1b38, 0xc8201341b0, 0xc820180540)
/go/src/google.golang.org/grpc/server.go:350 +0xa0
created by google.golang.org/grpc.(*Server).serveStreams.func1
/go/src/google.golang.org/grpc/server.go:351 +0x9a
goroutine 407 [select, 9 minutes]:
google.golang.org/grpc/transport.(*recvBufferReader).Read(0xc8201f5dd0, 0xc8200ed030, 0x5, 0x5, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:141 +0x7e6
google.golang.org/grpc/transport.(*Stream).Read(0xc820180540, 0xc8200ed030, 0x5, 0x5, 0x1b, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:294 +0x71
io.ReadAtLeast(0x7f4528966420, 0xc820180540, 0xc8200ed030, 0x5, 0x5, 0x5, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966420, 0xc820180540, 0xc8200ed030, 0x5, 0x5, 0xc82024ba18, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
google.golang.org/grpc.(*parser).recvMsg(0xc8200ed020, 0x7, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:216 +0xb9
google.golang.org/grpc.recv(0xc8200ed020, 0x7f45289b0478, 0x1268c48, 0xc820180540, 0x0, 0x0, 0xbf67c0, 0xc8202ab0c0, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:297 +0x45
google.golang.org/grpc.(*serverStream).RecvMsg(0xc820466000, 0xbf67c0, 0xc8202ab0c0, 0x0, 0x0)
/go/src/google.golang.org/grpc/stream.go:413 +0xe4
github.com/cockroachdb/cockroach/gossip.(*gossipGossipServer).Recv(0xc820110380, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip.pb.go:228 +0x7e
github.com/cockroachdb/cockroach/gossip.(Gossip_GossipServer).Recv-fm(0xc8202d23c8, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:101 +0x40
github.com/cockroachdb/cockroach/gossip.(*server).gossipReceiver(0xc8202d23c0, 0xc820496038, 0xc8201f43c0, 0xc82024bf50, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:227 +0x747
github.com/cockroachdb/cockroach/gossip.(*server).Gossip.func3.1()
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:101 +0x8b
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc820153ec0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 408 [IO wait]:
net.runtime_pollWait(0x7f45289b0d60, 0x72, 0xc82024c000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc820424f40, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc820424f40, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc820424ee0, 0xc82024c000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc820496040, 0xc82024c000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc8201bb140)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc8201bb140, 0xc8200d6338, 0x9, 0x9, 0xc81ffe2b5f, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc8201bb140, 0xc8200d6338, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc8201bb140, 0xc8200d6338, 0x9, 0x9, 0xc820393598, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc8200d6338, 0x9, 0x9, 0x7f4528966238, 0xc8201bb140, 0x0, 0xc800000000, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc8200d6300, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc8201f4960, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Client).reader(0xc8201e63c0)
/go/src/google.golang.org/grpc/transport/http2_client.go:791 +0x109
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:174 +0xd21
goroutine 409 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Client).controller(0xc8201e63c0)
/go/src/google.golang.org/grpc/transport/http2_client.go:869 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:201 +0x15c2
goroutine 410 [select, 9 minutes]:
google.golang.org/grpc.(*Conn).transportMonitor(0xc8201e62d0)
/go/src/google.golang.org/grpc/clientconn.go:547 +0x1d3
created by google.golang.org/grpc.NewConn
/go/src/google.golang.org/grpc/clientconn.go:346 +0x49f
goroutine 411 [select]:
github.com/cockroachdb/cockroach/rpc.(*Context).runHeartbeat(0xc8200e8300, 0xc82012a160, 0xc8201f61b0, 0xf, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:231 +0x649
github.com/cockroachdb/cockroach/rpc.(*Context).GRPCDial.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:171 +0x66
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201f5020)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 412 [select, 9 minutes]:
google.golang.org/grpc.NewClientStream.func1(0x7f4528966288, 0xc8201e63c0, 0xc820180620, 0xc8201d2640)
/go/src/google.golang.org/grpc/stream.go:151 +0x258
created by google.golang.org/grpc.NewClientStream
/go/src/google.golang.org/grpc/stream.go:159 +0xab2
goroutine 435 [IO wait, 9 minutes]:
net.runtime_pollWait(0x7f45289b0be0, 0x72, 0x0)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc82006b870, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc82006b870, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).accept(0xc82006b810, 0x0, 0x7f45289b18b8, 0xc8201af1e0)
/usr/local/go/src/net/fd_unix.go:426 +0x27c
net.(*TCPListener).AcceptTCP(0xc820158118, 0x454730, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:254 +0x4d
net.(*TCPListener).Accept(0xc820158118, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/net/tcpsock_posix.go:264 +0x3d
google.golang.org/grpc.(*Server).Serve(0xc820222240, 0x7f45289b0538, 0xc820158118, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:279 +0x1cf
github.com/cockroachdb/cockroach/util.ListenAndServeGRPC.func2()
/go/src/github.com/cockroachdb/cockroach/util/net.go:52 +0x3f
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201af040)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 414 [IO wait]:
net.runtime_pollWait(0x7f45289b0b20, 0x72, 0xc82030c000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc8200e3a30, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc8200e3a30, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc8200e39d0, 0xc82030c000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc820496050, 0xc82030c000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc8201bb500)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc8201bb500, 0xc8200d63f8, 0x9, 0x9, 0xc81ffe2b4a, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc8201bb500, 0xc8200d63f8, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc8201bb500, 0xc8200d63f8, 0x9, 0x9, 0xc8203acf68, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc8200d63f8, 0x9, 0x9, 0x7f4528966238, 0xc8201bb500, 0x0, 0xc800000000, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc8200d63c0, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc820177380, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Client).reader(0xc8201e64b0)
/go/src/google.golang.org/grpc/transport/http2_client.go:791 +0x109
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:174 +0xd21
goroutine 438 [chan receive, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*server).start.func3()
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:298 +0x5c
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201af1c0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 422 [select, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*client).gossip(0xc8201f08d0, 0xc8201b2000, 0x7f45289b19c8, 0xc8200f0298, 0xc82006a0e0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/client.go:239 +0x619
github.com/cockroachdb/cockroach/gossip.(*client).start.func1()
/go/src/github.com/cockroachdb/cockroach/gossip/client.go:80 +0x2c7
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201f09c0)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 423 [select, 9 minutes]:
google.golang.org/grpc.(*Conn).transportMonitor(0xc820afe0f0)
/go/src/google.golang.org/grpc/clientconn.go:547 +0x1d3
google.golang.org/grpc.NewConn.func1(0xc820afe0f0)
/go/src/google.golang.org/grpc/clientconn.go:355 +0x1b5
created by google.golang.org/grpc.NewConn
/go/src/google.golang.org/grpc/clientconn.go:356 +0x4e3
goroutine 424 [select]:
github.com/cockroachdb/cockroach/rpc.(*Context).runHeartbeat(0xc8202d3080, 0xc8201b1130, 0xc8201f61b0, 0xf, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:231 +0x649
github.com/cockroachdb/cockroach/rpc.(*Context).GRPCDial.func1()
/go/src/github.com/cockroachdb/cockroach/rpc/context.go:171 +0x66
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8201f0a50)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 415 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Client).controller(0xc8201e64b0)
/go/src/google.golang.org/grpc/transport/http2_client.go:869 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Client
/go/src/google.golang.org/grpc/transport/http2_client.go:201 +0x15c2
goroutine 416 [select, 9 minutes]:
google.golang.org/grpc.NewClientStream.func1(0x7f4528966288, 0xc8201e64b0, 0xc8201808c0, 0xc8201d2a00)
/go/src/google.golang.org/grpc/stream.go:151 +0x258
created by google.golang.org/grpc.NewClientStream
/go/src/google.golang.org/grpc/stream.go:159 +0xab2
goroutine 451 [IO wait]:
net.runtime_pollWait(0x7f45289b0a60, 0x72, 0xc820324000)
/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc820425640, 0x72, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc820425640, 0x0, 0x0)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc8204255e0, 0xc820324000, 0x8000, 0x8000, 0x0, 0x7f45289e6050, 0xc82000e068)
/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc820496068, 0xc820324000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:172 +0xe4
bufio.(*Reader).fill(0xc8201bb9e0)
/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Read(0xc8201bb9e0, 0xc8200d6578, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:207 +0x260
io.ReadAtLeast(0x7f4528966238, 0xc8201bb9e0, 0xc8200d6578, 0x9, 0x9, 0x9, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966238, 0xc8201bb9e0, 0xc8200d6578, 0x9, 0x9, 0xc8202ae000, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
golang.org/x/net/http2.readFrameHeader(0xc8200d6578, 0x9, 0x9, 0x7f4528966238, 0xc8201bb9e0, 0x20000000, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:237 +0xa5
golang.org/x/net/http2.(*Framer).ReadFrame(0xc8200d6540, 0x0, 0x0, 0x0, 0x0)
/go/src/golang.org/x/net/http2/frame.go:464 +0x106
google.golang.org/grpc/transport.(*framer).readFrame(0xc8201777d0, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/http_util.go:406 +0x3d
google.golang.org/grpc/transport.(*http2Server).HandleStreams(0xc820134480, 0xc820177860)
/go/src/google.golang.org/grpc/transport/http2_server.go:243 +0x646
google.golang.org/grpc.(*Server).serveStreams(0xc820222000, 0x7f45289b1b38, 0xc820134480)
/go/src/google.golang.org/grpc/server.go:352 +0x159
google.golang.org/grpc.(*Server).serveNewHTTP2Transport(0xc820222000, 0x7f4528928800, 0xc820496068, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:339 +0x49d
google.golang.org/grpc.(*Server).handleRawConn(0xc820222000, 0x7f4528928800, 0xc820496068)
/go/src/google.golang.org/grpc/server.go:316 +0x4ee
created by google.golang.org/grpc.(*Server).Serve
/go/src/google.golang.org/grpc/server.go:288 +0x38c
goroutine 450 [select, 9 minutes]:
google.golang.org/grpc/transport.(*recvBufferReader).Read(0xc8201775c0, 0xc8200ed670, 0x5, 0x5, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:141 +0x7e6
google.golang.org/grpc/transport.(*Stream).Read(0xc8201808c0, 0xc8200ed670, 0x5, 0x5, 0x49d424, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:294 +0x71
io.ReadAtLeast(0x7f4528966420, 0xc8201808c0, 0xc8200ed670, 0x5, 0x5, 0x5, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966420, 0xc8201808c0, 0xc8200ed670, 0x5, 0x5, 0xc8200eed80, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
google.golang.org/grpc.(*parser).recvMsg(0xc8200ed660, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:216 +0xb9
google.golang.org/grpc.recv(0xc8200ed660, 0x7f45289b0478, 0x1268c48, 0xc8201808c0, 0x0, 0x0, 0xbedd80, 0xc8201f9180, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:297 +0x45
google.golang.org/grpc.(*clientStream).RecvMsg(0xc8201d2a00, 0xbedd80, 0xc8201f9180, 0x0, 0x0)
/go/src/google.golang.org/grpc/stream.go:234 +0xac
github.com/cockroachdb/cockroach/gossip.(*gossipGossipClient).Recv(0xc820110690, 0xc8201b2000, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip.pb.go:192 +0x7e
github.com/cockroachdb/cockroach/gossip.(*client).gossip.func2.1(0x7f4528966498, 0xc820110690, 0xc8201f08d0, 0xc8201b2000, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/client.go:227 +0x37
github.com/cockroachdb/cockroach/gossip.(*client).gossip.func2()
/go/src/github.com/cockroachdb/cockroach/gossip/client.go:235 +0x51
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc820177680)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
goroutine 452 [select, 9 minutes]:
google.golang.org/grpc/transport.(*http2Server).controller(0xc820134480)
/go/src/google.golang.org/grpc/transport/http2_server.go:652 +0x5da
created by google.golang.org/grpc/transport.newHTTP2Server
/go/src/google.golang.org/grpc/transport/http2_server.go:134 +0x84f
goroutine 454 [select, 9 minutes]:
github.com/cockroachdb/cockroach/gossip.(*server).Gossip(0xc8202d23c0, 0x7f45289b1ce0, 0xc820110860, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:134 +0xa0f
github.com/cockroachdb/cockroach/gossip._Gossip_Gossip_Handler(0xbc9f60, 0xc8202d23c0, 0x7f45289b1c98, 0xc820466100, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip.pb.go:209 +0xd8
google.golang.org/grpc.(*Server).processStreamingRPC(0xc820222000, 0x7f45289b1b38, 0xc820134480, 0xc820180a80, 0xc82000a220, 0x1221740, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/server.go:604 +0x47a
google.golang.org/grpc.(*Server).handleStream(0xc820222000, 0x7f45289b1b38, 0xc820134480, 0xc820180a80, 0x0)
/go/src/google.golang.org/grpc/server.go:688 +0x114e
google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc820110750, 0xc820222000, 0x7f45289b1b38, 0xc820134480, 0xc820180a80)
/go/src/google.golang.org/grpc/server.go:350 +0xa0
created by google.golang.org/grpc.(*Server).serveStreams.func1
/go/src/google.golang.org/grpc/server.go:351 +0x9a
goroutine 455 [select, 9 minutes]:
google.golang.org/grpc/transport.(*recvBufferReader).Read(0xc820177c20, 0xc8200ed9f0, 0x5, 0x5, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:141 +0x7e6
google.golang.org/grpc/transport.(*Stream).Read(0xc820180a80, 0xc8200ed9f0, 0x5, 0x5, 0x29, 0x0, 0x0)
/go/src/google.golang.org/grpc/transport/transport.go:294 +0x71
io.ReadAtLeast(0x7f4528966420, 0xc820180a80, 0xc8200ed9f0, 0x5, 0x5, 0x5, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:297 +0xe6
io.ReadFull(0x7f4528966420, 0xc820180a80, 0xc8200ed9f0, 0x5, 0x5, 0xc82031da18, 0x0, 0x0)
/usr/local/go/src/io/io.go:315 +0x62
google.golang.org/grpc.(*parser).recvMsg(0xc8200ed9e0, 0x7, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:216 +0xb9
google.golang.org/grpc.recv(0xc8200ed9e0, 0x7f45289b0478, 0x1268c48, 0xc820180a80, 0x0, 0x0, 0xbf67c0, 0xc8200ef940, 0x0, 0x0)
/go/src/google.golang.org/grpc/rpc_util.go:297 +0x45
google.golang.org/grpc.(*serverStream).RecvMsg(0xc820466100, 0xbf67c0, 0xc8200ef940, 0x0, 0x0)
/go/src/google.golang.org/grpc/stream.go:413 +0xe4
github.com/cockroachdb/cockroach/gossip.(*gossipGossipServer).Recv(0xc820110860, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/gossip.pb.go:228 +0x7e
github.com/cockroachdb/cockroach/gossip.(Gossip_GossipServer).Recv-fm(0xc8202d23c8, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:101 +0x40
github.com/cockroachdb/cockroach/gossip.(*server).gossipReceiver(0xc8202d23c0, 0xc820496070, 0xc820177e30, 0xc82031df50, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:227 +0x747
github.com/cockroachdb/cockroach/gossip.(*server).Gossip.func3.1()
/go/src/github.com/cockroachdb/cockroach/gossip/server.go:101 +0x8b
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82006a0e0, 0xc8200ef700)
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:139 +0x52
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:140 +0x62
rax 0xca
rbx 0x0
rcx 0x4612a3
rdx 0x0
rdi 0x1248c08
rsi 0x0
rbp 0x1
rsp 0x7ffecfcf17a0
r8 0x0
r9 0x0
r10 0x0
r11 0x286
r12 0xc8201f83c0
r13 0xc
r14 0xc69cd0
r15 0x8
rip 0x4612a1
rflags 0x286
cs 0x33
fs 0x0
gs 0x0
ERROR: exit status 2
```
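Nearly every parked goroutine in the dump above bottoms out in `github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1`: a worker registered with the stopper sits in a `select`, waiting either for work or for a shutdown signal. A minimal sketch of that pattern follows, with simplified types invented here purely for illustration (the real `util/stop.Stopper` API is richer than this):

```
// Minimal illustrative sketch (not the real cockroach code): a simplified
// Stopper with the RunWorker/ShouldStop shape seen throughout the stacks.
package main

import "sync"

type Stopper struct {
	wg   sync.WaitGroup
	stop chan struct{}
}

func NewStopper() *Stopper { return &Stopper{stop: make(chan struct{})} }

// RunWorker tracks f so Stop can wait for it, then runs it on a goroutine.
func (s *Stopper) RunWorker(f func()) {
	s.wg.Add(1)
	go func() {
		defer s.wg.Done()
		f()
	}()
}

// ShouldStop returns a channel that is closed once Stop is called.
func (s *Stopper) ShouldStop() <-chan struct{} { return s.stop }

// Stop signals every worker and blocks until all of them return.
func (s *Stopper) Stop() {
	close(s.stop)
	s.wg.Wait()
}

func main() {
	st := NewStopper()
	work := make(chan int)
	st.RunWorker(func() {
		for {
			select {
			case <-st.ShouldStop():
				return // the frame a healthy worker parks in at shutdown
			case msg := <-work:
				_ = msg // e.g. forward a gossip message or heartbeat
			}
		}
	})
	work <- 1
	st.Stop()
}
```

The "9 minutes" annotations on the `select` frames are consistent with workers blocked in exactly this position because neither a message nor a stop signal ever arrived.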
Run Details:
```
81 runs so far, 0 failures, over 5s
176 runs so far, 0 failures, over 10s
268 runs so far, 0 failures, over 15s
360 runs so far, 0 failures, over 20s
449 runs so far, 0 failures, over 25s
544 runs so far, 0 failures, over 30s
636 runs so far, 0 failures, over 35s
730 runs so far, 0 failures, over 40s
828 runs so far, 0 failures, over 45s
922 runs so far, 0 failures, over 50s
1019 runs so far, 0 failures, over 55s
1109 runs so far, 0 failures, over 1m0s
1206 runs so far, 0 failures, over 1m5s
1296 runs so far, 0 failures, over 1m10s
1389 runs so far, 0 failures, over 1m15s
1483 runs so far, 0 failures, over 1m20s
1576 runs so far, 0 failures, over 1m25s
1673 runs so far, 0 failures, over 1m30s
1761 runs so far, 0 failures, over 1m35s
1853 runs so far, 0 failures, over 1m40s
1951 runs so far, 0 failures, over 1m45s
2040 runs so far, 0 failures, over 1m50s
2127 runs so far, 0 failures, over 1m55s
2221 runs so far, 0 failures, over 2m0s
2318 runs so far, 0 failures, over 2m5s
2410 runs so far, 0 failures, over 2m10s
2501 runs so far, 0 failures, over 2m15s
2594 runs so far, 0 failures, over 2m20s
2688 runs so far, 0 failures, over 2m25s
2787 runs so far, 0 failures, over 2m30s
2880 runs so far, 0 failures, over 2m35s
2973 runs so far, 0 failures, over 2m40s
3068 runs so far, 0 failures, over 2m45s
3159 runs so far, 0 failures, over 2m50s
3253 runs so far, 0 failures, over 2m55s
3345 runs so far, 0 failures, over 3m0s
3437 runs so far, 0 failures, over 3m5s
3532 runs so far, 0 failures, over 3m10s
3626 runs so far, 0 failures, over 3m15s
3719 runs so far, 0 failures, over 3m20s
3813 runs so far, 0 failures, over 3m25s
3904 runs so far, 0 failures, over 3m30s
3994 runs so far, 0 failures, over 3m35s
4086 runs so far, 0 failures, over 3m40s
4178 runs so far, 0 failures, over 3m45s
4270 runs so far, 0 failures, over 3m50s
4361 runs so far, 0 failures, over 3m55s
4451 runs so far, 0 failures, over 4m0s
4538 runs so far, 0 failures, over 4m5s
4630 runs so far, 0 failures, over 4m10s
4726 runs so far, 0 failures, over 4m15s
4815 runs so far, 0 failures, over 4m20s
4906 runs so far, 0 failures, over 4m25s
4999 runs so far, 0 failures, over 4m30s
5091 runs so far, 0 failures, over 4m35s
5183 runs so far, 0 failures, over 4m40s
5273 runs so far, 0 failures, over 4m45s
5364 runs so far, 0 failures, over 4m50s
5455 runs so far, 0 failures, over 4m55s
5547 runs so far, 0 failures, over 5m0s
5638 runs so far, 0 failures, over 5m5s
5730 runs so far, 0 failures, over 5m10s
5818 runs so far, 0 failures, over 5m15s
5912 runs so far, 0 failures, over 5m20s
6004 runs so far, 0 failures, over 5m25s
6095 runs so far, 0 failures, over 5m30s
6185 runs so far, 0 failures, over 5m35s
6280 runs so far, 0 failures, over 5m40s
6369 runs so far, 0 failures, over 5m45s
6461 runs so far, 0 failures, over 5m50s
6548 runs so far, 0 failures, over 5m55s
6642 runs so far, 0 failures, over 6m0s
6735 runs so far, 0 failures, over 6m5s
6821 runs so far, 0 failures, over 6m10s
6913 runs so far, 0 failures, over 6m15s
7005 runs so far, 0 failures, over 6m20s
7098 runs so far, 0 failures, over 6m25s
7192 runs so far, 0 failures, over 6m30s
7281 runs so far, 0 failures, over 6m35s
7373 runs so far, 0 failures, over 6m40s
7465 runs so far, 0 failures, over 6m45s
7557 runs so far, 0 failures, over 6m50s
7652 runs so far, 0 failures, over 6m55s
7743 runs so far, 0 failures, over 7m0s
7835 runs so far, 0 failures, over 7m5s
7930 runs so far, 0 failures, over 7m10s
8019 runs so far, 0 failures, over 7m15s
8111 runs so far, 0 failures, over 7m20s
8200 runs so far, 0 failures, over 7m25s
8293 runs so far, 0 failures, over 7m30s
8384 runs so far, 0 failures, over 7m35s
8474 runs so far, 0 failures, over 7m40s
8565 runs so far, 0 failures, over 7m45s
8654 runs so far, 0 failures, over 7m50s
8747 runs so far, 0 failures, over 7m55s
8837 runs so far, 0 failures, over 8m0s
8928 runs so far, 0 failures, over 8m5s
9022 runs so far, 0 failures, over 8m10s
9110 runs so far, 0 failures, over 8m15s
9199 runs so far, 0 failures, over 8m20s
9295 runs so far, 0 failures, over 8m25s
9387 runs so far, 0 failures, over 8m30s
9476 runs so far, 0 failures, over 8m35s
9565 runs so far, 0 failures, over 8m40s
9655 runs so far, 0 failures, over 8m45s
9750 runs so far, 0 failures, over 8m50s
9843 runs so far, 0 failures, over 8m55s
9939 runs so far, 0 failures, over 9m0s
10028 runs so far, 0 failures, over 9m5s
10118 runs so far, 0 failures, over 9m10s
10210 runs so far, 0 failures, over 9m15s
10304 runs so far, 0 failures, over 9m20s
10394 runs so far, 0 failures, over 9m25s
10484 runs so far, 0 failures, over 9m30s
10574 runs so far, 0 failures, over 9m35s
10667 runs so far, 0 failures, over 9m40s
10758 runs so far, 0 failures, over 9m45s
10852 runs so far, 0 failures, over 9m50s
10947 runs so far, 0 failures, over 9m55s
11037 runs so far, 0 failures, over 10m0s
11128 runs so far, 0 failures, over 10m5s
11221 runs so far, 0 failures, over 10m10s
11316 runs so far, 0 failures, over 10m15s
11405 runs so far, 0 failures, over 10m20s
11498 runs so far, 0 failures, over 10m25s
11589 runs so far, 0 failures, over 10m30s
11685 runs so far, 0 failures, over 10m35s
11781 runs so far, 0 failures, over 10m40s
11870 runs so far, 0 failures, over 10m45s
11965 runs so far, 0 failures, over 10m50s
12060 runs so far, 0 failures, over 10m55s
12150 runs so far, 0 failures, over 11m0s
12240 runs so far, 0 failures, over 11m5s
12333 runs so far, 0 failures, over 11m10s
12422 runs so far, 0 failures, over 11m15s
12514 runs so far, 0 failures, over 11m20s
12606 runs so far, 0 failures, over 11m25s
12697 runs so far, 0 failures, over 11m30s
12792 runs so far, 0 failures, over 11m35s
12883 runs so far, 0 failures, over 11m40s
12975 runs so far, 0 failures, over 11m45s
13071 runs so far, 0 failures, over 11m50s
13165 runs so far, 0 failures, over 11m55s
13256 runs so far, 0 failures, over 12m0s
13345 runs so far, 0 failures, over 12m5s
13437 runs so far, 0 failures, over 12m10s
13528 runs so far, 0 failures, over 12m15s
13615 runs so far, 0 failures, over 12m20s
13704 runs so far, 0 failures, over 12m25s
13797 runs so far, 0 failures, over 12m30s
13891 runs so far, 0 failures, over 12m35s
13983 runs so far, 0 failures, over 12m40s
14074 runs so far, 0 failures, over 12m45s
14167 runs so far, 0 failures, over 12m50s
14254 runs so far, 0 failures, over 12m55s
14348 runs so far, 0 failures, over 13m0s
14444 runs so far, 0 failures, over 13m5s
14533 runs so far, 0 failures, over 13m10s
14626 runs completed, 1 failures, over 13m15s
FAIL
```
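The cadence above ("N runs so far, 0 failures, over Xs") is the periodic progress line of a stress driver that reruns the compiled test binary until one run fails. A rough Go sketch of such a loop, assuming a prebuilt `gossip.test` binary (the default output name of `go test -c` for this package); the actual harness used for this run and its flags are not shown in this report:

```
// Hypothetical repro loop (the real stress harness/flags are not given
// here): rerun a prebuilt test binary until a run fails, echoing the
// "runs so far, failures" cadence from the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	for runs := 1; ; runs++ {
		out, err := exec.Command("./gossip.test",
			"-test.run", "TestGossipNoForwardSelf").CombinedOutput()
		if err != nil {
			// First failing run: dump its output, as in the report above.
			fmt.Printf("%d runs completed, 1 failures, over %s\n%s",
				runs, time.Since(start).Round(time.Second), out)
			return
		}
		if runs%100 == 0 {
			fmt.Printf("%d runs so far, 0 failures, over %s\n",
				runs, time.Since(start).Round(time.Second))
		}
	}
}
```

Capturing `CombinedOutput` per run is what makes the single failing run's panic text, like the dump above, recoverable after more than 14,000 passing runs.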
Please assign, take a look and update the issue accordingly.
|
non_process
|
runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs completed failures over fail please assign take a look and update the issue accordingly
| 0
|
80,911
| 10,217,404,398
|
IssuesEvent
|
2019-08-15 13:35:21
|
librosa/librosa
|
https://api.github.com/repos/librosa/librosa
|
opened
|
RFC: math formulae in docstrings?
|
discussion documentation management
|
#### Description
In preparing #954, I've been fixing up some other docstrings, and noticed that #926 introduced some latex notation to the docstring for defining reassigned spectra.
This looks great when rendered in the compiled documentation, but is pretty illegible if you're reading docs in a repl like ipython or jupyter notebook. This is why I've avoided using formal math notation in the docstrings so far, even though they do look nice, and there's precedent for it from the scipy package. However, we never really made a formal decision about how docstrings should be styled for this sort of thing, so I'm raising the issue now.
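To make the repl-vs-website trade-off concrete, here is a minimal, hypothetical sketch (not an actual librosa docstring) contrasting `:math:`/ReST notation with a plain-text fallback:
```python
def reassigned_times(t, phase):
    r"""Toy example of a docstring mixing LaTeX and plain text.

    Rendered by sphinx, this displays as a proper formula:

    .. math:: \hat{t}(t, \omega) = t + \frac{\partial \phi}{\partial \omega}

    Plain-text fallback for repl readers:
    t_hat(t, omega) = t + d(phase)/d(omega)
    """
    return t

# In ipython/jupyter, `reassigned_times?` shows the raw ReST markup:
print(reassigned_times.__doc__)
```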
I think there are good arguments in both directions. Here's a quick summary of pros and cons from my perspective:
**Pro**
- Looks nice on the website
- Most people will read docs through the website anyway
- Easier to relate to literature
- Docstrings will have sphinx restructured text anyway, so we may as well lean into it
**Con**
- Looks bad in the repl
- Could wreak havoc on screen-readers
- Harder to relate to code, especially when subscripts and non-standard operators are involved
- ReST in docstrings is usually not too bad, but we should keep it minimal
|
1.0
|
RFC: math formulae in docstrings? - #### Description
In preparing #954, I've been fixing up some other docstrings, and noticed that #926 introduced some latex notation to the docstring for defining reassigned spectra.
This looks great when rendered in the compiled documentation, but is pretty illegible if you're reading docs in a repl like ipython or jupyter notebook. This is why I've avoided using formal math notation in the docstrings so far, even though they do look nice, and there's precedent for it from the scipy package. However, we never really made a formal decision about how docstrings should be styled for this sort of thing, so I'm raising the issue now.
I think there are good arguments in both directions. Here's a quick summary of pros and cons from my perspective:
**Pro**
- Looks nice on the website
- Most people will read docs through the website anyway
- Easier to relate to literature
- Docstrings will have sphinx restructured text anyway, so we may as well lean into it
**Con**
- Looks bad in the repl
- Could wreak havoc on screen-readers
- Harder to relate to code, especially when subscripts and non-standard operators are involved
- ReST in docstrings is usually not too bad, but we should keep it minimal
|
non_process
|
rfc math formulae in docstrings description in preparing i ve been fixing up some other docstrings and noticed that introduced some latex notation to the docstring for defining reassigned spectra this looks great when rendered in the compiled documentation but is pretty illegible if you re reading docs in a repl like ipython or jupyter notebook this is why i ve avoided using formal math notation in the docstrings so far even though they do look nice and there s precedent for it from the scipy package however we never really made a formal decision about how docstrings should be styled for this sort of thing so i m raising the issue now i think there are good arguments in both directions here s a quick summary of pros and cons from my perspective pro looks nice on the website most people will read docs through the website anyway easier to relate to literature docstrings will have sphinx restructured text anyway so we may as well lean into it con looks bad in the repl could wreak havoc on screen readers harder to relate to code especially when subscripts and non standard operators are involved rest in docstrings is usually not too bad but we should keep it minimal
| 0
|
9,741
| 12,733,150,059
|
IssuesEvent
|
2020-06-25 11:46:00
|
RIOT-OS/RIOT
|
https://api.github.com/repos/RIOT-OS/RIOT
|
opened
|
[tracking] Transition to `inline`-able IRQ API
|
Area: core Area: cpu Process: API change Type: tracking
|
### Description
In https://github.com/RIOT-OS/RIOT/pull/13999, @fjmolinas added a more efficient `inline`-able IRQ API. The new API requires updating the implementations of every platform. This issue should help track what is left
#### State
- [x] [ARM Cortex M](https://github.com/RIOT-OS/RIOT/pull/13999)
- [x] [ARM v7](https://github.com/RIOT-OS/RIOT/pull/14088)
- [x] [AVR](https://github.com/RIOT-OS/RIOT/pull/14085)
- [ ] ESP32 / ESP8266
- [ ] MSP430
- [ ] RISCV
- [ ] MIPS
|
1.0
|
[tracking] Transition to `inline`-able IRQ API - ### Description
In https://github.com/RIOT-OS/RIOT/pull/13999, @fjmolinas added a more efficient `inline`-able IRQ API. The new API requires updating the implementations of every platform. This issue should help track what is left
#### State
- [x] [ARM Cortex M](https://github.com/RIOT-OS/RIOT/pull/13999)
- [x] [ARM v7](https://github.com/RIOT-OS/RIOT/pull/14088)
- [x] [AVR](https://github.com/RIOT-OS/RIOT/pull/14085)
- [ ] ESP32 / ESP8266
- [ ] MSP430
- [ ] RISCV
- [ ] MIPS
|
process
|
transition to inline able irq api description in fjmolinas added a more efficient inline able irq api the new api requires updating the implementations of every platform this issue should help tracking what is left state riscv mips
| 1
|
313,008
| 26,894,313,406
|
IssuesEvent
|
2023-02-06 11:13:24
|
hazelcast/hazelcast-cpp-client
|
https://api.github.com/repos/hazelcast/hazelcast-cpp-client
|
closed
|
ReliableTopicTest.testAlwaysStartAfterTail crashes occasionally on github actions. [API-1815]
|
Type: Test-Failure to-jira
|
C++ compiler version: vc 14.1
Hazelcast Cpp client version: 5.1.0
Hazelcast server version: 5.1.3
Number of the clients:
Cluster size, i.e. the number of Hazelcast cluster members:
OS version (Windows/Linux/OSX):
Windows
Please attach relevant logs and files for client and server side.
#### Expected behaviour
Pass test
#### Actual behaviour
Crashes on test
#### Steps to reproduce the behaviour
Here is the [link](https://github.com/hazelcast/hazelcast-cpp-client/actions/runs/3814810754/jobs/6489383386)
[ReliableTopicTest_testAlwaysStartAfterTail.txt](https://github.com/hazelcast/hazelcast-cpp-client/files/10330378/ReliableTopicTest_testAlwaysStartAfterTail.txt)
|
1.0
|
ReliableTopicTest.testAlwaysStartAfterTail crashes occasionally on github actions. [API-1815] - C++ compiler version: vc 14.1
Hazelcast Cpp client version: 5.1.0
Hazelcast server version: 5.1.3
Number of the clients:
Cluster size, i.e. the number of Hazelcast cluster members:
OS version (Windows/Linux/OSX):
Windows
Please attach relevant logs and files for client and server side.
#### Expected behaviour
Pass test
#### Actual behaviour
Crashes on test
#### Steps to reproduce the behaviour
Here is the [link](https://github.com/hazelcast/hazelcast-cpp-client/actions/runs/3814810754/jobs/6489383386)
[ReliableTopicTest_testAlwaysStartAfterTail.txt](https://github.com/hazelcast/hazelcast-cpp-client/files/10330378/ReliableTopicTest_testAlwaysStartAfterTail.txt)
|
non_process
|
reliabletopictest testalwaysstartaftertail crashes occassionally on github actions c compiler version vc hazelcast cpp client version hazelcast server version number of the clients cluster size i e the number of hazelcast cluster members os version windows linux osx windows please attach relevant logs and files for client and server side expected behaviour pass test actual behaviour crashes on test steps to reproduce the behaviour here is the
| 0
|
20,904
| 6,115,014,229
|
IssuesEvent
|
2017-06-22 04:02:52
|
WayofTime/BloodMagic
|
https://api.github.com/repos/WayofTime/BloodMagic
|
closed
|
1.11.2 [IDEA] Reap of the Harvest Moon [Red Orchid harvest]
|
1.10 1.11 code complete enhancement
|
#### Issue Description:
Note: If this bug occurs in a modpack, please report this to the modpack author. Otherwise, delete this line and add your description here. If this is a feature request, this template does not apply to you. Just delete everything.
#### What happens:
#### What you expected to happen:
My idea is that the Ritual of the Harvest Moon could harvest the Red Orchid of Extra Utils 2.
A sample picture
http://www.bilder-upload.eu/show.php?file=e88a8e-1489775696.png
#### Steps to reproduce:
1.
2.
3.
...
____
#### Affected Versions (Do *not* use "latest"):
- BloodMagic: BloodMagic-1.11-2.1.7-76
- Minecraft: 1.11.2
- Forge: forge-1.11.2-13.20.0.2230-universal
|
1.0
|
1.11.2 [IDEA] Reap of the Harvest Moon [Red Orchid harvest] - #### Issue Description:
Note: If this bug occurs in a modpack, please report this to the modpack author. Otherwise, delete this line and add your description here. If this is a feature request, this template does not apply to you. Just delete everything.
#### What happens:
#### What you expected to happen:
My idea is that the Ritual of the Harvest Moon could harvest the Red Orchid of Extra Utils 2.
A sample picture
http://www.bilder-upload.eu/show.php?file=e88a8e-1489775696.png
#### Steps to reproduce:
1.
2.
3.
...
____
#### Affected Versions (Do *not* use "latest"):
- BloodMagic: BloodMagic-1.11-2.1.7-76
- Minecraft: 1.11.2
- Forge: forge-1.11.2-13.20.0.2230-universal
|
non_process
|
reap of the harvest moon issue description note if this bug occurs in a modpack please report this to the modpack author otherwise delete this line and add your description here if this is a feature request this template does not apply to you just delete everything what happens what you expected to happen my idea is that the ritual of the harvest moon could harvest the red orchid of extra utils a sample picture steps to reproduce affected versions do not use latest bloodmagic bloodmagic minecraft forge forge universal
| 0
|
11,018
| 13,806,390,400
|
IssuesEvent
|
2020-10-11 17:30:05
|
Mikts/Infobserve
|
https://api.github.com/repos/Mikts/Infobserve
|
opened
|
Use Redis for Queues
|
component/data source component/processing enhancement priority/medium
|
Library to be used: [aioredis](https://github.com/aio-libs/aioredis)
Use Redis for persistence and to allow us to break the processors away from the main loop.
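A minimal sketch of the intended pattern, assuming a local Redis and the aioredis 2.x `from_url` API (the queue name is hypothetical):
```python
import asyncio
import aioredis  # https://github.com/aio-libs/aioredis

async def main():
    # Connect to a local Redis instance (assumption: redis://localhost).
    redis = aioredis.from_url("redis://localhost")
    # Producer side: push raw events onto a persistent, list-backed queue.
    await redis.rpush("infobserve:raw", "event-payload")
    # Consumer side (e.g. a processor decoupled from the main loop):
    # BLPOP blocks until an item is available.
    _queue, payload = await redis.blpop("infobserve:raw")
    print(payload)
    await redis.close()

asyncio.run(main())
```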
|
1.0
|
Use Redis for Queues - Library to be used: [aioredis](https://github.com/aio-libs/aioredis)
Use Redis for persistence and to allow us to break the processors away from the main loop.
|
process
|
use redis for queues library to be used use redis for persistence and to allow us to break the processors away from the main loop
| 1
|
10,382
| 13,194,662,007
|
IssuesEvent
|
2020-08-13 17:13:35
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
$(Date:yyyyMMdd) did not work for me in my yaml based pipeline.
|
Pri2 devops-cicd-process/tech devops/prod support-request
|
[Enter feedback here]
I tried a bunch of the options specified on this page, e.g. $(Year: yyyy) or $(DayOfMonth), to include in the filename I was creating as part of a task in my pipeline, but it keeps failing, saying the following
/home/vsts/work/_temp/815341be-b6f1-44d4-a0d8-563c5c5bef29.sh: line 1: Date:yyyyMMdd: command not found
/home/vsts/work/_temp/99fb7032-7ba5-4ef8-bea6-47be6eebc034.sh: line 1: Year:yyyy: command not found
/home/vsts/work/_temp/99fb7032-7ba5-4ef8-bea6-47be6eebc034.sh: line 1: Month: command not found
/home/vsts/work/_temp/99fb7032-7ba5-4ef8-bea6-47be6eebc034.sh: line 1: DayOfMonth: command not found
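These errors show the shell receiving `$(Date:yyyyMMdd)` literally: such tokens are documented for the run number (`name:`) format, not for expansion inside script steps. As a workaround, here is a minimal Python sketch (the file name is hypothetical) that computes the same stamp inside a script:
```python
# Compute the yyyyMMdd stamp directly instead of relying on
# $(Date:yyyyMMdd), which only expands in the pipeline's run-number format.
from datetime import date

stamp = date.today().strftime("%Y%m%d")   # e.g. "20200813"
filename = f"report-{stamp}.txt"          # hypothetical file name
print(filename)
```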
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a57f8545-bb15-3a71-1876-3a9ec1a59b93
* Version Independent ID: 28c87c8d-c28d-7493-0c7c-8c38b04fbcd7
* Content: [Run (build) number - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/run-number?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/run-number.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/run-number.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
$(Date:yyyyMMdd) did not work for me in my yaml based pipeline. -
[Enter feedback here]
I tried a bunch of the options specified on this page, e.g. $(Year: yyyy) or $(DayOfMonth), to include in the filename I was creating as part of a task in my pipeline, but it keeps failing, saying the following
/home/vsts/work/_temp/815341be-b6f1-44d4-a0d8-563c5c5bef29.sh: line 1: Date:yyyyMMdd: command not found
/home/vsts/work/_temp/99fb7032-7ba5-4ef8-bea6-47be6eebc034.sh: line 1: Year:yyyy: command not found
/home/vsts/work/_temp/99fb7032-7ba5-4ef8-bea6-47be6eebc034.sh: line 1: Month: command not found
/home/vsts/work/_temp/99fb7032-7ba5-4ef8-bea6-47be6eebc034.sh: line 1: DayOfMonth: command not found
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a57f8545-bb15-3a71-1876-3a9ec1a59b93
* Version Independent ID: 28c87c8d-c28d-7493-0c7c-8c38b04fbcd7
* Content: [Run (build) number - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/run-number?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/run-number.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/run-number.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
date yyyymmdd did not work for me in my yaml based pipeline i tried a bunch of options for specified on this page e g year yyyy or dayofmonth to include in the filename i was creating as part of a task in my pipeline but it keeps failing saying following home vsts work temp sh line date yyyymmdd command not found home vsts work temp sh line year yyyy command not found home vsts work temp sh line month command not found home vsts work temp sh line dayofmonth command not found document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
534,565
| 15,625,153,957
|
IssuesEvent
|
2021-03-21 06:38:38
|
NevilleAntony98/Prototype
|
https://api.github.com/repos/NevilleAntony98/Prototype
|
opened
|
CreateRoomSubpage: URL textInputLayout's HelperText TextAppearance is not correctly set the first time
|
Low Priority
|
The `TextAppearance` of the URL `TextInputLayout`'s helper text is not correctly set the first time. The `TextAppearance` style is only correctly set after changing the URL to an invalid one after the very first try. Note the color of the helper text when trying to reproduce this issue.
|
1.0
|
CreateRoomSubpage: URL textInputLayout's HelperText TextAppearance is not correctly set the first time - The `TextAppearance` of the URL `TextInputLayout`'s helper text is not correctly set the first time. The `TextAppearance` style is only correctly set after changing the URL to an invalid one after the very first try. Note the color of the helper text when trying to reproduce this issue.
|
non_process
|
createroomsubpage url textinputlayout s helpertext textappearance is not correctly set the first time the textappearance of the url textinputlayout s helper text is not correctly set the first time the textappearance style is only correctly set after changing the url to an invalid one after the very first try note the color of the helper text when trying to reproduce this issue
| 0
|
6,462
| 9,546,586,633
|
IssuesEvent
|
2019-05-01 20:23:01
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
closed
|
Department of State: Ineligible Citizenship
|
Apply Process Approved Requirements Ready State Dept.
|
Who: Student
What: Message of ineligibility due to Citizenship
Why: U.S. Citizenship is a requirement to apply to DoS Internship Program
A/C
- There will be a header: You are ineligible (Bold)
- There will be a warning message
- This message will be presented to the user when they click "apply" on any opportunity if they have indicated in their USAJOBS profile that they are not a U.S. citizen
-You are not a U.S. citizen (Bold)
- You're not eligible for this internship because your profile says you're not a U.S. citizen. You must
be a U.S. citizen to be eligible for the U.S. Department of State Internship Program (Unpaid). Think
this is wrong? Edit Profile (this is a link to the users USAJOBS profile and will open in a new window)
- There will be content below alert box explaining the program.
- The U.S. Department of State Student Internship Program (Unpaid) link will open in a new window and will take the user to the following website: https://careers.state.gov/intern/student-internships/
- Learn more about the U.S. Department of State exchange programs link will open in a new window and will take the user to the following website: https://exchanges.state.gov/
- If the student is not a U.S. Citizen they will not be allowed to proceed with their application
InVision Mock: https://opm.invisionapp.com/d/main/#/console/15360465/333432320/preview
Public Link: https://opm.invisionapp.com/share/ZEPNZR09Q54
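A minimal sketch (hypothetical model, not the OpenOpps implementation) of the citizenship gate described in the A/C above:
```python
# Gate the "apply" action on the citizenship flag from the USAJOBS profile.
def can_apply(profile: dict) -> tuple[bool, str]:
    if not profile.get("us_citizen", False):
        return (False,
                "You are not a U.S. citizen. You must be a U.S. citizen to be "
                "eligible for the U.S. Department of State Internship Program "
                "(Unpaid).")
    return (True, "")

ok, warning = can_apply({"us_citizen": False})
if not ok:
    print(warning)   # shown when the student clicks "apply"
```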
|
1.0
|
Department of State: Ineligible Citizenship - Who: Student
What: Message of ineligibility due to Citizenship
Why: U.S. Citizenship is a requirement to apply to DoS Internship Program
A/C
- There will be a header: You are ineligible (Bold)
- There will be a warning message
- This message will be presented to the user when they click "apply" on any opportunity if they have indicated in their USAJOBS profile that they are not a U.S. citizen
-You are not a U.S. citizen (Bold)
- You're not eligible for this internship because your profile says you're not a U.S. citizen. You must
be a U.S. citizen to be eligible for the U.S. Department of State Internship Program (Unpaid). Think
this is wrong? Edit Profile (this is a link to the users USAJOBS profile and will open in a new window)
- There will be content below alert box explaining the program.
- The U.S. Department of State Student Internship Program (Unpaid) link will open in a new window and will take the user to the following website: https://careers.state.gov/intern/student-internships/
- Learn more about the U.S. Department of State exchange programs link will open in a new window and will take the user to the following website: https://exchanges.state.gov/
- If the student is not a U.S. Citizen they will not be allowed to proceed with their application
InVision Mock: https://opm.invisionapp.com/d/main/#/console/15360465/333432320/preview
Public Link: https://opm.invisionapp.com/share/ZEPNZR09Q54
|
process
|
department of state ineligible citizenship who student what message of ineligibility due to citizenship why u s citizenship is a requirement to apply to dos internship program a c there will be a header you are ineligible bold there will be a warning message this message will be presented to the user when they click apply on any opportunity if they have indicated in their usajobs profile that they are not a u s citizen you are not a u s citizen bold you re not eligible for this internship because you re profile says you re not a u s citizen you must be a u s citizen to be eligible for the u s department of state internship program unpaid think this is wrong edit profile this is a link to the users usajobs profile and will open in a new window there will be content below alert box explaining the program the u s department of state student internship program unpaid link will open in a new window and will take the user to the following website learn more about the u s department of state exchange programs link will open in a new window and will take the user to the following website if the student is not a u s citizen they will not be allowed to proceed with their application invision mock public link
| 1
|
17,036
| 22,409,526,843
|
IssuesEvent
|
2022-06-18 14:02:23
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
`Process.Modules` does not list all modules/shared objects of a target process when executed on Linux
|
area-System.Diagnostics.Process
|
### Description
Not all modules/shared objects are listed when calling `Process.Modules` on a process on Linux machines. MacOS machines may also be affected.
Of note for this issue is that `/proc/{proc.Id}/maps` (the file which holds all maps loaded by a process) lists most maps more than once, presumably because they take up multiple memory regions. Those maps look similar to the following output:
```
7f673eb42000-7f673eb6a000 r--p 00000000 08:20 9815 /usr/lib/x86_64-linux-gnu/libc.so.6
7f673eb6a000-7f673ecff000 r-xp 00028000 08:20 9815 /usr/lib/x86_64-linux-gnu/libc.so.6
7f673ecff000-7f673ed57000 r--p 001bd000 08:20 9815 /usr/lib/x86_64-linux-gnu/libc.so.6
7f673ed57000-7f673ed5b000 r--p 00214000 08:20 9815 /usr/lib/x86_64-linux-gnu/libc.so.6
7f673ed5b000-7f673ed5d000 rw-p 00218000 08:20 9815 /usr/lib/x86_64-linux-gnu/libc.so.6
```
The flags (`r`, `w`, `x`, `p`, `s`) in the second column are the root of this issue.
The maps that do not get listed only have a single entry which contains the flags `r`, and `s`:
```
7f673a9a3000-7f673a9ab000 r--s 00000000 08:20 2470 /usr/share/dotnet/shared/Microsoft.NETCore.App/6.0.6/System.Runtime.dll
```
### Reproduction Steps
Create a new console application with the following content in `Program.cs`:
```cs
using System.Diagnostics;
// use the executing process as a simple example
var proc = Process.GetCurrentProcess();
// /proc/{pId}/maps holds information about loaded memory maps
// it is also used by Process.Modules as per
// System.Diagnostics.Process/Interop.ProcFsStat.ParseMapModules.cs#L18
var maps = $"/proc/{proc.Id}/maps";
// since the issue is about maps which only have the `r--s` (read and shared),
// we only care about entries with those flags here
var readSharedMaps = File.ReadLines(maps)
.Where(line => line.Contains("r--s"));
// list missing maps
foreach (var map in readSharedMaps)
{
Console.WriteLine(map);
}
Console.WriteLine();
// list and compare to Process.Modules
foreach (ProcessModule module in proc.Modules)
{
Console.WriteLine(module.ModuleName);
}
```
Build using `dotnet build --os linux -c release` and run on a Linux machine or the Windows Subsystem for Linux using `dotnet TestConsole.dll` or `./TestConsole`.
The output will be similar to the following:
```
7fb7a43de000-7fb7a43e6000 r--s 00000000 08:20 2470 /usr/share/dotnet/shared/Microsoft.NETCore.App/6.0.6/System.Runtime.dll
7fb7a43e6000-7fb7a43e8000 r--s 00000000 08:20 2299 /home/just-ero/code/TestConsole.dll
TestConsole
System.Private.CoreLib.dll
System.Diagnostics.Process.dll
System.ComponentModel.Primitives.dll
System.Linq.dll
System.Console.dll
System.Collections.NonGeneric.dll
System.Collections.dll
System.Threading.dll
Microsoft.Win32.Primitives.dll
System.Memory.dll
libicui18n.so.70.1
libicudata.so.70.1
libicuuc.so.70.1
libSystem.Native.so
libclrjit.so
librt.so.1
libcoreclr.so
libhostpolicy.so
libhostfxr.so
libc.so.6
libgcc_s.so.1
libm.so.6
libstdc++.so.6.0.30
libdl.so.2
libpthread.so.0
ld-linux-x86-64.so.2
[vdso]
```
Evidently, `Process.Modules` did not list `System.Runtime.dll`.
### Expected behavior
`Process.Modules` should list all loaded modules.
### Actual behavior
`Process.Modules` does not list all modules.
### Regression?
_No response_
### Known Workarounds
_No response_
### Configuration
Tested on a .NET 6 console application, built using `dotnet build --os linux -c release` (results in a 64-bit build), executed on WSL2, Ubuntu 22.04 trying both `dotnet TestConsole.dll` and `./TestConsole`, both not listing `TestConsole.dll` and `System.Runtime.dll` (the latter will list `TestConsole` because it is the "executable" and therefore the main module, but not the library).
### Other information
Due to [the requirement of *both* the `read` and `exec` flags](https://github.com/dotnet/runtime/blob/a103efd28d46af39fc22a77458a11d204226e8d4/src/libraries/Common/src/Interop/Linux/procfs/Interop.ProcFsStat.ParseMapModules.cs#L88) for a module to be considered valid, modules which only contain the `read` and `shared` flags will get ignored. I'm unsure how or if this can be fixed without breaking things.
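For illustration, a minimal Python sketch (not the runtime's code) that scans `/proc/<pid>/maps` and reports the paths that only ever appear with `r--s`, i.e. exactly the modules `Process.Modules` skips:
```python
# Group /proc/<pid>/maps entries by file path and keep the paths whose
# only permission set is "r--s": read-only, shared, never executable.
import os
from collections import defaultdict

perms_by_path = defaultdict(set)
with open(f"/proc/{os.getpid()}/maps") as maps:
    for line in maps:
        fields = line.split()
        if len(fields) >= 6 and fields[5].startswith("/"):
            perms_by_path[fields[5]].add(fields[1])

for path, perms in perms_by_path.items():
    if perms == {"r--s"}:
        print("skipped by Process.Modules:", path)
```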
|
1.0
|
`Process.Modules` does not list all modules/shared objects of a target process when executed on Linux - ### Description
Not all modules/shared objects are listed when calling `Process.Modules` on a process on Linux machines. MacOS machines may also be affected.
Of note for this issue is that `/proc/{proc.Id}/maps` (the file which holds all maps loaded by a process) lists most maps more than once, presumably because they take up multiple memory regions. Those maps look similar to the following output:
```
7f673eb42000-7f673eb6a000 r--p 00000000 08:20 9815 /usr/lib/x86_64-linux-gnu/libc.so.6
7f673eb6a000-7f673ecff000 r-xp 00028000 08:20 9815 /usr/lib/x86_64-linux-gnu/libc.so.6
7f673ecff000-7f673ed57000 r--p 001bd000 08:20 9815 /usr/lib/x86_64-linux-gnu/libc.so.6
7f673ed57000-7f673ed5b000 r--p 00214000 08:20 9815 /usr/lib/x86_64-linux-gnu/libc.so.6
7f673ed5b000-7f673ed5d000 rw-p 00218000 08:20 9815 /usr/lib/x86_64-linux-gnu/libc.so.6
```
The flags (`r`, `w`, `x`, `p`, `s`) in the second column are the root of this issue.
The maps that do not get listed only have a single entry which contains the flags `r`, and `s`:
```
7f673a9a3000-7f673a9ab000 r--s 00000000 08:20 2470 /usr/share/dotnet/shared/Microsoft.NETCore.App/6.0.6/System.Runtime.dll
```
### Reproduction Steps
Create a new console application with the following content in `Program.cs`:
```cs
using System.Diagnostics;
// use the executing process as a simple example
var proc = Process.GetCurrentProcess();
// /proc/{pId}/maps holds information about loaded memory maps
// it is also used by Process.Modules as per
// System.Diagnostics.Process/Interop.ProcFsStat.ParseMapModules.cs#L18
var maps = $"/proc/{proc.Id}/maps";
// since the issue is about maps which only have the `r--s` (read and shared),
// we only care about entries with those flags here
var readSharedMaps = File.ReadLines(maps)
.Where(line => line.Contains("r--s"));
// list missing maps
foreach (var map in readSharedMaps)
{
Console.WriteLine(map);
}
Console.WriteLine();
// list and compare to Process.Modules
foreach (ProcessModule module in proc.Modules)
{
Console.WriteLine(module.ModuleName);
}
```
Build using `dotnet build --os linux -c release` and run on a Linux machine or the Windows Subsystem for Linux using `dotnet TestConsole.dll` or `./TestConsole`.
The output will be similar to the following:
```
7fb7a43de000-7fb7a43e6000 r--s 00000000 08:20 2470 /usr/share/dotnet/shared/Microsoft.NETCore.App/6.0.6/System.Runtime.dll
7fb7a43e6000-7fb7a43e8000 r--s 00000000 08:20 2299 /home/just-ero/code/TestConsole.dll
TestConsole
System.Private.CoreLib.dll
System.Diagnostics.Process.dll
System.ComponentModel.Primitives.dll
System.Linq.dll
System.Console.dll
System.Collections.NonGeneric.dll
System.Collections.dll
System.Threading.dll
Microsoft.Win32.Primitives.dll
System.Memory.dll
libicui18n.so.70.1
libicudata.so.70.1
libicuuc.so.70.1
libSystem.Native.so
libclrjit.so
librt.so.1
libcoreclr.so
libhostpolicy.so
libhostfxr.so
libc.so.6
libgcc_s.so.1
libm.so.6
libstdc++.so.6.0.30
libdl.so.2
libpthread.so.0
ld-linux-x86-64.so.2
[vdso]
```
Evidently, `Process.Modules` did not list `System.Runtime.dll`.
### Expected behavior
`Process.Modules` should list all loaded modules.
### Actual behavior
`Process.Modules` does not list all modules.
### Regression?
_No response_
### Known Workarounds
_No response_
### Configuration
Tested on a .NET 6 console application, built using `dotnet build --os linux -c release` (results in a 64-bit build), executed on WSL2, Ubuntu 22.04 trying both `dotnet TestConsole.dll` and `./TestConsole`, both not listing `TestConsole.dll` and `System.Runtime.dll` (the latter will list `TestConsole` because it is the "executable" and therefore the main module, but not the library).
### Other information
Due to [the requirement of *both* the `read` and `exec` flags](https://github.com/dotnet/runtime/blob/a103efd28d46af39fc22a77458a11d204226e8d4/src/libraries/Common/src/Interop/Linux/procfs/Interop.ProcFsStat.ParseMapModules.cs#L88) for a module to be considered valid, modules which only contain the `read` and `shared` flags will get ignored. I'm unsure how or if this can be fixed without breaking things.
|
process
|
process modules does not list all modules shared objects of a target process when executed on linux description not all modules shared objects are listed when calling process modules on a process on linux machines macos machines may also be affected of note for this issue is that proc proc id maps the file which holds all maps loaded by a process lists most maps more than once presumably because they take up multiple memory regions those maps look similar to the following output r p usr lib linux gnu libc so r xp usr lib linux gnu libc so r p usr lib linux gnu libc so r p usr lib linux gnu libc so rw p usr lib linux gnu libc so the flags r w x p s in the second column are the root of this issue the maps that do not get listed only have a single entry which contains the flags r and s r s usr share dotnet shared microsoft netcore app system runtime dll reproduction steps create a new console application with the following content in program cs cs using system diagnostics use the executing process as a simple example var proc process getcurrentprocess proc pid maps holds information about loaded memory maps it is also used by process modules as per system diagnostics process interop procfsstat parsemapmodules cs var maps proc proc id maps since the issue is about maps which only have the r s read and shared we only care about entries with those flags here var readsharedmaps file readlines maps where line line contains r s list missing maps foreach var map in readsharedmaps console writeline map console writeline list and compare to process modules foreach processmodule module in proc modules console writeline module modulename build using dotnet build os linux c release and run on a linux machine or the windows subsystem for linux using dotnet testconsole dll or testconsole the output will be similar to the following r s usr share dotnet shared microsoft netcore app system runtime dll r s home just ero code testconsole dll testconsole system private corelib dll system diagnostics process dll system componentmodel primitives dll system linq dll system console dll system collections nongeneric dll system collections dll system threading dll microsoft primitives dll system memory dll so libicudata so libicuuc so libsystem native so libclrjit so librt so libcoreclr so libhostpolicy so libhostfxr so libc so libgcc s so libm so libstdc so libdl so libpthread so ld linux so evidently process modules did not list system runtime dll expected behavior process modules should list all loaded modules actual behavior process modules does not list all modules regression no response known workarounds no response configuration tested on a net console application built using dotnet build os linux c release results in a bit build executed on ubuntu trying both dotnet testconsole dll and testconsole both not listing testconsole dll and system runtime dll the latter will list testconsole because it is the executable and therefore the main module but not the library other information due to for a module to be considered valid modules which only contain the read and shared flags will get ignored i m unsure how or if this can be fixed without breaking things
| 1
|
783,384
| 27,528,199,412
|
IssuesEvent
|
2023-03-06 19:47:07
|
googleapis/nodejs-storage
|
https://api.github.com/repos/googleapis/nodejs-storage
|
closed
|
refactor: replace uses of substr with substring.
|
type: cleanup api: storage priority: p3
|
There are a few places in the code base where `substr` is being utilized ([example](https://github.com/googleapis/nodejs-storage/blob/6851cd2ece430916ad6ff13dc2eb2fe7eeba1dcc/src/file.ts#L1513)). `substr` is not part of the ECMAScript specification and has been [deprecated](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/substr). Instances of `substr` should be replaced with `substring`.
|
1.0
|
refactor: replace uses of substr with substring. - There are a few places in the code base where `substr` is being utilized ([example](https://github.com/googleapis/nodejs-storage/blob/6851cd2ece430916ad6ff13dc2eb2fe7eeba1dcc/src/file.ts#L1513)). `substr` is not part of the ECMAScript specification and has been [deprecated](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/substr). Instances of `substr` should be replaced with `substring`.
|
non_process
|
refactor replace uses of substr with substring there are a few places in the code base where substr is being utilized substr is not part of the ecmascript specification and has been instances of substr should be replaced with substring
| 0
|
94,170
| 10,799,014,189
|
IssuesEvent
|
2019-11-06 11:12:52
|
AY1920S1-CS2103T-F12-3/main
|
https://api.github.com/repos/AY1920S1-CS2103T-F12-3/main
|
closed
|
Note list command issue
|
component.Note type.Documentation
|
Note list "untitled" did not work although there is a default Untitled note in your app.

<hr><sub>[original: shawnlsj97/ped#12]<br/>
</sub>
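The query `untitled` differs from the default note's title `Untitled` only by case, so one plausible fix is case-insensitive matching; a minimal sketch with a hypothetical note store:
```python
# Match note titles case-insensitively so `note list untitled`
# finds the default "Untitled" note.
notes = ["Untitled", "CS2103 Lecture", "Todo"]

def list_notes(query: str):
    return [t for t in notes if query.casefold() in t.casefold()]

print(list_notes("untitled"))   # ['Untitled']
```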
|
1.0
|
Note list command issue - Note list "untitled" did not work although there is a default Untitled note in your app.

<hr><sub>[original: shawnlsj97/ped#12]<br/>
</sub>
|
non_process
|
note list command issue note list untitled did not work although there is a default untitled note in your app
| 0
|
15,214
| 19,061,158,259
|
IssuesEvent
|
2021-11-26 07:58:00
|
trilinos/Trilinos
|
https://api.github.com/repos/trilinos/Trilinos
|
closed
|
Autotester: allow testing of external contributions w/o approval
|
type: enhancement process improvement
|
## Enhancement
@trilinos/framework
I'm putting out this idea for discussion.
### Current state and problem description
Currently, PRs submitted by external developers (i.e. not a member of the GitHub Trilinos organization) are only tested by the autotester, if they have been approved. However, this often leaves them in this weird state of being approved "just for the sake of testing", but without an actual code review. In theory, they could be merged "unreviewed".
Examples:
- https://github.com/trilinos/Trilinos/pull/9187#pullrequestreview-671551779
- https://github.com/trilinos/Trilinos/pull/9153
- https://github.com/trilinos/Trilinos/pull/9107#pullrequestreview-660018354
I can definitely understand that one wants to somehow control which/how many external PRs are tested. However, requiring approval just to trigger the tests seems weird to me as outlined above.
I also see value in starting testing early on, as some problems only become evident during testing and then often require several iterations of code changes and testing.
### Possible solution
As far as I know, labels can only be applied by members of the Trilinos organization. So, instead of requiring approval, one could trigger the autotester for external PRs by setting a label "AT: ready for testing". This would still require some action of a Trilinos member to trigger testing, hence uncontrolled overload of testing machines by external PRs can be avoided as well. However, testing would be possible _without premature PR approvals_.
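A minimal sketch (a hypothetical gate, not the real autotester) of checking for such a label through the GitHub REST API:
```python
# PRs are issues for labeling purposes in the GitHub REST API.
# Authentication headers are omitted for brevity.
import requests

def ready_for_testing(owner: str, repo: str, pr_number: int) -> bool:
    url = f"https://api.github.com/repos/{owner}/{repo}/issues/{pr_number}/labels"
    labels = requests.get(url, timeout=10).json()
    return any(label["name"] == "AT: ready for testing" for label in labels)

if ready_for_testing("trilinos", "Trilinos", 9187):
    print("trigger autotester")
```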
|
1.0
|
Autotester: allow testing of external contributions w/o approval - ## Enhancement
@trilinos/framework
I'm putting out this idea for discussion.
### Current state and problem description
Currently, PRs submitted by external developers (i.e. not a member of the GitHub Trilinos organization) are only tested by the autotester, if they have been approved. However, this often leaves them in this weird state of being approved "just for the sake of testing", but without an actual code review. In theory, they could be merged "unreviewed".
Examples:
- https://github.com/trilinos/Trilinos/pull/9187#pullrequestreview-671551779
- https://github.com/trilinos/Trilinos/pull/9153
- https://github.com/trilinos/Trilinos/pull/9107#pullrequestreview-660018354
I can definitely understand that one wants to somehow control which/how many external PRs are tested. However, requiring approval just to trigger the tests seems weird to me as outlined above.
I also see value in starting testing early on, as some problems only become evident during testing and then often require several iterations of code changes and testing.
### Possible solution
As far as I know, labels can only be applied by members of the Trilinos organization. So, instead of requiring approval, one could trigger the autotester for external PRs by setting a label "AT: ready for testing". This would still require some action of a Trilinos member to trigger testing, hence uncontrolled overload of testing machines by external PRs can be avoided as well. However, testing would be possible _without premature PR approvals_.
|
process
|
autotester allow testing of external contributions w o approval enhancement trilinos framework i m putting out this idea for discussion current state and problem description currently prs submitted by external developers i e not a member of the github trilinos organization are only tested by the autotester if they have been approved however this often leaves them in this weird state of being approved just for the sake of testing but without an actual code review in theory they could be merged unreviewed examples i can definitely understand that one wants to somehow control which how many external prs are tested however requiring approval just to trigger the tests seems weird to me as outlined above i also see value in starting testing early on as some problems only become evident during testing and then often require several iterations of code changes and testing possible solution as far as i know labels can only be applied by members of the trilinos organization so instead of requiring approval one could trigger the autotester for external prs by setting a label at ready for testing this would still require some action of a trilinos member to trigger testing hence uncontrolled overload of testing machines by external prs can be avoided as well however testing would be possible without premature pr approvals
| 1
|
5,724
| 8,567,919,234
|
IssuesEvent
|
2018-11-10 16:35:00
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
transfer or transferFrom sometimes succeed but do not generate events
|
libs-etherlib status-inprocess type-enhancement
|
Check out this transaction on Etherscan: https://etherscan.io/tx/0x60b32a9330fb926430c25eab285c548e30d60881bdabd94d72c4cbaa93435d50
The transfer happened but there was no event. In this case, EtherScan reports a 'possible error' because it can't find the matching event. This is very easy to find. If the input has the right signature, but no corresponding event was generated, there is a problem. This ties in with the idea of verifying the 'correctness' of a token to the standard, which is another issue elsewhere.
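A minimal sketch of that check (assuming a recent web3.py where `tx["input"]` is HexBytes, and a hypothetical local node endpoint):
```python
# Flag a transfer()-shaped call that succeeded without emitting Transfer.
from web3 import Web3

TRANSFER_SELECTOR = bytes.fromhex("a9059cbb")  # keccak("transfer(address,uint256)")[:4]
TRANSFER_TOPIC = Web3.keccak(text="Transfer(address,address,uint256)")

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # hypothetical endpoint
tx_hash = "0x60b32a9330fb926430c25eab285c548e30d60881bdabd94d72c4cbaa93435d50"
tx = w3.eth.get_transaction(tx_hash)
receipt = w3.eth.get_transaction_receipt(tx_hash)

looks_like_transfer = bytes(tx["input"])[:4] == TRANSFER_SELECTOR
emitted_event = any(log["topics"] and log["topics"][0] == TRANSFER_TOPIC
                    for log in receipt["logs"])
if receipt["status"] == 1 and looks_like_transfer and not emitted_event:
    print("possible non-compliant token: transfer succeeded without an event")
```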
|
1.0
|
transfer or transferFrom sometimes succeed but do not generate events - Check out this transaction on Etherscan: https://etherscan.io/tx/0x60b32a9330fb926430c25eab285c548e30d60881bdabd94d72c4cbaa93435d50
The transfer happened but there was no event. In this case, EtherScan reports a 'possible error' because it can't find the matching event. This is very easy to find. If the input has the right signature, but no corresponding event was generated, there is a problem. This ties in with the idea of verifying the 'correctness' of a token to the standard, which is another issue elsewhere.
|
process
|
transfer or transferfrom sometimes succeed but do not generate events check out this transaction on etherscan the transfer happened but there was no event in this case etherscan reports a possible error because it can t find the matching event this is very easy to find if the input has the right signature but no corresponding event was generated there is a problem this ties in with the idea of verifying the correctness of a token to the standard which is another issue elsewhere
| 1
|
175,675
| 27,956,020,472
|
IssuesEvent
|
2023-03-24 12:28:25
|
Tonomy-Foundation/Tonomy-ID
|
https://api.github.com/repos/Tonomy-Foundation/Tonomy-ID
|
opened
|
Re-enter PIN message
|
bug design
|
When a user logs in, the PIN and Fingerprint screens ADD a… if you already have them set up, the next screens should be used.
https://www.figma.com/file/cvV48t0f7O2znT6QBxK0Zj/Tonomy-ID?node-id=4229%3A23309&t=feVTF4UHiru0RGSe-1
and
https://www.figma.com/file/cvV48t0f7O2znT6QBxK0Zj/Tonomy-ID?node-id=4229%3A23368&t=feVTF4UHiru0RGSe-1
because they use language that doesn't allude to the need to ADD but to use existing ones.
|
1.0
|
Re-enter PIN message - When a user logs in, the PIN and Fingerprint screens ADD a… if you already have them set up, the next screens should be used.
https://www.figma.com/file/cvV48t0f7O2znT6QBxK0Zj/Tonomy-ID?node-id=4229%3A23309&t=feVTF4UHiru0RGSe-1
and
https://www.figma.com/file/cvV48t0f7O2znT6QBxK0Zj/Tonomy-ID?node-id=4229%3A23368&t=feVTF4UHiru0RGSe-1
because they use language that doesn't allude to the need to ADD but to use existing ones.
|
non_process
|
re enter pin message when a user logs in the pin and fingerprint screens add a… if you already have them set up the next screens should be used and because they use language that doesn’t allude you need to add but use existing ones
| 0
|
26,871
| 13,135,780,483
|
IssuesEvent
|
2020-08-07 03:58:47
|
golang/go
|
https://api.github.com/repos/golang/go
|
closed
|
runtime: working with small maps is 4x-10x slower than in nodejs
|
Performance
|
Please answer these questions before submitting your issue. Thanks!
#### What did you do?
Hello up there. Map performance was already discussed some time ago in #3885 and improved a bit. It was also said there that the map algorithm is chosen to work very well with very very large maps. However, maps are not always very very large and imho in many practical cases they are small and medium.
So please consider the following 3 programs:
```go
package main
//import "fmt"
func main() {
a := make(map[int]int)
for i := 0; i < 100000000; i++ {
a[i & 0xffff] = i
//a[i & 0x7f] = i
//a[i] = i
}
//fmt.Println(a)
}
```
(https://play.golang.org/p/rPH1pSM1Xk)
```javascript
#!/usr/bin/env nodejs
function main() {
var a = {};
for (var i = 0; i < 100000000; i++) {
a[i & 0xffff] = i;
//a[i & 0x7f] = i;
//a[i] = i;
}
//console.log(a)
}
main()
```
```python
#!/usr/bin/env pypy
def main():
a = {}
for i in range(100000000):
a[i & 0xffff] = i
#a[i & 0x7f] = i
#a[i] = i
#print(a)
if __name__ == '__main__':
main()
```
The time it takes to run them on i7-6600U is as follows:
Program | Time (seconds, best of 5)
------------ | -------------
map.go | 3.668
map.js | 0.385
map.py | 1.988
The go version is 9.5x slower than the javascript one, and ~1.8x slower than the pypy one.
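For reproducibility, a minimal harness sketch (hypothetical paths, assuming the go binary is built as ./map and nodejs/pypy are on PATH) that collects best-of-5 wall-clock timings like the table above:
```python
import subprocess
import time

def best_of(cmd, runs=5):
    # Keep the minimum wall-clock time over several runs.
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
        best = min(best, time.perf_counter() - start)
    return best

for cmd in (["./map"], ["nodejs", "map.js"], ["pypy", "map.py"]):
    print(" ".join(cmd), f"{best_of(cmd):.3f}s")
```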
If we reduce the actual map size from 64K elements to 128 elements, activating the `a[i & 0x7f] = i` case via e.g. the following patch:
```diff
--- a/map.go.kirr
+++ b/map.go
@@ -5,8 +5,8 @@ func main() {
a := make(map[int]int)
for i := 0; i < 100000000; i++ {
- a[i & 0xffff] = i
- //a[i & 0x7f] = i
+ //a[i & 0xffff] = i
+ a[i & 0x7f] = i
//a[i] = i
}
```
timings become:
Program | Time (seconds, best of 5)
------------ | -------------
map.go | 1.571
map.js | 0.377
map.py | 0.896
javascript becomes only a bit faster here while go & pypy improved ~ 2.3x / 2.2x respectively. Still go is 4x slower than javascript and 1.7x slower than pypy.
We can also test how it works if we do not limit the map size and let it grow on every operation. Yes, javascript and pypy are more memory hungry and for original niter=1E8 I'm getting out-of-memory in their cases on my small laptop, but let's test with e.g. niter=1E7 (diff to original program):
```diff
--- a/map.go.kirr
+++ b/map.go
@@ -4,10 +4,10 @@ package main
func main() {
a := make(map[int]int)
- for i := 0; i < 100000000; i++ {
- a[i & 0xffff] = i
+ for i := 0; i < 100000000 / 10; i++ {
+ //a[i & 0xffff] = i
//a[i & 0x7f] = i
- //a[i] = i
+ a[i] = i
}
//fmt.Println(a)
```
timings become:
Program | Time (seconds, best of 5)
------------ | -------------
map.go | 2.877
map.js | 0.438
map.py | 1.277
So the go/js ratio is ~6.5x and the go/pypy ratio is ~2.2x.
The profile for original program (`a[i & 0xffff] = i`) is:
```
File: map
Type: cpu
Time: Mar 10, 2017 at 7:18pm (MSK)
Duration: 3.70s, Total samples = 36ms ( 0.97%)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top10
Showing nodes accounting for 36000us, 100% of 36000us total
flat flat% sum% cum cum%
27800us 77.22% 77.22% 33900us 94.17% runtime.mapassign /home/kirr/src/tools/go/go/src/runtime/hashmap.go
3100us 8.61% 85.83% 3100us 8.61% runtime.aeshash64 /home/kirr/src/tools/go/go/src/runtime/asm_amd64.s
3000us 8.33% 94.17% 3000us 8.33% runtime.memequal64 /home/kirr/src/tools/go/go/src/runtime/alg.go
1700us 4.72% 98.89% 36000us 100% main.main /home/kirr/tmp/trashme/map/map.go
400us 1.11% 100% 400us 1.11% runtime.mapassign /home/kirr/src/tools/go/go/src/runtime/stubs.go
0 0% 100% 36000us 100% runtime.main /home/kirr/src/tools/go/go/src/runtime/proc.go
```
#### What did you expect to see?
Map operations for small / medium maps are as fast or better than in nodejs.
#### What did you see instead?
Map operations are 4x-10x slower than in javascript for maps sizes that are commonly present in many programs.
#### Does this issue reproduce with the latest release (go1.8)?
Yes.
#### System details
```
go version devel +d11a2184fb Fri Mar 10 01:39:09 2017 +0000 linux/amd64
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/kirr/go"
GORACE=""
GOROOT="/home/kirr/src/tools/go/go"
GOTOOLDIR="/home/kirr/src/tools/go/go/pkg/tool/linux_amd64"
GCCGO="/usr/bin/gccgo"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build714926978=/tmp/go-build -gno-record-gcc-switches"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOROOT/bin/go version: go version devel +d11a2184fb Fri Mar 10 01:39:09 2017 +0000 linux/amd64
GOROOT/bin/go tool compile -V: compile version devel +d11a2184fb Fri Mar 10 01:39:09 2017 +0000 X:framepointer
uname -sr: Linux 4.9.0-2-amd64
Distributor ID: Debian
Description: Debian GNU/Linux 9.0 (stretch)
Release: 9.0
Codename: stretch
/lib/x86_64-linux-gnu/libc.so.6: GNU C Library (Debian GLIBC 2.24-9) stable release version 2.24, by Roland McGrath et al.
gdb --version: GNU gdb (Debian 7.12-6) 7.12.0.20161007-git
```
Thanks beforehand,
Kirill
/cc @rsc, @randall77
|
True
|
runtime: working with small maps is 4x-10x slower than in nodejs - Please answer these questions before submitting your issue. Thanks!
#### What did you do?
Hello up there. Map performance was already discussed some time ago in #3885 and improved a bit. It was also said there that the map algorithm is chosen to work very well with very very large maps. However, maps are not always very very large and imho in many practical cases they are small and medium.
So please consider the following 3 programs:
```go
package main
//import "fmt"
func main() {
a := make(map[int]int)
for i := 0; i < 100000000; i++ {
a[i & 0xffff] = i
//a[i & 0x7f] = i
//a[i] = i
}
//fmt.Println(a)
}
```
(https://play.golang.org/p/rPH1pSM1Xk)
```javascript
#!/usr/bin/env nodejs
function main() {
var a = {};
for (var i = 0; i < 100000000; i++) {
a[i & 0xffff] = i;
//a[i & 0x7f] = i;
//a[i] = i;
}
//console.log(a)
}
main()
```
```python
#!/usr/bin/env pypy
def main():
a = {}
for i in range(100000000):
a[i & 0xffff] = i
#a[i & 0x7f] = i
#a[i] = i
#print(a)
if __name__ == '__main__':
main()
```
The time it takes to run them on i7-6600U is as follows:
Program | Time (seconds, best of 5)
------------ | -------------
map.go | 3.668
map.js | 0.385
map.py | 1.988
The go version is 9.5x slower than the javascript one, and ~1.8x slower than the pypy one.
If we reduce the actual map size from 64K elements to 128 elements, activating the `a[i & 0x7f] = i` case via e.g. the following patch:
```diff
--- a/map.go.kirr
+++ b/map.go
@@ -5,8 +5,8 @@ func main() {
a := make(map[int]int)
for i := 0; i < 100000000; i++ {
- a[i & 0xffff] = i
- //a[i & 0x7f] = i
+ //a[i & 0xffff] = i
+ a[i & 0x7f] = i
//a[i] = i
}
```
timings become:
Program | Time (seconds, best of 5)
------------ | -------------
map.go | 1.571
map.js | 0.377
map.py | 0.896
javascript becomes only a bit faster here while go & pypy improved ~ 2.3x / 2.2x respectively. Still go is 4x slower than javascript and 1.7x slower than pypy.
We can also test how it works if we do not limit the map size and let it grow on every operation. Yes, javascript and pypy are more memory hungry and for original niter=1E8 I'm getting out-of-memory in their cases on my small laptop, but let's test with e.g. niter=1E7 (diff to original program):
```diff
--- a/map.go.kirr
+++ b/map.go
@@ -4,10 +4,10 @@ package main
func main() {
a := make(map[int]int)
- for i := 0; i < 100000000; i++ {
- a[i & 0xffff] = i
+ for i := 0; i < 100000000 / 10; i++ {
+ //a[i & 0xffff] = i
//a[i & 0x7f] = i
- //a[i] = i
+ a[i] = i
}
//fmt.Println(a)
```
timings become:
Program | Time (seconds, best of 5)
------------ | -------------
map.go | 2.877
map.js | 0.438
map.py | 1.277
So the go/js ratio is ~6.5x and the go/pypy ratio is ~2.2x.
The profile for original program (`a[i & 0xffff] = i`) is:
```
File: map
Type: cpu
Time: Mar 10, 2017 at 7:18pm (MSK)
Duration: 3.70s, Total samples = 36ms ( 0.97%)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top10
Showing nodes accounting for 36000us, 100% of 36000us total
flat flat% sum% cum cum%
27800us 77.22% 77.22% 33900us 94.17% runtime.mapassign /home/kirr/src/tools/go/go/src/runtime/hashmap.go
3100us 8.61% 85.83% 3100us 8.61% runtime.aeshash64 /home/kirr/src/tools/go/go/src/runtime/asm_amd64.s
3000us 8.33% 94.17% 3000us 8.33% runtime.memequal64 /home/kirr/src/tools/go/go/src/runtime/alg.go
1700us 4.72% 98.89% 36000us 100% main.main /home/kirr/tmp/trashme/map/map.go
400us 1.11% 100% 400us 1.11% runtime.mapassign /home/kirr/src/tools/go/go/src/runtime/stubs.go
0 0% 100% 36000us 100% runtime.main /home/kirr/src/tools/go/go/src/runtime/proc.go
```
#### What did you expect to see?
Map operations for small / medium maps are as fast or better than in nodejs.
#### What did you see instead?
Map operations are 4x-10x slower than in javascript for maps sizes that are commonly present in many programs.
#### Does this issue reproduce with the latest release (go1.8)?
Yes.
#### System details
```
go version devel +d11a2184fb Fri Mar 10 01:39:09 2017 +0000 linux/amd64
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/kirr/go"
GORACE=""
GOROOT="/home/kirr/src/tools/go/go"
GOTOOLDIR="/home/kirr/src/tools/go/go/pkg/tool/linux_amd64"
GCCGO="/usr/bin/gccgo"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build714926978=/tmp/go-build -gno-record-gcc-switches"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOROOT/bin/go version: go version devel +d11a2184fb Fri Mar 10 01:39:09 2017 +0000 linux/amd64
GOROOT/bin/go tool compile -V: compile version devel +d11a2184fb Fri Mar 10 01:39:09 2017 +0000 X:framepointer
uname -sr: Linux 4.9.0-2-amd64
Distributor ID: Debian
Description: Debian GNU/Linux 9.0 (stretch)
Release: 9.0
Codename: stretch
/lib/x86_64-linux-gnu/libc.so.6: GNU C Library (Debian GLIBC 2.24-9) stable release version 2.24, by Roland McGrath et al.
gdb --version: GNU gdb (Debian 7.12-6) 7.12.0.20161007-git
```
Thanks beforehand,
Kirill
/cc @rsc, @randall77
|
non_process
|
runtime working with small maps is slower than in nodejs please answer these questions before submitting your issue thanks what did you do hello up there map performance was already discussed some time ago in and improved a bit it was also said there that the map algorithm is choosen to work very well with very very large maps however maps are not always very very large and imho in many practical cases they are small and medium so please consider the following programs go package main import fmt func main a make map int for i i i a i a i a i fmt println a javascript usr bin env nodejs function main var a for var i i i a i a i a i console log a main python usr bin env pypy def main a for i in range a i a i a i print a if name main main the time it takes to run them on is as follows program time seconds best of map go map js map py the go version is slower than javascript one and slower than pypy one if we reduce the actual map size from elements to elements activating the a i case via e g the following patch diff a map go kirr b map go func main a make map int for i i i a i a i a i a i a i timings become program time seconds best of map go map js map py javascript becomes only a bit faster here while go pypy improved respectively still go is slower than javascript and slower than pypy we can also test how it works if we do not limit the map size and let it grow on every operation yes javascript and pypy are more memory hungry and for original niter i m getting out of memory in their cases on my small laptop but let s test with e g niter diff to original program diff a map go kirr b map go package main func main a make map int for i i i a i for i i i a i a i a i a i fmt println a timings become program time seconds best of map go map js map py so it is go js slower and go pypy is slower the profile for original program a i is file map type cpu time mar at msk duration total samples entering interactive mode type help for commands o for options pprof showing nodes accounting for of total flat flat sum cum cum runtime mapassign home kirr src tools go go src runtime hashmap go runtime home kirr src tools go go src runtime asm s runtime home kirr src tools go go src runtime alg go main main home kirr tmp trashme map map go runtime mapassign home kirr src tools go go src runtime stubs go runtime main home kirr src tools go go src runtime proc go what did you expect to see map operations for small medium maps are as fast or better than in nodejs what did you see instead map operations are slower than in javascript for maps sizes that are commonly present in many programs does this issue reproduce with the latest release yes system details go version devel fri mar linux goarch gobin goexe gohostarch gohostos linux goos linux gopath home kirr go gorace goroot home kirr src tools go go gotooldir home kirr src tools go go pkg tool linux gccgo usr bin gccgo cc gcc gogccflags fpic pthread fmessage length fdebug prefix map tmp go tmp go build gno record gcc switches cxx g cgo enabled cgo cflags g cgo cppflags cgo cxxflags g cgo fflags g cgo ldflags g pkg config pkg config goroot bin go version go version devel fri mar linux goroot bin go tool compile v compile version devel fri mar x framepointer uname sr linux distributor id debian description debian gnu linux stretch release codename stretch lib linux gnu libc so gnu c library debian glibc stable release version by roland mcgrath et al gdb version gnu gdb debian git thanks beforehand kirill cc rsc
| 0
|
28,566
| 23,347,638,631
|
IssuesEvent
|
2022-08-09 19:37:50
|
RadicalZephyr/bitburner-scripts
|
https://api.github.com/repos/RadicalZephyr/bitburner-scripts
|
closed
|
Create script to update scripts from github
|
Infrastructure
|
- Should not change every/often
- Updates should be based on an index file
- Should include a self-update mechanism (see the sketch below)
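A rough sketch of the intended flow, purely for illustration (the base URL, index file name, and flat file layout are assumptions, and the real scripts would use the game's own API rather than Go):
```go
// Hypothetical sketch of the index-driven update flow (not the actual
// implementation; the base URL and index file name are assumptions,
// and script paths are assumed to be flat file names).
package main

import (
	"bufio"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

const baseURL = "https://raw.githubusercontent.com/RadicalZephyr/bitburner-scripts/main/"

func fetch(path string) ([]byte, error) {
	resp, err := http.Get(baseURL + path)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", path, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	// The index lists one script path per line and should rarely change.
	index, err := fetch("index.txt")
	if err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(strings.NewReader(string(index)))
	for sc.Scan() {
		path := strings.TrimSpace(sc.Text())
		if path == "" {
			continue
		}
		data, err := fetch(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, "skipping", path, err)
			continue
		}
		// Overwriting the local copy also covers self-update when the
		// updater itself is listed in the index.
		if err := os.WriteFile(path, data, 0o644); err != nil {
			fmt.Fprintln(os.Stderr, "write", path, err)
		}
	}
}
```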
|
1.0
|
Create script to update scripts from github - - Should not change every/often
- Updates should be based on an index file
- Should include a self-update mechanism
|
non_process
|
create script to update scripts from github should not change every often updates should be based on an index file should include a self update mechanism
| 0
|
7,409
| 10,531,226,849
|
IssuesEvent
|
2019-10-01 08:02:10
|
ViniciusDeep/Revill
|
https://api.github.com/repos/ViniciusDeep/Revill
|
opened
|
Add new tasks at Readme
|
Hacktoberfest Process
|
Add new tasks to be made in the app to the Readme of the repository
## Important
* Add them to the To-do
* Look at the design
* Observe the idea
|
1.0
|
Add new tasks at Readme - Add new tasks to be made in the app to the Readme of the repository
## Important
* Add them to the To-do
* Look at the design
* Observe the idea
|
process
|
add new tasks at readme add new tasks to be made in the app to the readme of the repository important add them to the to do look at the design observe the idea
| 1
|
221,386
| 24,621,513,174
|
IssuesEvent
|
2022-10-16 01:03:39
|
Baneeishaque/spring_store_thymeleaf
|
https://api.github.com/repos/Baneeishaque/spring_store_thymeleaf
|
closed
|
CVE-2018-11698 (High) detected in node-sass-3.13.1.tgz, node-sass15fe42ed92dea8e086e7837e53ecd8190c0179b9 - autoclosed
|
security vulnerability
|
## CVE-2018-11698 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-3.13.1.tgz</b>, <b>node-sass15fe42ed92dea8e086e7837e53ecd8190c0179b9</b></p></summary>
<p>
<details><summary><b>node-sass-3.13.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-3.13.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-3.13.1.tgz</a></p>
<p>Path to dependency file: /html_site_template_customer/fashi/Source/jquery-nice-select-1.1.0/jquery-nice-select-1.1.0/jquery-nice-select-1.1.0/package.json</p>
<p>Path to vulnerable library: /html_site_template_customer/fashi/Source/SlickNav-master/SlickNav-master/node_modules/node-sass/package.json,/html_site_template_customer/fashi/Source/SlickNav-master/SlickNav-master/node_modules/node-sass/package.json,/html_site_template_customer/fashi/Source/SlickNav-master/SlickNav-master/node_modules/node-sass/package.json,/html_site_template_customer/fashi/Source/SlickNav-master/SlickNav-master/node_modules/node-sass/package.json,/html_site_template_customer/fashi/Source/SlickNav-master/SlickNav-master/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-2.3.2.tgz (Root Library)
- :x: **node-sass-3.13.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/Baneeishaque/spring_store_thymeleaf/commit/08ba0922f3668b139df2a365e01b4d3e57faef86">08ba0922f3668b139df2a365e01b4d3e57faef86</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. An out-of-bounds read of a memory region was found in the function Sass::handle_error which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11698>CVE-2018-11698</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution (node-sass): 5.0.0</p>
<p>Direct dependency fix Resolution (gulp-sass): 5.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-11698 (High) detected in node-sass-3.13.1.tgz, node-sass15fe42ed92dea8e086e7837e53ecd8190c0179b9 - autoclosed - ## CVE-2018-11698 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-3.13.1.tgz</b>, <b>node-sass15fe42ed92dea8e086e7837e53ecd8190c0179b9</b></p></summary>
<p>
<details><summary><b>node-sass-3.13.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-3.13.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-3.13.1.tgz</a></p>
<p>Path to dependency file: /html_site_template_customer/fashi/Source/jquery-nice-select-1.1.0/jquery-nice-select-1.1.0/jquery-nice-select-1.1.0/package.json</p>
<p>Path to vulnerable library: /html_site_template_customer/fashi/Source/SlickNav-master/SlickNav-master/node_modules/node-sass/package.json,/html_site_template_customer/fashi/Source/SlickNav-master/SlickNav-master/node_modules/node-sass/package.json,/html_site_template_customer/fashi/Source/SlickNav-master/SlickNav-master/node_modules/node-sass/package.json,/html_site_template_customer/fashi/Source/SlickNav-master/SlickNav-master/node_modules/node-sass/package.json,/html_site_template_customer/fashi/Source/SlickNav-master/SlickNav-master/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-2.3.2.tgz (Root Library)
- :x: **node-sass-3.13.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/Baneeishaque/spring_store_thymeleaf/commit/08ba0922f3668b139df2a365e01b4d3e57faef86">08ba0922f3668b139df2a365e01b4d3e57faef86</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. An out-of-bounds read of a memory region was found in the function Sass::handle_error which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11698>CVE-2018-11698</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution (node-sass): 5.0.0</p>
<p>Direct dependency fix Resolution (gulp-sass): 5.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in node sass tgz node autoclosed cve high severity vulnerability vulnerable libraries node sass tgz node node sass tgz wrapper around libsass library home page a href path to dependency file html site template customer fashi source jquery nice select jquery nice select jquery nice select package json path to vulnerable library html site template customer fashi source slicknav master slicknav master node modules node sass package json html site template customer fashi source slicknav master slicknav master node modules node sass package json html site template customer fashi source slicknav master slicknav master node modules node sass package json html site template customer fashi source slicknav master slicknav master node modules node sass package json html site template customer fashi source slicknav master slicknav master node modules node sass package json dependency hierarchy gulp sass tgz root library x node sass tgz vulnerable library found in head commit a href vulnerability details an issue was discovered in libsass through an out of bounds read of a memory region was found in the function sass handle error which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution node sass direct dependency fix resolution gulp sass step up your open source security game with mend
| 0
|
20,606
| 27,269,973,440
|
IssuesEvent
|
2023-02-22 21:22:41
|
OpenDataScotland/the_od_bods
|
https://api.github.com/repos/OpenDataScotland/the_od_bods
|
closed
|
Replace csv storage formats
|
data processing back end
|
This project so far has relied on csv files as storage of outputs and inputs into the following processes, and that's been working, but as the variety and volume of listings and publishers grow, we're starting to see issues with encoding, line endings, quoting, arrays, etc.
This issue needs to consider replacing csv files as storage for:
- web scrapers output (extract)
- merge_data.py output (aggregate and clean)
JSON has been suggested but we're not closed to other options.
As a secondary outcome, we'll still want to provide a .csv as output for public users to download, but this should be published output only.
See [process flow](https://github.com/OpenDataScotland/the_od_bods/wiki/About-the-OD_BODS-project#tools--tech) for current system and PR #160
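To make the suggestion concrete, here is a rough sketch of the kind of swap involved (in Go purely for illustration since no format decision has been made; the real pipeline is Python, and the field names here are invented):
```go
// Illustrative sketch of replacing csv intermediate storage with json
// (field names are invented; the real pipeline is Python).
package main

import (
	"encoding/csv"
	"encoding/json"
	"os"
)

type Listing struct {
	Title     string   `json:"title"`
	Publisher string   `json:"publisher"`
	Tags      []string `json:"tags"` // arrays survive intact, unlike in csv cells
}

func main() {
	// Read the legacy csv (assumed header: title,publisher).
	f, err := os.Open("scraped.csv")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	rows, err := csv.NewReader(f).ReadAll()
	if err != nil {
		panic(err)
	}

	var listings []Listing
	for i, r := range rows {
		if i == 0 || len(r) < 2 {
			continue // skip header and short rows
		}
		listings = append(listings, Listing{Title: r[0], Publisher: r[1]})
	}

	// Write json as the new intermediate format; a public csv can still
	// be generated from this at publish time.
	out, err := os.Create("scraped.json")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	enc := json.NewEncoder(out)
	enc.SetIndent("", "  ")
	if err := enc.Encode(listings); err != nil {
		panic(err)
	}
}
```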
|
1.0
|
Replace csv storage formats - This project so far has relied on csv files as storage of outputs and inputs into the following processes, and that's been working, but as the variety and volume of listings and publishers grow, we're starting to see issues with encoding, line endings, quoting, arrays, etc.
This issue needs to consider replacing csv files as storage for:
- web scrapers output (extract)
- merge_data.py output (aggregate and clean)
JSON has been suggested but we're not closed to other options.
As a secondary outcome, we'll still want to provide a .csv as output for public users to download, but this should be published output only.
See [process flow](https://github.com/OpenDataScotland/the_od_bods/wiki/About-the-OD_BODS-project#tools--tech) for current system and PR #160
|
process
|
replace csv storage formats this project so far has relied on csv files as storage of outputs and inputs into following processes and that s been working but as the variety and volume of listings and publishers grow we re starting to see issues with encoding line endings quoting arrays etc this issue needs to consider replacing csv files as storage for web scrapers output extract merge data py output aggregate and clean json has been suggested but we re not closed to other options as a secondary outcome we ll still want to provide a csv as output for public users to download but this should be published output only see for current system and pr
| 1
|
49,810
| 13,466,214,535
|
IssuesEvent
|
2020-09-09 22:24:51
|
wrbejar/JavaVulnerableLab
|
https://api.github.com/repos/wrbejar/JavaVulnerableLab
|
opened
|
CVE-2020-2933 (Low) detected in mysql-connector-java-5.1.26.jar
|
security vulnerability
|
## CVE-2020-2933 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.26.jar</b></p></summary>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /tmp/ws-ua_20200909222335_NVLNZE/archiveExtraction_GXAOOR/20200909222336/ws-scm_depth_0/JavaVulnerableLab/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar,_depth_0/JavaVulnerableLab/bin/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,_depth_0/JavaVulnerableLab/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,/JavaVulnerableLab/bin/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,_depth_0/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,_depth_0/JavaVulnerableLab/bin/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,canner/.m2/repository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar,canner/.m2/repository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.26.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/wrbejar/JavaVulnerableLab/commit/aae1007aa718bfac5626f49a658ffcb83dd33104">aae1007aa718bfac5626f49a658ffcb83dd33104</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 5.1.48 and prior. Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 2.2 (Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:H/UI:N/S:U/C:N/I:N/A:L).
<p>Publish Date: 2020-04-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2933>CVE-2020-2933</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>2.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://docs.oracle.com/javase/7/docs/api/javax/xml/XMLConstants.html#FEATURE_SECURE_PROCESSING">https://docs.oracle.com/javase/7/docs/api/javax/xml/XMLConstants.html#FEATURE_SECURE_PROCESSING</a></p>
<p>Release Date: 2020-04-15</p>
<p>Fix Resolution: mysql:mysql-connector-java:5.1.49</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.26","isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.26","isMinimumFixVersionAvailable":true,"minimumFixVersion":"mysql:mysql-connector-java:5.1.49"}],"vulnerabilityIdentifier":"CVE-2020-2933","vulnerabilityDetails":"Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 5.1.48 and prior. Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 2.2 (Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:H/UI:N/S:U/C:N/I:N/A:L).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2933","cvss3Severity":"low","cvss3Score":"2.2","cvss3Metrics":{"A":"Low","AC":"High","PR":"High","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-2933 (Low) detected in mysql-connector-java-5.1.26.jar - ## CVE-2020-2933 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.26.jar</b></p></summary>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /tmp/ws-ua_20200909222335_NVLNZE/archiveExtraction_GXAOOR/20200909222336/ws-scm_depth_0/JavaVulnerableLab/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar,_depth_0/JavaVulnerableLab/bin/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,_depth_0/JavaVulnerableLab/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,/JavaVulnerableLab/bin/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,_depth_0/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,_depth_0/JavaVulnerableLab/bin/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,canner/.m2/repository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar,canner/.m2/repository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.26.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/wrbejar/JavaVulnerableLab/commit/aae1007aa718bfac5626f49a658ffcb83dd33104">aae1007aa718bfac5626f49a658ffcb83dd33104</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 5.1.48 and prior. Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 2.2 (Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:H/UI:N/S:U/C:N/I:N/A:L).
<p>Publish Date: 2020-04-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2933>CVE-2020-2933</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>2.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://docs.oracle.com/javase/7/docs/api/javax/xml/XMLConstants.html#FEATURE_SECURE_PROCESSING">https://docs.oracle.com/javase/7/docs/api/javax/xml/XMLConstants.html#FEATURE_SECURE_PROCESSING</a></p>
<p>Release Date: 2020-04-15</p>
<p>Fix Resolution: mysql:mysql-connector-java:5.1.49</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.26","isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.26","isMinimumFixVersionAvailable":true,"minimumFixVersion":"mysql:mysql-connector-java:5.1.49"}],"vulnerabilityIdentifier":"CVE-2020-2933","vulnerabilityDetails":"Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 5.1.48 and prior. Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 2.2 (Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:H/UI:N/S:U/C:N/I:N/A:L).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2933","cvss3Severity":"low","cvss3Score":"2.2","cvss3Metrics":{"A":"Low","AC":"High","PR":"High","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve low detected in mysql connector java jar cve low severity vulnerability vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file tmp ws ua nvlnze archiveextraction gxaoor ws scm depth javavulnerablelab target javavulnerablelab meta inf maven org cysecurity javavulnerablelab pom xml path to vulnerable library canner repository mysql mysql connector java mysql connector java jar depth javavulnerablelab bin target javavulnerablelab meta inf maven org cysecurity javavulnerablelab target javavulnerablelab web inf lib mysql connector java jar depth javavulnerablelab target javavulnerablelab meta inf maven org cysecurity javavulnerablelab target javavulnerablelab web inf lib mysql connector java jar javavulnerablelab bin target javavulnerablelab web inf lib mysql connector java jar javavulnerablelab target javavulnerablelab web inf lib mysql connector java jar depth javavulnerablelab target javavulnerablelab web inf lib mysql connector java jar depth javavulnerablelab bin target javavulnerablelab web inf lib mysql connector java jar canner repository mysql mysql connector java mysql connector java jar canner repository mysql mysql connector java mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in head commit a href vulnerability details vulnerability in the mysql connectors product of oracle mysql component connector j supported versions that are affected are and prior difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise mysql connectors successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service partial dos of mysql connectors cvss base score availability impacts cvss vector cvss av n ac h pr h ui n s u c n i n a l publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required high user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution mysql mysql connector java check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails vulnerability in the mysql connectors product of oracle mysql component connector j supported versions that are affected are and prior difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise mysql connectors successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service partial dos of mysql connectors cvss base score availability impacts cvss vector cvss av n ac h pr h ui n s u c n i n a l vulnerabilityurl
| 0
|
76,618
| 9,474,461,470
|
IssuesEvent
|
2019-04-19 07:30:57
|
Geonovum/KP-APIs
|
https://api.github.com/repos/Geonovum/KP-APIs
|
reopened
|
Use clientid as API-key when supporting both API-key & OAuth
|
te bespreken in werkgroep beveiliging to discuss design rules
|
When supporting both OAuth and API-key one can re-use the OAuth client-id as an API-key.
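A minimal sketch of what this could look like server-side (hypothetical; the client registry and token validation are placeholder stubs, not a real implementation):
```go
// Hypothetical sketch: accept either an OAuth bearer token or an API key,
// where the API key is simply the registered OAuth client-id.
package main

import (
	"net/http"
	"strings"
)

// Placeholder stubs standing in for a real client registry / token validator.
func isRegisteredClientID(id string) bool { return id == "example-client-id" }
func validBearerToken(tok string) bool    { return tok == "example-token" }

func auth(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// OAuth path: validate the bearer token.
		if h := r.Header.Get("Authorization"); strings.HasPrefix(h, "Bearer ") {
			if validBearerToken(strings.TrimPrefix(h, "Bearer ")) {
				next.ServeHTTP(w, r)
				return
			}
		}
		// API-key path: the key is expected to equal an OAuth client-id.
		if isRegisteredClientID(r.Header.Get("X-API-Key")) {
			next.ServeHTTP(w, r)
			return
		}
		http.Error(w, "unauthorized", http.StatusUnauthorized)
	})
}

func main() {
	http.ListenAndServe(":8080", auth(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})))
}
```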
|
1.0
|
Use clientid as API-key when supporting both API-key & OAuth - When supporting both OAuth and API-key one can re-use the OAuth client -id as an API-key.
|
non_process
|
use clientid as api key when supporting both api key oauth when supporting both oauth and api key one can re use the oauth client id as an api key
| 0
|
273,233
| 8,528,135,040
|
IssuesEvent
|
2018-11-02 22:09:01
|
HabitRPG/habitica
|
https://api.github.com/repos/HabitRPG/habitica
|
opened
|
Replace chat @mentioning with a text-editable plugin
|
help wanted priority: minor section: Tavern Chat
|
[//]: # (Before logging this issue, please post to the Report a Bug guild from the Habitica website's Help menu. Most bugs can be handled quickly there. If a GitHub issue is needed, you will be advised of that by a moderator or staff member -- a player with a dark blue or purple name. It is recommended that you don't create a new issue unless advised to.)
[//]: # (Bugs in the mobile apps can also be reported there.)
[//]: # (If you have a feature request, use "Help > Request a Feature", not GitHub or the Report a Bug guild.)
[//]: # (For more guidelines see https://github.com/HabitRPG/habitica/issues/2760)
[//]: # (Fill out relevant information - UUID is found from the Habitia website at User Icon > Settings > API)
### Description
[//]: # (Describe bug in detail here. Include screenshots if helpful.)
Part of the username overhaul project underway is to make chat @mentioning prettier and more user-friendly. The majority of the implementation is accomplished on #10784, but given our use of a `textarea` component in chat, we weren't able to add styling for autocompleted @s within the text entry field itself. To make this possible, we'd need to replace our current autocomplete implementation, using a `contenteditable` div instead of a `textarea`. From there, we could employ an existing Vue plugin like https://github.com/fritx/vue-at to reimplement the rest of the functionality.
|
1.0
|
Replace chat @mentioning with a text-editable plugin - [//]: # (Before logging this issue, please post to the Report a Bug guild from the Habitica website's Help menu. Most bugs can be handled quickly there. If a GitHub issue is needed, you will be advised of that by a moderator or staff member -- a player with a dark blue or purple name. It is recommended that you don't create a new issue unless advised to.)
[//]: # (Bugs in the mobile apps can also be reported there.)
[//]: # (If you have a feature request, use "Help > Request a Feature", not GitHub or the Report a Bug guild.)
[//]: # (For more guidelines see https://github.com/HabitRPG/habitica/issues/2760)
[//]: # (Fill out relevant information - UUID is found from the Habitia website at User Icon > Settings > API)
### Description
[//]: # (Describe bug in detail here. Include screenshots if helpful.)
Part of the username overhaul project underway is to make chat @mentioning prettier and more user-friendly. The majority of the implementation is accomplished on #10784, but given our use of a `textarea` component in chat, we weren't able to add styling for autocompleted @s within the text entry field itself. To make this possible, we'd need to replace our current autocomplete implementation, using a `contenteditable` div instead of a `textarea`. From there, we could employ an existing Vue plugin like https://github.com/fritx/vue-at to reimplement the rest of the functionality.
|
non_process
|
replace chat mentioning with a text editable plugin before logging this issue please post to the report a bug guild from the habitica website s help menu most bugs can be handled quickly there if a github issue is needed you will be advised of that by a moderator or staff member a player with a dark blue or purple name it is recommended that you don t create a new issue unless advised to bugs in the mobile apps can also be reported there if you have a feature request use help request a feature not github or the report a bug guild for more guidelines see fill out relevant information uuid is found from the habitia website at user icon settings api description describe bug in detail here include screenshots if helpful part of the username overhaul project underway is to make chat mentioning prettier and more user friendly the majority of the implementation is accomplished on but given our use of a textarea component in chat we weren t able to add styling for autocompleted s within the text entry field itself to make this possible we d need to replace our current autocomplete implementation using a contenteditable div instead of a textarea from there we could employ an existing vue plugin like to reimplement the rest of the functionality
| 0
|
100,925
| 11,208,030,303
|
IssuesEvent
|
2020-01-06 06:23:11
|
addthoriq/El-Zakiy
|
https://api.github.com/repos/addthoriq/El-Zakiy
|
closed
|
No Rest, Deadlines Haunting
|
documentation good first issue
|
The deadline has arrived
The deadline has arrived
Hooray!! Hooray!! Hooray!!
*hooray, my foot_-
|
1.0
|
No Rest, Deadlines Haunting - The deadline has arrived
The deadline has arrived
Hooray!! Hooray!! Hooray!!
*hooray, my foot_-
|
non_process
|
no rest deadlines haunting the deadline has arrived the deadline has arrived hooray hooray hooray hooray my foot
| 0
|
746,940
| 26,051,262,707
|
IssuesEvent
|
2022-12-22 18:57:05
|
hdmf-dev/hdmf-zarr
|
https://api.github.com/repos/hdmf-dev/hdmf-zarr
|
opened
|
Update tox.ini to use test_gallery.py and fix gallery-python-3.7 tests
|
category: bug category: enhancement priority: high
|
1. Update tox.ini to use test_gallery.py to be in line with HDMF
2. Currently, both linux-gallery-python3.7-minimum and windows-gallery-python3.7-minimum pass locally when running "python test.py --example", but not during the github checks. I've also tested a version of test_gallery.py by running "python test_gallery.py" in a branch; however, this returns an error about missing files. (Refer to attached images)

|
1.0
|
Update tox.ini to use test_gallery.py and fix gallery-python-3.7 tests - 1. Update tox.ini to use test_gallery.py to be in line with HDMF
2. Currently, both linux-gallery-python3.7-minimum and windows-gallery-python3.7-minimum pass locally when running "python test.py --example", but not during the github checks. I've also tested a version of test_gallery.py by running "python test_gallery.py" in a branch; however, this returns an error about missing files. (Refer to attached images)

|
non_process
|
update tox ini to use test gallery py and fix gallery python tests update tox ini to use test gallery py to be in line with hdmf currently both linux gallery minimum and windows gallery minimum will pass locally when running python test py example but not during the github checks i ve also tested a version of test gallery py by running in a branch python test gallery py however this returns an error regarding missing files refer to attached images
| 0
|
11,389
| 13,338,337,412
|
IssuesEvent
|
2020-08-28 10:50:27
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Exception connecting to Signalr hub when using windows auth
|
area-System.Net.Http bug tenet-compatibility
|
<!--
More information on our issue management policies can be found here: https://aka.ms/aspnet/issue-policies
Please keep in mind that the GitHub issue tracker is not intended as a general support forum, but for reporting **non-security** bugs and feature requests.
If you believe you have an issue that affects the SECURITY of the platform, please do NOT create an issue and instead email your issue details to secure@microsoft.com. Your report may be eligible for our [bug bounty](https://www.microsoft.com/en-us/msrc/bounty-dot-net-core) but ONLY if it is reported through email.
For other types of questions, consider using [StackOverflow](https://stackoverflow.com).
-->
### Describe the bug
The signalr client throws an exception connecting to the hub when using Windows Authentication with core 5.0 preview 6. The hub is hosted behind IIS express.
If I change the client TargetFramework to netcoreapp3.1 it connects normally (hub is still on 5):
```
info: Microsoft.AspNetCore.Http.Connections.Client.Internal.WebSocketsTransport[1]
Starting transport. Transfer mode: Text. Url: 'wss://localhost:44356/chathub?id=yrfeYHigZPZ1KxrRXa6xcg'.
info: Microsoft.AspNetCore.Http.Connections.Client.HttpConnection[3]
HttpConnection Started.
info: Microsoft.AspNetCore.SignalR.Client.HubConnection[24]
Using HubProtocol 'json v1'.
info: Microsoft.AspNetCore.SignalR.Client.HubConnection[44]
HubConnection started.
Starting connection. Press Ctrl-C to close.
```
### To Reproduce
<!--
We ❤ code! Point us to a minimalistic repro project hosted in a GitHub repo.
For a repro project, create a new ASP.NET Core project using the template of your your choice, apply the minimum required code to result in the issue you're observing.
We will close this issue if:
- the repro project you share with us is complex. We can't investigate custom projects, so don't point us to such, please.
- if we will not be able to repro the behavior you're reporting
-->
https://github.com/atj414/signalrbug
### Exceptions (if any)
<!--
Include the exception you get when facing this issue
-->
```
Exception thrown: 'System.ComponentModel.Win32Exception' in System.Private.CoreLib.dll
Exception thrown: 'System.ComponentModel.Win32Exception' in System.Private.CoreLib.dll
An exception of type 'System.ComponentModel.Win32Exception' occurred in System.Private.CoreLib.dll but was not handled in user code
fail: Microsoft.AspNetCore.Http.Connections.Client.HttpConnection[10]
Failed to start connection. Error getting negotiation response from 'https://localhost:44356/chathub'.
System.ComponentModel.Win32Exception (0x80090308): The token supplied to the function is invalid
at System.Net.NTAuthentication.GetOutgoingBlob(Byte[] incomingBlob, Boolean throwOnError, SecurityStatusPal& statusCode)
at System.Net.NTAuthentication.GetOutgoingBlob(String incomingBlob)
at System.Net.Http.AuthenticationHelper.SendWithNtAuthAsync(HttpRequestMessage request, Uri authUri, ICredentials credentials, Boolean isProxyAuth, HttpConnection connection, HttpConnectionPool connectionPool, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)
at System.Net.Http.AuthenticationHelper.SendWithAuthAsync(HttpRequestMessage request, Uri authUri, ICredentials credentials, Boolean preAuthenticate, Boolean isProxyAuth, Boolean doRequestAuth, HttpConnectionPool pool, CancellationToken cancellationToken)
at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.DiagnosticsHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Http.Connections.Client.Internal.AccessTokenHttpMessageHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Http.Connections.Client.Internal.LoggingHttpMessageHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.HttpClient.FinishSendAsyncUnbuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts, CancellationToken callerToken, Int64 timeoutTime)
at Microsoft.AspNetCore.Http.Connections.Client.HttpConnection.NegotiateAsync(Uri url, HttpClient httpClient, ILogger logger, CancellationToken cancellationToken)
```
### Further technical details
- VS2019 16.7.0 Preview 3.1
```
.NET SDK (reflecting any global.json):
Version: 5.0.100-preview.6.20318.15
Commit: 4356580024
Runtime Environment:
OS Name: Windows
OS Version: 10.0.14393
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\5.0.100-preview.6.20318.15\
Host (useful for support):
Version: 5.0.0-preview.6.20305.6
Commit: 4ba9ecaabd
.NET SDKs installed:
1.0.0-preview2-003131 [C:\Program Files\dotnet\sdk]
1.0.4 [C:\Program Files\dotnet\sdk]
2.0.0 [C:\Program Files\dotnet\sdk]
2.1.4 [C:\Program Files\dotnet\sdk]
2.1.403 [C:\Program Files\dotnet\sdk]
2.1.503 [C:\Program Files\dotnet\sdk]
2.1.511 [C:\Program Files\dotnet\sdk]
3.1.102 [C:\Program Files\dotnet\sdk]
3.1.300 [C:\Program Files\dotnet\sdk]
3.1.400-preview-015178 [C:\Program Files\dotnet\sdk]
5.0.100-preview.6.20318.15 [C:\Program Files\dotnet\sdk]
.NET runtimes installed:
Microsoft.AspNetCore.All 2.1.5 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.15 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.16 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.18 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.5 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.15 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.16 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.18 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.0.3 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.2 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 5.0.0-preview.6.20312.15 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 1.0.1 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 1.0.5 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 1.1.2 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.0.0 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.0.5 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.5 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.7 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.15 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.16 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.18 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.0.3 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.2 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 5.0.0-preview.6.20305.6 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 3.0.3 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.2 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 5.0.0-preview.6.20308.1 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
```
|
True
|
Exception connecting to Signalr hub when using windows auth - <!--
More information on our issue management policies can be found here: https://aka.ms/aspnet/issue-policies
Please keep in mind that the GitHub issue tracker is not intended as a general support forum, but for reporting **non-security** bugs and feature requests.
If you believe you have an issue that affects the SECURITY of the platform, please do NOT create an issue and instead email your issue details to secure@microsoft.com. Your report may be eligible for our [bug bounty](https://www.microsoft.com/en-us/msrc/bounty-dot-net-core) but ONLY if it is reported through email.
For other types of questions, consider using [StackOverflow](https://stackoverflow.com).
-->
### Describe the bug
The signalr client throws an exception connecting to the hub when using Windows Authentication with core 5.0 preview 6. The hub is hosted behind IIS express.
If I change the client TargetFramework to netcoreapp3.1 it connects normally (hub is still on 5):
```
info: Microsoft.AspNetCore.Http.Connections.Client.Internal.WebSocketsTransport[1]
Starting transport. Transfer mode: Text. Url: 'wss://localhost:44356/chathub?id=yrfeYHigZPZ1KxrRXa6xcg'.
info: Microsoft.AspNetCore.Http.Connections.Client.HttpConnection[3]
HttpConnection Started.
info: Microsoft.AspNetCore.SignalR.Client.HubConnection[24]
Using HubProtocol 'json v1'.
info: Microsoft.AspNetCore.SignalR.Client.HubConnection[44]
HubConnection started.
Starting connection. Press Ctrl-C to close.
```
### To Reproduce
<!--
We ❤ code! Point us to a minimalistic repro project hosted in a GitHub repo.
For a repro project, create a new ASP.NET Core project using the template of your your choice, apply the minimum required code to result in the issue you're observing.
We will close this issue if:
- the repro project you share with us is complex. We can't investigate custom projects, so don't point us to such, please.
- if we will not be able to repro the behavior you're reporting
-->
https://github.com/atj414/signalrbug
### Exceptions (if any)
<!--
Include the exception you get when facing this issue
-->
```
Exception thrown: 'System.ComponentModel.Win32Exception' in System.Private.CoreLib.dll
Exception thrown: 'System.ComponentModel.Win32Exception' in System.Private.CoreLib.dll
An exception of type 'System.ComponentModel.Win32Exception' occurred in System.Private.CoreLib.dll but was not handled in user code
fail: Microsoft.AspNetCore.Http.Connections.Client.HttpConnection[10]
Failed to start connection. Error getting negotiation response from 'https://localhost:44356/chathub'.
System.ComponentModel.Win32Exception (0x80090308): The token supplied to the function is invalid
at System.Net.NTAuthentication.GetOutgoingBlob(Byte[] incomingBlob, Boolean throwOnError, SecurityStatusPal& statusCode)
at System.Net.NTAuthentication.GetOutgoingBlob(String incomingBlob)
at System.Net.Http.AuthenticationHelper.SendWithNtAuthAsync(HttpRequestMessage request, Uri authUri, ICredentials credentials, Boolean isProxyAuth, HttpConnection connection, HttpConnectionPool connectionPool, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)
at System.Net.Http.AuthenticationHelper.SendWithAuthAsync(HttpRequestMessage request, Uri authUri, ICredentials credentials, Boolean preAuthenticate, Boolean isProxyAuth, Boolean doRequestAuth, HttpConnectionPool pool, CancellationToken cancellationToken)
at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.DiagnosticsHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Http.Connections.Client.Internal.AccessTokenHttpMessageHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Http.Connections.Client.Internal.LoggingHttpMessageHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.HttpClient.FinishSendAsyncUnbuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts, CancellationToken callerToken, Int64 timeoutTime)
at Microsoft.AspNetCore.Http.Connections.Client.HttpConnection.NegotiateAsync(Uri url, HttpClient httpClient, ILogger logger, CancellationToken cancellationToken)
```
### Further technical details
- VS2019 16.7.0 Preview 3.1
```
.NET SDK (reflecting any global.json):
Version: 5.0.100-preview.6.20318.15
Commit: 4356580024
Runtime Environment:
OS Name: Windows
OS Version: 10.0.14393
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\5.0.100-preview.6.20318.15\
Host (useful for support):
Version: 5.0.0-preview.6.20305.6
Commit: 4ba9ecaabd
.NET SDKs installed:
1.0.0-preview2-003131 [C:\Program Files\dotnet\sdk]
1.0.4 [C:\Program Files\dotnet\sdk]
2.0.0 [C:\Program Files\dotnet\sdk]
2.1.4 [C:\Program Files\dotnet\sdk]
2.1.403 [C:\Program Files\dotnet\sdk]
2.1.503 [C:\Program Files\dotnet\sdk]
2.1.511 [C:\Program Files\dotnet\sdk]
3.1.102 [C:\Program Files\dotnet\sdk]
3.1.300 [C:\Program Files\dotnet\sdk]
3.1.400-preview-015178 [C:\Program Files\dotnet\sdk]
5.0.100-preview.6.20318.15 [C:\Program Files\dotnet\sdk]
.NET runtimes installed:
Microsoft.AspNetCore.All 2.1.5 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.15 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.16 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.18 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.5 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.15 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.16 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.18 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.0.3 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.2 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 5.0.0-preview.6.20312.15 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 1.0.1 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 1.0.5 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 1.1.2 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.0.0 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.0.5 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.5 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.7 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.15 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.16 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.18 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.0.3 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.2 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 5.0.0-preview.6.20305.6 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 3.0.3 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.2 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 5.0.0-preview.6.20308.1 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
```
|
non_process
|
exception connecting to signalr hub when using windows auth more information on our issue management policies can be found here please keep in mind that the github issue tracker is not intended as a general support forum but for reporting non security bugs and feature requests if you believe you have an issue that affects the security of the platform please do not create an issue and instead email your issue details to secure microsoft com your report may be eligible for our but only if it is reported through email for other types of questions consider using describe the bug the signalr client throws an exception connecting to the hub when using windows authentication with core preview the hub is hosted behind iis express if i change the client targetframework to it connects normally hub is still on info microsoft aspnetcore http connections client internal websocketstransport starting transport transfer mode text url wss localhost chathub id info microsoft aspnetcore http connections client httpconnection httpconnection started info microsoft aspnetcore signalr client hubconnection using hubprotocol json info microsoft aspnetcore signalr client hubconnection hubconnection started starting connection press ctrl c to close to reproduce we ❤ code point us to a minimalistic repro project hosted in a github repo for a repro project create a new asp net core project using the template of your your choice apply the minimum required code to result in the issue you re observing we will close this issue if the repro project you share with us is complex we can t investigate custom projects so don t point us to such please if we will not be able to repro the behavior you re reporting exceptions if any include the exception you get when facing this issue exception thrown system componentmodel in system private corelib dll exception thrown system componentmodel in system private corelib dll an exception of type system componentmodel occurred in system private corelib dll but was not handled in user code fail microsoft aspnetcore http connections client httpconnection failed to start connection error getting negotiation response from system componentmodel the token supplied to the function is invalid at system net ntauthentication getoutgoingblob byte incomingblob boolean throwonerror securitystatuspal statuscode at system net ntauthentication getoutgoingblob string incomingblob at system net http authenticationhelper sendwithntauthasync httprequestmessage request uri authuri icredentials credentials boolean isproxyauth httpconnection connection httpconnectionpool connectionpool cancellationtoken cancellationtoken at system net http httpconnectionpool sendwithretryasync httprequestmessage request boolean dorequestauth cancellationtoken cancellationtoken at system net http authenticationhelper sendwithauthasync httprequestmessage request uri authuri icredentials credentials boolean preauthenticate boolean isproxyauth boolean dorequestauth httpconnectionpool pool cancellationtoken cancellationtoken at system net http redirecthandler sendasync httprequestmessage request cancellationtoken cancellationtoken at system net http diagnosticshandler sendasync httprequestmessage request cancellationtoken cancellationtoken at microsoft aspnetcore http connections client internal accesstokenhttpmessagehandler sendasync httprequestmessage request cancellationtoken cancellationtoken at microsoft aspnetcore http connections client internal logginghttpmessagehandler sendasync httprequestmessage request cancellationtoken 
cancellationtoken at system net http httpclient finishsendasyncunbuffered task sendtask httprequestmessage request cancellationtokensource cts boolean disposects cancellationtoken callertoken timeouttime at microsoft aspnetcore http connections client httpconnection negotiateasync uri url httpclient httpclient ilogger logger cancellationtoken cancellationtoken further technical details preview net sdk reflecting any global json version preview commit runtime environment os name windows os version os platform windows rid base path c program files dotnet sdk preview host useful for support version preview commit net sdks installed preview preview net runtimes installed microsoft aspnetcore all microsoft aspnetcore all microsoft aspnetcore all microsoft aspnetcore all microsoft aspnetcore all microsoft aspnetcore all microsoft aspnetcore app microsoft aspnetcore app microsoft aspnetcore app microsoft aspnetcore app microsoft aspnetcore app microsoft aspnetcore app microsoft aspnetcore app microsoft aspnetcore app microsoft aspnetcore app microsoft aspnetcore app preview microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app preview microsoft windowsdesktop app microsoft windowsdesktop app microsoft windowsdesktop app microsoft windowsdesktop app preview
| 0
|