| column | dtype | reported stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 4 to 112 |
| repo_url | string | length 33 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 1.02k |
| labels | string | length 4 to 1.54k |
| body | string | length 1 to 262k |
| index | string | 17 classes |
| text_combine | string | length 95 to 262k |
| label | string | 2 classes |
| text | string | length 96 to 252k |
| binary_label | int64 | 0 to 1 |
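Stats of this shape (per-column string-length ranges and distinct-class counts) can be recomputed from raw rows with a short stdlib-only sketch. The two sample rows below are abbreviated stand-ins for illustration, not actual dataset rows:

```python
from collections import defaultdict

rows = [
    {"type": "IssuesEvent", "action": "closed", "label": "non_test", "binary_label": 0},
    {"type": "IssuesEvent", "action": "opened", "label": "test", "binary_label": 1},
]

def column_stats(rows):
    """For string columns report min/max length and distinct classes;
    for numeric columns report min/max value."""
    values = defaultdict(list)
    for row in rows:
        for col, val in row.items():
            values[col].append(val)
    stats = {}
    for col, vals in values.items():
        if all(isinstance(v, str) for v in vals):
            lengths = [len(v) for v in vals]
            stats[col] = {"min_len": min(lengths), "max_len": max(lengths),
                          "classes": len(set(vals))}
        else:
            stats[col] = {"min": min(vals), "max": max(vals)}
    return stats

print(column_stats(rows)["type"])  # → {'min_len': 11, 'max_len': 11, 'classes': 1}
```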

---
**row:** 384,036 · **id:** 26,573,492,130 · **type:** IssuesEvent · **created_at:** 2023-01-21 13:45:40
**repo:** supabase/supabase · **repo_url:** https://api.github.com/repos/supabase/supabase · **action:** closed
**title:** JWT generator in self-hosted docs is broken - cannot connect to default project.
**labels:** documentation auth
**body:**
### Discussed in https://github.com/supabase/supabase/discussions/8402
(I removed the original discussion, all the bug hunting was irrelevant)
I think the JWT token generator in the self-hosted docs is broken.
tokens generated by it are not accepted and lead to "cannot connect to default project"
AFAICT it comes down to iss being set to supabase instead of supabase-demo.
copy pasting a generated token into jwt.io changing the iss (and the secret) and pasting that into .env / kong.yml seems to work.
**index:** 1.0
**text_combine:**
JWT generator in self-hosted docs is broken - cannot connect to default project. - ### Discussed in https://github.com/supabase/supabase/discussions/8402
(I removed the original discussion, all the bug hunting was irrelevant)
I think the JWT token generator in the self-hosted docs is broken.
tokens generated by it are not accepted and lead to "cannot connect to default project"
AFAICT it comes down to iss being set to supabase instead of supabase-demo.
copy pasting a generated token into jwt.io changing the iss (and the secret) and pasting that into .env / kong.yml seems to work.
**label:** non_test
**text:**
jwt generator in self hosted docs is broken cannot connect to default project discussed in i removed the original discussion all the bug hunting was irrelevant i think the jwt token generator in the self hosted docs is broken tokens generated by it are not accepted and lead to cannot connect to default project afaict it comes down to iss being set to supabase instead of supabase demo copy pasting a generated token into jwt io changing the iss and the secret and pasting that into env kong yml seems to work
**binary_label:** 0

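The workaround described in the row above (paste the generated token into jwt.io and fix the `iss` claim) can be mimicked locally. This stdlib-only sketch decodes a compact JWT's payload without verifying the signature; the token here is fabricated for illustration, and the `supabase-demo` expectation comes from the bug report itself:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    # Base64url-decode the middle segment of a compact JWT; no signature check.
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def b64url(obj: dict) -> str:
    # Encode a JSON object as an unpadded base64url segment.
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

# Fabricated, unsigned token whose claims mirror the bug report.
token = b64url({"alg": "HS256", "typ": "JWT"}) + "." + b64url({"iss": "supabase", "role": "anon"}) + ".sig"

print(jwt_payload(token)["iss"])  # → supabase (the report says kong.yml expects "supabase-demo")
```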

---
**row:** 114,037 · **id:** 9,672,301,454 · **type:** IssuesEvent · **created_at:** 2019-05-22 02:50:47
**repo:** flutter/flutter · **repo_url:** https://api.github.com/repos/flutter/flutter · **action:** opened
**title:** flutter_tools test/commands/create_test.dart takes over 6 minutes
**labels:** a: tests tool
**body:**
We really need to make this test more efficient somehow. Is it doing something redundant? Something we can cache? (But without making it less hermetic.)
**index:** 1.0
**text_combine:**
flutter_tools test/commands/create_test.dart takes over 6 minutes - We really need to make this test more efficient somehow. Is it doing something redundant? Something we can cache? (But without making it less hermetic.)
**label:** test
**text:**
flutter tools test commands create test dart takes over minutes we really need to make this test more efficient somehow is it doing something redundant something we can cache but without making it less hermetic
**binary_label:** 1

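The `text` column is plainly a normalized version of `text_combine`: lowercased, with URLs, digits, and punctuation stripped and whitespace collapsed. The exact pipeline is not shown in this dump, but one plausible reconstruction reproduces the stored value for the title above:

```python
import re

def normalize(s: str) -> str:
    s = re.sub(r"https?://\S+", " ", s)  # drop URLs before the letters-only pass
    s = re.sub(r"[^A-Za-z]+", " ", s)    # replace digits/punctuation runs with a space
    return re.sub(r"\s+", " ", s).strip().lower()

print(normalize("flutter_tools test/commands/create_test.dart takes over 6 minutes"))
# → flutter tools test commands create test dart takes over minutes
```

Note how `6` disappears entirely, matching "takes over minutes" in the stored `text` value.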

---
**row:** 47,212 · **id:** 6,044,996,167 · **type:** IssuesEvent · **created_at:** 2017-06-12 08:02:07
**repo:** python-trio/trio · **repo_url:** https://api.github.com/repos/python-trio/trio · **action:** closed
**title:** Design: higher-level stream abstractions
**labels:** design discussion
**body:**
There's a gesture towards moving beyond concrete objects like sockets and pipes in the `trio._streams` interfaces. I'm not sure how much of this belongs in trio proper (as opposed to a library on top), but I think even a simple bit of convention might go a long way. What should this look like?
Prior art: [Twisted endpoints](https://twistedmatrix.com/documents/current/core/howto/endpoints.html). I like the flexibility and power. I'm not as big a fan of the strings and plugin architecture, but hey.
**index:** 1.0
**text_combine:**
Design: higher-level stream abstractions - There's a gesture towards moving beyond concrete objects like sockets and pipes in the `trio._streams` interfaces. I'm not sure how much of this belongs in trio proper (as opposed to a library on top), but I think even a simple bit of convention might go a long way. What should this look like?
Prior art: [Twisted endpoints](https://twistedmatrix.com/documents/current/core/howto/endpoints.html). I like the flexibility and power. I'm not as big a fan of the strings and plugin architecture, but hey.
**label:** non_test
**text:**
design higher level stream abstractions there s a gesture towards moving beyond concrete objects like sockets and pipes in the trio streams interfaces i m not sure how much of this belongs in trio proper as opposed to a library on top but i think even a simple bit of convention might go a long way what should this look like prior art i like the flexibility and power i m not as big a fan of the strings and plugin architecture but hey
**binary_label:** 0


---
**row:** 209,986 · **id:** 16,074,428,010 · **type:** IssuesEvent · **created_at:** 2021-04-25 04:10:53
**repo:** pingcap/br · **repo_url:** https://api.github.com/repos/pingcap/br · **action:** opened
**title:** `br_tikv_outage` sometimes got stuck
**labels:** component/test type/bug
**body:**
Please answer these questions before submitting your issue. Thanks!
1. What did you do?
If possible, provide a recipe for reproducing the error.
run `br_tikv_outage`
2. What did you expect to see?
test passed.
3. What did you see instead?
Sometimes, if one TiKV leave permanently, there would be some regions cannot elect the new leader, then the backup progress get stuck.
4. What version of BR and TiDB/TiKV/PD are you using?
Current master.
5. Operation logs
- [BR](https://github.com/pingcap/br/files/6371225/outage.log)
- [pd](https://github.com/pingcap/br/files/6371226/pd.log)
- [tikv1](https://github.com/pingcap/br/files/6371227/tikv1.log)
- [tikv2](https://github.com/pingcap/br/files/6371228/tikv2.log)
- [tikv3 (The killed one)](https://github.com/pingcap/br/files/6371229/tikv3.log)
**index:** 1.0
**text_combine:**
`br_tikv_outage` sometimes got stuck - Please answer these questions before submitting your issue. Thanks!
1. What did you do?
If possible, provide a recipe for reproducing the error.
run `br_tikv_outage`
2. What did you expect to see?
test passed.
3. What did you see instead?
Sometimes, if one TiKV leave permanently, there would be some regions cannot elect the new leader, then the backup progress get stuck.
4. What version of BR and TiDB/TiKV/PD are you using?
Current master.
5. Operation logs
- [BR](https://github.com/pingcap/br/files/6371225/outage.log)
- [pd](https://github.com/pingcap/br/files/6371226/pd.log)
- [tikv1](https://github.com/pingcap/br/files/6371227/tikv1.log)
- [tikv2](https://github.com/pingcap/br/files/6371228/tikv2.log)
- [tikv3 (The killed one)](https://github.com/pingcap/br/files/6371229/tikv3.log)
**label:** test
**text:**
br tikv outage sometimes got stuck please answer these questions before submitting your issue thanks what did you do if possible provide a recipe for reproducing the error run br tikv outage what did you expect to see test passed what did you see instead sometimes if one tikv leave permanently there would be some regions cannot elect the new leader then the backup progress get stuck what version of br and tidb tikv pd are you using current master operation logs
**binary_label:** 1


---
**row:** 77,643 · **id:** 7,583,052,484 · **type:** IssuesEvent · **created_at:** 2018-04-25 07:34:07
**repo:** red/red · **repo_url:** https://api.github.com/repos/red/red · **action:** closed
**title:** GUI Console: Cursor misaligned when using input or when using ask with an empty string
**labels:** GUI status.built status.tested type.bug
**body:**
### Expected behavior
`input` goes to the beginning of the new line before capturing input,
### Actual behavior
Cursor position is lined up with the end of the previous line, awaiting input.
### Steps to reproduce the problem
Type `input` then Enter, or `input`, then any string, then Enter.
`ask` with an empty string has the same behavior, and it looks like `input` is just a wrapped `ask` call with an empty string.
### Red version and build date, operating system with version.
Red for Windows version 0.6.3 built 19-Dec-2017/15:46:56-07:00
Possibly related issue #2978
**index:** 1.0
**text_combine:**
GUI Console: Cursor misaligned when using input or when using ask with an empty string - ### Expected behavior
`input` goes to the beginning of the new line before capturing input,
### Actual behavior
Cursor position is lined up with the end of the previous line, awaiting input.
### Steps to reproduce the problem
Type `input` then Enter, or `input`, then any string, then Enter.
`ask` with an empty string has the same behavior, and it looks like `input` is just a wrapped `ask` call with an empty string.
### Red version and build date, operating system with version.
Red for Windows version 0.6.3 built 19-Dec-2017/15:46:56-07:00
Possibly related issue #2978
**label:** test
**text:**
gui console cursor misaligned when using input or when using ask with an empty string expected behavior input goes to the beginning of the new line before capturing input actual behavior cursor position is lined up with the end of the previous line awaiting input steps to reproduce the problem type input then enter or input then any string then enter ask with an empty string has the same behavior and it looks like input is just a wrapped ask call with an empty string red version and build date operating system with version red for windows version built dec possibly related issue
**binary_label:** 1


---
**row:** 23,328 · **id:** 3,793,333,647 · **type:** IssuesEvent · **created_at:** 2016-03-22 13:36:37
**repo:** PowerDNS/pdns · **repo_url:** https://api.github.com/repos/PowerDNS/pdns · **action:** closed
**title:** CNAME on a name directly under the zone-apex leads to DNSSEC SERVFAIL
**labels:** defect rec
**body:**
Commandline: `./pdns/pdns_recursor --daemon=no --loglevel=9 --trace --local-port=5300 --local-address=127.0.0.1 --socket-dir=. --dnssec=validate`
Considering the following records in a zone (where the NS and SOA are added by the registrar):
```
a.lieter.nl. IN CNAME b.lieter.nl.
b.lieter.nl. IN A 127.0.0.53
a.dns-issue.lieter.nl. IN CNAME b.dns-issue.lieter.nl.
b.dns-issue.lieter.nl. IN A 127.0.0.53
```
Trying to resolve (from an empty cache) a.lieter.nl leads to a SERVFAIL:
```
$ dig @127.0.0.1 -p 5300 a.lieter.nl
; <<>> DiG 9.10.3-P2 <<>> @127.0.0.1 -p 5300 a.lieter.nl
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 64770
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;a.lieter.nl. IN A
;; Query time: 717 msec
;; SERVER: 127.0.0.1#5300(127.0.0.1)
;; WHEN: Mon Jan 18 16:54:28 CET 2016
;; MSG SIZE rcvd: 29
```
[pdns.log](https://gist.github.com/anonymous/8bf0dcc211dd41a84a23)
While (again on an empty cache) resolving a.dns-issue.lieter.nl works:
```
$ dig @127.0.0.1 -p 5300 a.dns-issue.lieter.nl
; <<>> DiG 9.10.3-P2 <<>> @127.0.0.1 -p 5300 a.dns-issue.lieter.nl
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22838
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;a.dns-issue.lieter.nl. IN A
;; ANSWER SECTION:
a.dns-issue.lieter.nl. 300 IN CNAME b.dns-issue.lieter.nl.
b.dns-issue.lieter.nl. 300 IN A 127.0.0.53
;; Query time: 677 msec
;; SERVER: 127.0.0.1#5300(127.0.0.1)
;; WHEN: Mon Jan 18 16:54:17 CET 2016
;; MSG SIZE rcvd: 82
```
[pdns.log](https://gist.github.com/3d25e42a379092870667)
These domain-names are online so testing can be done against them.
**index:** 1.0
**text_combine:**
CNAME on a name directly under the zone-apex leads to DNSSEC SERVFAIL - Commandline: `./pdns/pdns_recursor --daemon=no --loglevel=9 --trace --local-port=5300 --local-address=127.0.0.1 --socket-dir=. --dnssec=validate`
Considering the following records in a zone (where the NS and SOA are added by the registrar):
```
a.lieter.nl. IN CNAME b.lieter.nl.
b.lieter.nl. IN A 127.0.0.53
a.dns-issue.lieter.nl. IN CNAME b.dns-issue.lieter.nl.
b.dns-issue.lieter.nl. IN A 127.0.0.53
```
Trying to resolve (from an empty cache) a.lieter.nl leads to a SERVFAIL:
```
$ dig @127.0.0.1 -p 5300 a.lieter.nl
; <<>> DiG 9.10.3-P2 <<>> @127.0.0.1 -p 5300 a.lieter.nl
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 64770
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;a.lieter.nl. IN A
;; Query time: 717 msec
;; SERVER: 127.0.0.1#5300(127.0.0.1)
;; WHEN: Mon Jan 18 16:54:28 CET 2016
;; MSG SIZE rcvd: 29
```
[pdns.log](https://gist.github.com/anonymous/8bf0dcc211dd41a84a23)
While (again on an empty cache) resolving a.dns-issue.lieter.nl works:
```
$ dig @127.0.0.1 -p 5300 a.dns-issue.lieter.nl
; <<>> DiG 9.10.3-P2 <<>> @127.0.0.1 -p 5300 a.dns-issue.lieter.nl
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22838
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;a.dns-issue.lieter.nl. IN A
;; ANSWER SECTION:
a.dns-issue.lieter.nl. 300 IN CNAME b.dns-issue.lieter.nl.
b.dns-issue.lieter.nl. 300 IN A 127.0.0.53
;; Query time: 677 msec
;; SERVER: 127.0.0.1#5300(127.0.0.1)
;; WHEN: Mon Jan 18 16:54:17 CET 2016
;; MSG SIZE rcvd: 82
```
[pdns.log](https://gist.github.com/3d25e42a379092870667)
These domain-names are online so testing can be done against them.
**label:** non_test
**text:**
cname on a name directly under the zone apex leads to dnssec servfail commandline pdns pdns recursor daemon no loglevel trace local port local address socket dir dnssec validate considering the following records in a zone where the ns and soa are added by the registrar a lieter nl in cname b lieter nl b lieter nl in a a dns issue lieter nl in cname b dns issue lieter nl b dns issue lieter nl in a trying to resolve from an empty cache a lieter nl leads to a servfail dig p a lieter nl dig p a lieter nl server found global options cmd got answer header opcode query status servfail id flags qr rd ra query answer authority additional question section a lieter nl in a query time msec server when mon jan cet msg size rcvd while again on an empty cache resolving a dns issue lieter nl works dig p a dns issue lieter nl dig p a dns issue lieter nl server found global options cmd got answer header opcode query status noerror id flags qr rd ra query answer authority additional opt pseudosection edns version flags udp question section a dns issue lieter nl in a answer section a dns issue lieter nl in cname b dns issue lieter nl b dns issue lieter nl in a query time msec server when mon jan cet msg size rcvd these domain names are online so testing can be done against them
**binary_label:** 0

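When scripting checks against `dig` transcripts like the ones in the row above, the rcode can be pulled from the header line with a small regex. The helper name is my own, not part of any tool:

```python
import re

def dig_status(dig_output: str) -> "str | None":
    """Extract the DNS rcode from dig's header line, e.g.
    ';; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 64770'."""
    m = re.search(r"status:\s*([A-Z]+)", dig_output)
    return m.group(1) if m else None

print(dig_status(";; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 64770"))  # → SERVFAIL
```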

---
**row:** 41,269 · **id:** 5,345,738,286 · **type:** IssuesEvent · **created_at:** 2017-02-17 17:44:28
**repo:** hacsoc/the_jolly_advisor · **repo_url:** https://api.github.com/repos/hacsoc/the_jolly_advisor · **action:** closed
**title:** Search by keyword feature should be case-insensitive
**labels:** beginner-friendly bug help wanted testing
**body:**
```
Scenario: Search by keyword in a course description # features/course_explorer.feature:24
When I search for courses by a keyword # features/step_definitions/course_explorer_steps.rb:59
Then I see only classes with that keyword in the name # features/step_definitions/course_explorer_steps.rb:66
expected "Independent Study" to include "in" (RSpec::Expectations::ExpectationNotMetError)
./features/step_definitions/course_explorer_steps.rb:68:in `block (2 levels) in <top (required)>'
./features/step_definitions/course_explorer_steps.rb:67:in `/^I see only classes with that keyword in the name$/'
features/course_explorer.feature:26:in `Then I see only classes with that keyword in the name'
```
**index:** 1.0
**text_combine:**
Search by keyword feature should be case-insensitive - ```
Scenario: Search by keyword in a course description # features/course_explorer.feature:24
When I search for courses by a keyword # features/step_definitions/course_explorer_steps.rb:59
Then I see only classes with that keyword in the name # features/step_definitions/course_explorer_steps.rb:66
expected "Independent Study" to include "in" (RSpec::Expectations::ExpectationNotMetError)
./features/step_definitions/course_explorer_steps.rb:68:in `block (2 levels) in <top (required)>'
./features/step_definitions/course_explorer_steps.rb:67:in `/^I see only classes with that keyword in the name$/'
features/course_explorer.feature:26:in `Then I see only classes with that keyword in the name'
```
**label:** test
**text:**
search by keyword feature should be case insensitive scenario search by keyword in a course description features course explorer feature when i search for courses by a keyword features step definitions course explorer steps rb then i see only classes with that keyword in the name features step definitions course explorer steps rb expected independent study to include in rspec expectations expectationnotmeterror features step definitions course explorer steps rb in block levels in features step definitions course explorer steps rb in i see only classes with that keyword in the name features course explorer feature in then i see only classes with that keyword in the name
**binary_label:** 1

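The repository in the row above is a Ruby/Rails project, but the behavior the issue asks for, case-insensitive substring matching, is language-independent. A Python sketch of the expected semantics, using the failing example from the Cucumber output:

```python
def keyword_matches(title: str, keyword: str) -> bool:
    # Case-insensitive containment: fold both sides before comparing.
    return keyword.lower() in title.lower()

print("in" in "Independent Study")                 # → False (case-sensitive check misses "In")
print(keyword_matches("Independent Study", "in"))  # → True
```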

---
**row:** 74,926 · **id:** 25,409,328,255 · **type:** IssuesEvent · **created_at:** 2022-11-22 17:35:10
**repo:** FreeRADIUS/freeradius-server · **repo_url:** https://api.github.com/repos/FreeRADIUS/freeradius-server · **action:** closed
**title:** Segmentation fault Freeradius 3.2.1 Robust proxy
**labels:** defect v3.2.x
**body:**
### What type of defect/bug is this?
Crash or memory corruption (segv, abort, etc...)
### How can the issue be reproduced?
When try to send accounting package to robust proxy radius crash with Segmentation fault
### Log output from the FreeRADIUS daemon
```shell
freeradius -Xxxxxx
Tue Nov 22 21:31:19 2022 : Debug: Server was built with:
Tue Nov 22 21:31:19 2022 : Debug: accounting : yes
Tue Nov 22 21:31:19 2022 : Debug: authentication : yes
Tue Nov 22 21:31:19 2022 : Debug: ascend-binary-attributes : yes
Tue Nov 22 21:31:19 2022 : Debug: coa : yes
Tue Nov 22 21:31:19 2022 : Debug: recv-coa-from-home-server : yes
Tue Nov 22 21:31:19 2022 : Debug: control-socket : yes
Tue Nov 22 21:31:19 2022 : Debug: detail : yes
Tue Nov 22 21:31:19 2022 : Debug: dhcp : yes
Tue Nov 22 21:31:19 2022 : Debug: dynamic-clients : yes
Tue Nov 22 21:31:19 2022 : Debug: osfc2 : no
Tue Nov 22 21:31:19 2022 : Debug: proxy : yes
Tue Nov 22 21:31:19 2022 : Debug: regex-pcre : yes
Tue Nov 22 21:31:19 2022 : Debug: regex-posix : no
Tue Nov 22 21:31:19 2022 : Debug: regex-posix-extended : no
Tue Nov 22 21:31:19 2022 : Debug: session-management : yes
Tue Nov 22 21:31:19 2022 : Debug: stats : yes
Tue Nov 22 21:31:19 2022 : Debug: systemd : no
Tue Nov 22 21:31:19 2022 : Debug: tcp : yes
Tue Nov 22 21:31:19 2022 : Debug: threads : yes
Tue Nov 22 21:31:19 2022 : Debug: tls : yes
Tue Nov 22 21:31:19 2022 : Debug: unlang : yes
Tue Nov 22 21:31:19 2022 : Debug: vmps : yes
Tue Nov 22 21:31:19 2022 : Debug: developer : no
Tue Nov 22 21:31:19 2022 : Debug: Server core libs:
Tue Nov 22 21:31:19 2022 : Debug: freeradius-server : 3.2.2
Tue Nov 22 21:31:19 2022 : Debug: talloc : 2.3.*
Tue Nov 22 21:31:19 2022 : Debug: ssl : 3.0.0g dev
Tue Nov 22 21:31:19 2022 : Debug: pcre : 8.45 2021-06-15
Tue Nov 22 21:31:19 2022 : Debug: Endianness:
Tue Nov 22 21:31:19 2022 : Debug: little
Tue Nov 22 21:31:19 2022 : Debug: Compilation flags:
Tue Nov 22 21:31:19 2022 : Debug: cppflags : -pipe -O3 -Wall -march=skylake -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -fno-stack-protector -mno-stackrealign -flto=4 -I/build/orionos/s64/include -I/build/orionos/s64/usr/include
Tue Nov 22 21:31:19 2022 : Debug: cflags : -I. -Isrc -include src/freeradius-devel/autoconf.h -include src/freeradius-devel/build.h -include src/freeradius-devel/features.h -include src/freeradius-devel/radpaths.h -fno-strict-aliasing -Wno-date-time -pipe -O3 -Wall -march=skylake -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -fno-stack-protector -mno-stackrealign -flto=4 -I/build/orionos/s64/include -I/build/orionos/s64/usr/include -Wall -std=c99 -D_GNU_SOURCE -D_REENTRANT -D_POSIX_PTHREAD_SEMANTICS -DOPENSSL_NO_KRB5 -DNDEBUG -DIS_MODULE=1
Tue Nov 22 21:31:19 2022 : Debug: ldflags : -L/build/orionos/s64/lib -L/build/orionos/s64/usr/lib
Tue Nov 22 21:31:19 2022 : Debug: libs : -lcrypto -lssl -ltalloc -latomic -lpcre -lcap -lresolv -ldl -lpthread -lz -lmariadb -lp11-kit -lhogweed -lgmp -lidn -lidn2 -lffi -lgnutls -lnettle -lmysqlclient -lreadline -lssl -lcrypto -lncurses -ltinfo -lltdl -liconv -lexpat -lpcre -lbz2 -llzma -lxml2 -lnl-3 -lnl-genl-3 -lnl-route-3 -ltinfo -ltirpc -lssl -lcrypto -lboost_regex -lboost_serialization -lboost_wserialization -lboost_system -lcap -lpcap -lreadline
Tue Nov 22 21:31:19 2022 : Debug:
Tue Nov 22 21:31:19 2022 : Info: FreeRADIUS Version 3.2.2
Tue Nov 22 21:31:19 2022 : Info: Copyright (C) 1999-2022 The FreeRADIUS server project and contributors
Tue Nov 22 21:31:19 2022 : Info: There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
Tue Nov 22 21:31:19 2022 : Info: PARTICULAR PURPOSE
Tue Nov 22 21:31:19 2022 : Info: You may redistribute copies of FreeRADIUS under the terms of the
Tue Nov 22 21:31:19 2022 : Info: GNU General Public License
Tue Nov 22 21:31:19 2022 : Info: For more information about these matters, see the file named COPYRIGHT
Tue Nov 22 21:31:19 2022 : Info: Starting - reading configuration files ...
Tue Nov 22 21:31:19 2022 : Debug: including dictionary file /etc/freeradius/dictionary
Tue Nov 22 21:31:19 2022 : Debug: including dictionary file /etc/freeradius/dictionary
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/freeradius.conf
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/proxy.conf
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/clients.conf
Tue Nov 22 21:31:19 2022 : Debug: including files in directory /etc/freeradius/modules/
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/detail
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/preprocess
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/pap
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/chap
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/exec
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/modules/expr
Tue Nov 22 21:31:19 2022 : Debug: including files in directory /etc/freeradius/sites-enabled/
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/sites-enabled/robust-proxy-accounting
Tue Nov 22 21:31:19 2022 : Debug: including configuration file /etc/freeradius/sites-enabled/default
Tue Nov 22 21:31:19 2022 : Debug: main {
Tue Nov 22 21:31:19 2022 : Debug: security {
Tue Nov 22 21:31:19 2022 : Debug: user = "freerad"
Tue Nov 22 21:31:19 2022 : Debug: group = "freerad"
Tue Nov 22 21:31:19 2022 : Debug: allow_core_dumps = no
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[38]: The item 'max_attributes' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[39]: The item 'reject_delay' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[40]: The item 'status_server' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[41]: The item 'allow_vulnerable_openssl' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: name = "radiusd"
Tue Nov 22 21:31:19 2022 : Debug: prefix = "/usr"
Tue Nov 22 21:31:19 2022 : Debug: localstatedir = "/var"
Tue Nov 22 21:31:19 2022 : Debug: logdir = "/var/log/freeradius"
Tue Nov 22 21:31:19 2022 : Debug: run_dir = "/var/run/freeradius"
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[2]: The item 'ignore_case' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[4]: The item 'sysconfdir' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[9]: The item 'log_file' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[10]: The item 'log_destination' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[12]: The item 'confdir' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[14]: The item 'libdir' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[15]: The item 'pidfile' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[16]: The item 'max_request_time' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[17]: The item 'cleanup_delay' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[18]: The item 'max_requests' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[29]: The item 'hostname_lookups' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[30]: The item 'regular_expressions' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[31]: The item 'extended_expressions' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[32]: The item 'checkrad' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[44]: The item 'proxy_requests' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: main {
Tue Nov 22 21:31:19 2022 : Debug: name = "radiusd"
Tue Nov 22 21:31:19 2022 : Debug: prefix = "/usr"
Tue Nov 22 21:31:19 2022 : Debug: localstatedir = "/var"
Tue Nov 22 21:31:19 2022 : Debug: sbindir = "/usr/sbin"
Tue Nov 22 21:31:19 2022 : Debug: logdir = "/var/log/freeradius"
Tue Nov 22 21:31:19 2022 : Debug: run_dir = "/var/run/freeradius"
Tue Nov 22 21:31:19 2022 : Debug: libdir = "/usr/lib/freeradius"
Tue Nov 22 21:31:19 2022 : Debug: radacctdir = "/var/log/freeradius/radacct"
Tue Nov 22 21:31:19 2022 : Debug: hostname_lookups = no
Tue Nov 22 21:31:19 2022 : Debug: max_request_time = 30
Tue Nov 22 21:31:19 2022 : Debug: cleanup_delay = 5
Tue Nov 22 21:31:19 2022 : Debug: max_requests = 1024
Tue Nov 22 21:31:19 2022 : Debug: postauth_client_lost = no
Tue Nov 22 21:31:19 2022 : Debug: pidfile = "/var/run/freeradius/freeradius.pid"
Tue Nov 22 21:31:19 2022 : Debug: checkrad = "/usr/sbin/checkrad"
Tue Nov 22 21:31:19 2022 : Debug: debug_level = 0
Tue Nov 22 21:31:19 2022 : Debug: proxy_requests = yes
Tue Nov 22 21:31:19 2022 : Debug: log {
Tue Nov 22 21:31:19 2022 : Debug: stripped_names = no
Tue Nov 22 21:31:19 2022 : Debug: auth = no
Tue Nov 22 21:31:19 2022 : Debug: auth_badpass = no
Tue Nov 22 21:31:19 2022 : Debug: auth_goodpass = no
Tue Nov 22 21:31:19 2022 : Debug: msg_denied = "You are already logged in - access denied"
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: resources {
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: security {
Tue Nov 22 21:31:19 2022 : Debug: max_attributes = 200
Tue Nov 22 21:31:19 2022 : Debug: reject_delay = 1.000000
Tue Nov 22 21:31:19 2022 : Debug: status_server = no
Tue Nov 22 21:31:19 2022 : Debug: allow_vulnerable_openssl = "no"
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[2]: The item 'ignore_case' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[4]: The item 'sysconfdir' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[9]: The item 'log_file' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[10]: The item 'log_destination' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[12]: The item 'confdir' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[30]: The item 'regular_expressions' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/freeradius.conf[31]: The item 'extended_expressions' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: freeradius: #### Loading Realms and Home Servers ####
Tue Nov 22 21:31:19 2022 : Debug: proxy server {
Tue Nov 22 21:31:19 2022 : Debug: retry_delay = 5
Tue Nov 22 21:31:19 2022 : Debug: retry_count = 3
Tue Nov 22 21:31:19 2022 : Debug: default_fallback = yes
Tue Nov 22 21:31:19 2022 : Debug: dead_time = 120
Tue Nov 22 21:31:19 2022 : Debug: wake_all_if_all_dead = no
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/proxy.conf[2]: The item 'synchronous' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/proxy.conf[7]: The item 'post_proxy_authorize' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: home_server home1.example.com {
Tue Nov 22 21:31:19 2022 : Debug: nonblock = no
Tue Nov 22 21:31:19 2022 : Debug: ipaddr = 139.5.241.16
Tue Nov 22 21:31:19 2022 : Debug: port = 1813
Tue Nov 22 21:31:19 2022 : Debug: type = "acct"
Tue Nov 22 21:31:19 2022 : Debug: secret = "secret"
Tue Nov 22 21:31:19 2022 : Debug: response_window = 20.000000
Tue Nov 22 21:31:19 2022 : Debug: response_timeouts = 1
Tue Nov 22 21:31:19 2022 : Debug: max_outstanding = 65536
Tue Nov 22 21:31:19 2022 : Debug: zombie_period = 40
Tue Nov 22 21:31:19 2022 : Debug: status_check = "request"
Tue Nov 22 21:31:19 2022 : Debug: ping_interval = 30
Tue Nov 22 21:31:19 2022 : Debug: check_interval = 30
Tue Nov 22 21:31:19 2022 : Debug: check_timeout = 4
Tue Nov 22 21:31:19 2022 : Debug: num_answers_to_alive = 3
Tue Nov 22 21:31:19 2022 : Debug: revive_interval = 120
Tue Nov 22 21:31:19 2022 : Debug: username = "test_bgbras"
Tue Nov 22 21:31:19 2022 : Debug: limit {
Tue Nov 22 21:31:19 2022 : Debug: max_connections = 16
Tue Nov 22 21:31:19 2022 : Debug: max_requests = 0
Tue Nov 22 21:31:19 2022 : Debug: lifetime = 0
Tue Nov 22 21:31:19 2022 : Debug: idle_timeout = 0
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: coa {
Tue Nov 22 21:31:19 2022 : Debug: irt = 2
Tue Nov 22 21:31:19 2022 : Debug: mrt = 16
Tue Nov 22 21:31:19 2022 : Debug: mrc = 5
Tue Nov 22 21:31:19 2022 : Debug: mrd = 30
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: recv_coa {
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: realm LOCAL {
Tue Nov 22 21:31:19 2022 : Debug: authhost = LOCAL
Tue Nov 22 21:31:19 2022 : Debug: accthost = LOCAL
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: home_server_pool acct_pool.example.com {
Tue Nov 22 21:31:19 2022 : Debug: type = fail-over
Tue Nov 22 21:31:19 2022 : Debug: virtual_server = home.example.com
Tue Nov 22 21:31:19 2022 : Debug: home_server = home1.example.com
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: realm acct_realm.example.com {
Tue Nov 22 21:31:19 2022 : Debug: acct_pool = acct_pool.example.com
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: freeradius: #### Loading Clients ####
Tue Nov 22 21:31:19 2022 : Debug: client 127.0.0.1 {
Tue Nov 22 21:31:19 2022 : Debug: ipaddr = 10.10.10.1
Tue Nov 22 21:31:19 2022 : Debug: require_message_authenticator = no
Tue Nov 22 21:31:19 2022 : Debug: secret = "secret"
Tue Nov 22 21:31:19 2022 : Debug: limit {
Tue Nov 22 21:31:19 2022 : Debug: max_connections = 16
Tue Nov 22 21:31:19 2022 : Debug: lifetime = 0
Tue Nov 22 21:31:19 2022 : Debug: idle_timeout = 30
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Adding client 10.10.10.1/32 (10.10.10.1) to prefix tree 32
Tue Nov 22 21:31:19 2022 : Info: Debugger not attached
Tue Nov 22 21:31:19 2022 : Debug: # Creating Post-Proxy-Type = Fail-Accounting
Tue Nov 22 21:31:19 2022 : Debug: # Creating Post-Proxy-Type = Fail-Authentication
Tue Nov 22 21:31:19 2022 : Debug: freeradius: #### Instantiating modules ####
Tue Nov 22 21:31:19 2022 : Debug: modules {
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_detail with path: /usr/lib/freeradius/rlm_detail.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_detail, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_detail
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "detail.example.com" from file /etc/freeradius/modules/detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: detail detail.example.com {
Tue Nov 22 21:31:19 2022 : Debug: filename = "/var/log/freeradius/radacct/detail.example.com/detail-%Y%m%d:%H:%G"
Tue Nov 22 21:31:19 2022 : Debug: header = "%t"
Tue Nov 22 21:31:19 2022 : Debug: permissions = 384
Tue Nov 22 21:31:19 2022 : Debug: locking = no
Tue Nov 22 21:31:19 2022 : Debug: escape_filenames = no
Tue Nov 22 21:31:19 2022 : Debug: log_packet_header = no
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "detail" from file /etc/freeradius/modules/detail
Tue Nov 22 21:31:19 2022 : Debug: detail {
Tue Nov 22 21:31:19 2022 : Debug: filename = "/var/log/freeradius/radacct/%{%{Packet-Src-IP-Address}:-%{Packet-Src-IPv6-Address}}/detail-%Y%m%d"
Tue Nov 22 21:31:19 2022 : Debug: header = "%t"
Tue Nov 22 21:31:19 2022 : Debug: permissions = 384
Tue Nov 22 21:31:19 2022 : Debug: locking = no
Tue Nov 22 21:31:19 2022 : Debug: escape_filenames = no
Tue Nov 22 21:31:19 2022 : Debug: log_packet_header = no
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_preprocess with path: /usr/lib/freeradius/rlm_preprocess.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_preprocess, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_preprocess
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "preprocess" from file /etc/freeradius/modules/preprocess
Tue Nov 22 21:31:19 2022 : Debug: preprocess {
Tue Nov 22 21:31:19 2022 : Debug: huntgroups = "/etc/freeradius/huntgroups"
Tue Nov 22 21:31:19 2022 : Debug: hints = "/etc/freeradius/hints"
Tue Nov 22 21:31:19 2022 : Debug: with_ascend_hack = no
Tue Nov 22 21:31:19 2022 : Debug: ascend_channels_per_line = 23
Tue Nov 22 21:31:19 2022 : Debug: with_ntdomain_hack = no
Tue Nov 22 21:31:19 2022 : Debug: with_specialix_jetstream_hack = no
Tue Nov 22 21:31:19 2022 : Debug: with_cisco_vsa_hack = no
Tue Nov 22 21:31:19 2022 : Debug: with_alvarion_vsa_hack = no
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_pap with path: /usr/lib/freeradius/rlm_pap.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_pap, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_pap
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "pap" from file /etc/freeradius/modules/pap
Tue Nov 22 21:31:19 2022 : Debug: pap {
Tue Nov 22 21:31:19 2022 : Debug: normalise = yes
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/modules/pap[21]: The item 'auto_header' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_chap with path: /usr/lib/freeradius/rlm_chap.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_chap, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_chap
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "chap" from file /etc/freeradius/modules/chap
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_exec with path: /usr/lib/freeradius/rlm_exec.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_exec, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_exec
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "exec" from file /etc/freeradius/modules/exec
Tue Nov 22 21:31:19 2022 : Debug: exec {
Tue Nov 22 21:31:19 2022 : Debug: wait = no
Tue Nov 22 21:31:19 2022 : Debug: input_pairs = "request"
Tue Nov 22 21:31:19 2022 : Debug: shell_escape = yes
Tue Nov 22 21:31:19 2022 : Debug: timeout = 10
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/modules/exec[28]: The item 'output' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_expr with path: /usr/lib/freeradius/rlm_expr.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_expr, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_expr
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "expr" from file /etc/freeradius/modules/expr
Tue Nov 22 21:31:19 2022 : Debug: expr {
Tue Nov 22 21:31:19 2022 : Debug: safe_characters = "@abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.-_: /"
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: instantiate {
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: # Instantiating module "detail.example.com" from file /etc/freeradius/modules/detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: # Instantiating module "detail" from file /etc/freeradius/modules/detail
Tue Nov 22 21:31:19 2022 : Debug: # Instantiating module "preprocess" from file /etc/freeradius/modules/preprocess
Tue Nov 22 21:31:19 2022 : Debug: reading pairlist file /etc/freeradius/huntgroups
Tue Nov 22 21:31:19 2022 : Debug: reading pairlist file /etc/freeradius/hints
Tue Nov 22 21:31:19 2022 : Debug: # Instantiating module "pap" from file /etc/freeradius/modules/pap
Tue Nov 22 21:31:19 2022 : Debug: } # modules
Tue Nov 22 21:31:19 2022 : Debug: freeradius: #### Loading Virtual Servers ####
Tue Nov 22 21:31:19 2022 : Debug: server { # from file /etc/freeradius/freeradius.conf
Tue Nov 22 21:31:19 2022 : Error: /etc/freeradius/sites-enabled/default[7]: The authenticate section should be inside of a 'server { ... }' block!
Tue Nov 22 21:31:19 2022 : Debug: authenticate {
Tue Nov 22 21:31:19 2022 : Debug: Compiling Auth-Type PAP for attr Auth-Type
Tue Nov 22 21:31:19 2022 : Debug: group {
Tue Nov 22 21:31:19 2022 : Debug: pap
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Compiling Auth-Type CHAP for attr Auth-Type
Tue Nov 22 21:31:19 2022 : Debug: group {
Tue Nov 22 21:31:19 2022 : Debug: chap
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: } # authenticate
Tue Nov 22 21:31:19 2022 : Error: /etc/freeradius/sites-enabled/default[1]: The authorize section should be inside of a 'server { ... }' block!
Tue Nov 22 21:31:19 2022 : Debug: authorize {
Tue Nov 22 21:31:19 2022 : Debug: preprocess
Tue Nov 22 21:31:19 2022 : Debug: chap
Tue Nov 22 21:31:19 2022 : Debug: pap
Tue Nov 22 21:31:19 2022 : Debug: } # authorize
Tue Nov 22 21:31:19 2022 : Error: /etc/freeradius/sites-enabled/default[18]: The preacct section should be inside of a 'server { ... }' block!
Tue Nov 22 21:31:19 2022 : Debug: preacct {
Tue Nov 22 21:31:19 2022 : Debug: preprocess
Tue Nov 22 21:31:19 2022 : Debug: } # preacct
Tue Nov 22 21:31:19 2022 : Error: /etc/freeradius/sites-enabled/default[23]: The accounting section should be inside of a 'server { ... }' block!
Tue Nov 22 21:31:19 2022 : Debug: accounting {
Tue Nov 22 21:31:19 2022 : Debug: detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: } # accounting
Tue Nov 22 21:31:19 2022 : Debug: } # server
Tue Nov 22 21:31:19 2022 : Debug: server acct_detail.example.com { # from file /etc/freeradius/sites-enabled/robust-proxy-accounting
Tue Nov 22 21:31:19 2022 : Debug: accounting {
Tue Nov 22 21:31:19 2022 : Debug: detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: } # accounting
Tue Nov 22 21:31:19 2022 : Debug: } # server acct_detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: server home.example.com { # from file /etc/freeradius/sites-enabled/robust-proxy-accounting
Tue Nov 22 21:31:19 2022 : Debug: accounting {
Tue Nov 22 21:31:19 2022 : Debug: update {
Tue Nov 22 21:31:19 2022 : Debug: &control:Proxy-To-Realm := 'acct_realm.example.com'
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: } # accounting
Tue Nov 22 21:31:19 2022 : Debug: post-proxy {
Tue Nov 22 21:31:19 2022 : Debug: Compiling Post-Proxy-Type Fail-Accounting for attr Post-Proxy-Type
Tue Nov 22 21:31:19 2022 : Debug: group {
Tue Nov 22 21:31:19 2022 : Debug: detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Compiling Post-Proxy-Type Fail-Authentication for attr Post-Proxy-Type
Tue Nov 22 21:31:19 2022 : Debug: group {
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: } # post-proxy
Tue Nov 22 21:31:19 2022 : Debug: } # server home.example.com
Tue Nov 22 21:31:19 2022 : Debug: Created signal pipe. Read end FD 5, write end FD 6
Tue Nov 22 21:31:19 2022 : Debug: freeradius: #### Opening IP addresses and Ports ####
Tue Nov 22 21:31:19 2022 : Debug: Loading proto_acct with path: /usr/lib/freeradius/proto_acct.so
Tue Nov 22 21:31:19 2022 : Debug: Loading proto_acct failed: /usr/lib/freeradius/proto_acct.so: cannot open shared object file: No such file or directory - No such file or directory
Tue Nov 22 21:31:19 2022 : Debug: Loading library using linker search path(s)
Tue Nov 22 21:31:19 2022 : Debug: Defaults : /lib:/usr/lib
Tue Nov 22 21:31:19 2022 : Debug: Failed with error: proto_acct.so: cannot open shared object file: No such file or directory
Tue Nov 22 21:31:19 2022 : Debug: listen {
Tue Nov 22 21:31:19 2022 : Debug: type = "acct"
Tue Nov 22 21:31:19 2022 : Debug: ipaddr = *
Tue Nov 22 21:31:19 2022 : Debug: port = 0
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading proto_detail with path: /usr/lib/freeradius/proto_detail.so
Tue Nov 22 21:31:19 2022 : Debug: Loading proto_detail failed: /usr/lib/freeradius/proto_detail.so: cannot open shared object file: No such file or directory - No such file or directory
Tue Nov 22 21:31:19 2022 : Debug: Loading library using linker search path(s)
Tue Nov 22 21:31:19 2022 : Debug: Defaults : /lib:/usr/lib
Tue Nov 22 21:31:19 2022 : Debug: Failed with error: proto_detail.so: cannot open shared object file: No such file or directory
Tue Nov 22 21:31:19 2022 : Debug: listen {
Tue Nov 22 21:31:19 2022 : Debug: type = "detail"
Tue Nov 22 21:31:19 2022 : Debug: listen {
Tue Nov 22 21:31:19 2022 : Debug: filename = "/var/log/freeradius/radacct/detail.example.com/detail-*:*"
Tue Nov 22 21:31:19 2022 : Debug: load_factor = 10
Tue Nov 22 21:31:19 2022 : Debug: poll_interval = 1
Tue Nov 22 21:31:19 2022 : Debug: retry_interval = 30
Tue Nov 22 21:31:19 2022 : Debug: one_shot = no
Tue Nov 22 21:31:19 2022 : Debug: track = no
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Listening on acct address * port 1813
Tue Nov 22 21:31:19 2022 : Debug: Listening on detail file /var/log/freeradius/radacct/detail.example.com/detail-*:* as server home.example.com
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - User-Name = "test_bgbras"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-Identifier = "demo-bng"
Tue Nov 22 21:31:19 2022 : Info: Ready to process requests
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-IP-Address = 10.10.10.1
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-Port = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-Port-Id = "vlan100"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-Port-Type = Virtual
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Service-Type = Framed-User
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Framed-Protocol = PPP
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Calling-Station-Id = "f6:cb:f6:40:1f:a0"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Called-Station-Id = "1a:27:b5:27:bb:53"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Class = 0x6375693d746573745f6267627261732c73753d3934323033392c736b3d6969786c6c7a776f3333386f2c626b69643d32323534303038392c626b7269643d332c6d763d3532393135323231333836302c6d743d3630353235392c676d743d3630343830302c73703d312c73723d2c61633d312c75693d3934323033392c6363693d3138382c63633d47656e6572616c2c61733d312c636f3d2c64623d3130323430302c75623d3130323430302c63626b69643d3232353430303930
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Status-Type = Start
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Authentic = RADIUS
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Session-Id = "1628808cd06d3273"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Session-Time = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Input-Octets = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Output-Octets = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Input-Packets = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Output-Packets = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Input-Gigawords = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Output-Gigawords = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Framed-IP-Address = 10.0.0.5
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Event-Timestamp = "Nov 22 2022 21:21:45 IST"
Segmentation fault
```
### Relevant log output from client utilities
Windows PPPoE client tries to connect.
### Backtrace from LLDB or GDB
```shell
#0 0x00007ffff6bb1230 in open64 () from /lib64/libc.so.6
No symbol table info available.
#1 0x0000000000449243 in detail_poll ()
No symbol table info available.
#2 0x000000000044a278 in detail_handler_thread ()
No symbol table info available.
#3 0x00007ffff6b45026 in ?? () from /lib64/libc.so.6
No symbol table info available.
#4 0x00007ffff6bc0d60 in clone () from /lib64/libc.so.6
No symbol table info available.
```
Tue Nov 22 21:31:19 2022 : Debug: with_specialix_jetstream_hack = no
Tue Nov 22 21:31:19 2022 : Debug: with_cisco_vsa_hack = no
Tue Nov 22 21:31:19 2022 : Debug: with_alvarion_vsa_hack = no
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_pap with path: /usr/lib/freeradius/rlm_pap.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_pap, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_pap
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "pap" from file /etc/freeradius/modules/pap
Tue Nov 22 21:31:19 2022 : Debug: pap {
Tue Nov 22 21:31:19 2022 : Debug: normalise = yes
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/modules/pap[21]: The item 'auto_header' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_chap with path: /usr/lib/freeradius/rlm_chap.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_chap, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_chap
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "chap" from file /etc/freeradius/modules/chap
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_exec with path: /usr/lib/freeradius/rlm_exec.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_exec, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_exec
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "exec" from file /etc/freeradius/modules/exec
Tue Nov 22 21:31:19 2022 : Debug: exec {
Tue Nov 22 21:31:19 2022 : Debug: wait = no
Tue Nov 22 21:31:19 2022 : Debug: input_pairs = "request"
Tue Nov 22 21:31:19 2022 : Debug: shell_escape = yes
Tue Nov 22 21:31:19 2022 : Debug: timeout = 10
Tue Nov 22 21:31:19 2022 : Warning: /etc/freeradius/modules/exec[28]: The item 'output' is defined, but is unused by the configuration
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading rlm_expr with path: /usr/lib/freeradius/rlm_expr.so
Tue Nov 22 21:31:19 2022 : Debug: Loaded rlm_expr, checking if it's valid
Tue Nov 22 21:31:19 2022 : Debug: # Loaded module rlm_expr
Tue Nov 22 21:31:19 2022 : Debug: # Loading module "expr" from file /etc/freeradius/modules/expr
Tue Nov 22 21:31:19 2022 : Debug: expr {
Tue Nov 22 21:31:19 2022 : Debug: safe_characters = "@abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.-_: /"
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: instantiate {
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: # Instantiating module "detail.example.com" from file /etc/freeradius/modules/detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: # Instantiating module "detail" from file /etc/freeradius/modules/detail
Tue Nov 22 21:31:19 2022 : Debug: # Instantiating module "preprocess" from file /etc/freeradius/modules/preprocess
Tue Nov 22 21:31:19 2022 : Debug: reading pairlist file /etc/freeradius/huntgroups
Tue Nov 22 21:31:19 2022 : Debug: reading pairlist file /etc/freeradius/hints
Tue Nov 22 21:31:19 2022 : Debug: # Instantiating module "pap" from file /etc/freeradius/modules/pap
Tue Nov 22 21:31:19 2022 : Debug: } # modules
Tue Nov 22 21:31:19 2022 : Debug: freeradius: #### Loading Virtual Servers ####
Tue Nov 22 21:31:19 2022 : Debug: server { # from file /etc/freeradius/freeradius.conf
Tue Nov 22 21:31:19 2022 : Error: /etc/freeradius/sites-enabled/default[7]: The authenticate section should be inside of a 'server { ... }' block!
Tue Nov 22 21:31:19 2022 : Debug: authenticate {
Tue Nov 22 21:31:19 2022 : Debug: Compiling Auth-Type PAP for attr Auth-Type
Tue Nov 22 21:31:19 2022 : Debug: group {
Tue Nov 22 21:31:19 2022 : Debug: pap
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Compiling Auth-Type CHAP for attr Auth-Type
Tue Nov 22 21:31:19 2022 : Debug: group {
Tue Nov 22 21:31:19 2022 : Debug: chap
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: } # authenticate
Tue Nov 22 21:31:19 2022 : Error: /etc/freeradius/sites-enabled/default[1]: The authorize section should be inside of a 'server { ... }' block!
Tue Nov 22 21:31:19 2022 : Debug: authorize {
Tue Nov 22 21:31:19 2022 : Debug: preprocess
Tue Nov 22 21:31:19 2022 : Debug: chap
Tue Nov 22 21:31:19 2022 : Debug: pap
Tue Nov 22 21:31:19 2022 : Debug: } # authorize
Tue Nov 22 21:31:19 2022 : Error: /etc/freeradius/sites-enabled/default[18]: The preacct section should be inside of a 'server { ... }' block!
Tue Nov 22 21:31:19 2022 : Debug: preacct {
Tue Nov 22 21:31:19 2022 : Debug: preprocess
Tue Nov 22 21:31:19 2022 : Debug: } # preacct
Tue Nov 22 21:31:19 2022 : Error: /etc/freeradius/sites-enabled/default[23]: The accounting section should be inside of a 'server { ... }' block!
Tue Nov 22 21:31:19 2022 : Debug: accounting {
Tue Nov 22 21:31:19 2022 : Debug: detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: } # accounting
Tue Nov 22 21:31:19 2022 : Debug: } # server
Tue Nov 22 21:31:19 2022 : Debug: server acct_detail.example.com { # from file /etc/freeradius/sites-enabled/robust-proxy-accounting
Tue Nov 22 21:31:19 2022 : Debug: accounting {
Tue Nov 22 21:31:19 2022 : Debug: detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: } # accounting
Tue Nov 22 21:31:19 2022 : Debug: } # server acct_detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: server home.example.com { # from file /etc/freeradius/sites-enabled/robust-proxy-accounting
Tue Nov 22 21:31:19 2022 : Debug: accounting {
Tue Nov 22 21:31:19 2022 : Debug: update {
Tue Nov 22 21:31:19 2022 : Debug: &control:Proxy-To-Realm := 'acct_realm.example.com'
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: } # accounting
Tue Nov 22 21:31:19 2022 : Debug: post-proxy {
Tue Nov 22 21:31:19 2022 : Debug: Compiling Post-Proxy-Type Fail-Accounting for attr Post-Proxy-Type
Tue Nov 22 21:31:19 2022 : Debug: group {
Tue Nov 22 21:31:19 2022 : Debug: detail.example.com
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Compiling Post-Proxy-Type Fail-Authentication for attr Post-Proxy-Type
Tue Nov 22 21:31:19 2022 : Debug: group {
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: } # post-proxy
Tue Nov 22 21:31:19 2022 : Debug: } # server home.example.com
Tue Nov 22 21:31:19 2022 : Debug: Created signal pipe. Read end FD 5, write end FD 6
Tue Nov 22 21:31:19 2022 : Debug: freeradius: #### Opening IP addresses and Ports ####
Tue Nov 22 21:31:19 2022 : Debug: Loading proto_acct with path: /usr/lib/freeradius/proto_acct.so
Tue Nov 22 21:31:19 2022 : Debug: Loading proto_acct failed: /usr/lib/freeradius/proto_acct.so: cannot open shared object file: No such file or directory - No such file or directory
Tue Nov 22 21:31:19 2022 : Debug: Loading library using linker search path(s)
Tue Nov 22 21:31:19 2022 : Debug: Defaults : /lib:/usr/lib
Tue Nov 22 21:31:19 2022 : Debug: Failed with error: proto_acct.so: cannot open shared object file: No such file or directory
Tue Nov 22 21:31:19 2022 : Debug: listen {
Tue Nov 22 21:31:19 2022 : Debug: type = "acct"
Tue Nov 22 21:31:19 2022 : Debug: ipaddr = *
Tue Nov 22 21:31:19 2022 : Debug: port = 0
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Loading proto_detail with path: /usr/lib/freeradius/proto_detail.so
Tue Nov 22 21:31:19 2022 : Debug: Loading proto_detail failed: /usr/lib/freeradius/proto_detail.so: cannot open shared object file: No such file or directory - No such file or directory
Tue Nov 22 21:31:19 2022 : Debug: Loading library using linker search path(s)
Tue Nov 22 21:31:19 2022 : Debug: Defaults : /lib:/usr/lib
Tue Nov 22 21:31:19 2022 : Debug: Failed with error: proto_detail.so: cannot open shared object file: No such file or directory
Tue Nov 22 21:31:19 2022 : Debug: listen {
Tue Nov 22 21:31:19 2022 : Debug: type = "detail"
Tue Nov 22 21:31:19 2022 : Debug: listen {
Tue Nov 22 21:31:19 2022 : Debug: filename = "/var/log/freeradius/radacct/detail.example.com/detail-*:*"
Tue Nov 22 21:31:19 2022 : Debug: load_factor = 10
Tue Nov 22 21:31:19 2022 : Debug: poll_interval = 1
Tue Nov 22 21:31:19 2022 : Debug: retry_interval = 30
Tue Nov 22 21:31:19 2022 : Debug: one_shot = no
Tue Nov 22 21:31:19 2022 : Debug: track = no
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: }
Tue Nov 22 21:31:19 2022 : Debug: Listening on acct address * port 1813
Tue Nov 22 21:31:19 2022 : Debug: Listening on detail file /var/log/freeradius/radacct/detail.example.com/detail-*:* as server home.example.com
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - User-Name = "test_bgbras"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-Identifier = "demo-bng"
Tue Nov 22 21:31:19 2022 : Info: Ready to process requests
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-IP-Address = 10.10.10.1
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-Port = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-Port-Id = "vlan100"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - NAS-Port-Type = Virtual
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Service-Type = Framed-User
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Framed-Protocol = PPP
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Calling-Station-Id = "f6:cb:f6:40:1f:a0"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Called-Station-Id = "1a:27:b5:27:bb:53"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Class = 0x6375693d746573745f6267627261732c73753d3934323033392c736b3d6969786c6c7a776f3333386f2c626b69643d32323534303038392c626b7269643d332c6d763d3532393135323231333836302c6d743d3630353235392c676d743d3630343830302c73703d312c73723d2c61633d312c75693d3934323033392c6363693d3138382c63633d47656e6572616c2c61733d312c636f3d2c64623d3130323430302c75623d3130323430302c63626b69643d3232353430303930
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Status-Type = Start
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Authentic = RADIUS
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Session-Id = "1628808cd06d3273"
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Session-Time = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Input-Octets = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Output-Octets = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Input-Packets = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Output-Packets = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Input-Gigawords = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Acct-Output-Gigawords = 0
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Framed-IP-Address = 10.0.0.5
Tue Nov 22 21:31:19 2022 : Debug: detail (/var/log/freeradius/radacct/detail.example.com/detail-*:*): Trying to read VP from line - Event-Timestamp = "Nov 22 2022 21:21:45 IST"
Segmentation fault
```
### Relevant log output from client utilities
A Windows PPPoE client tries to connect.
### Backtrace from LLDB or GDB
```shell
#0 0x00007ffff6bb1230 in open64 () from /lib64/libc.so.6
No symbol table info available.
#1 0x0000000000449243 in detail_poll ()
No symbol table info available.
#2 0x000000000044a278 in detail_handler_thread ()
No symbol table info available.
#3 0x00007ffff6b45026 in ?? () from /lib64/libc.so.6
No symbol table info available.
#4 0x00007ffff6bc0d60 in clone () from /lib64/libc.so.6
No symbol table info available.
```
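Additional context: the crash happens while the detail listener is replaying flat `Attribute = value` records (the `Trying to read VP from line - ...` lines above). For reference only — this is a hypothetical standalone sketch, not FreeRADIUS code — parsing one such detail-file line looks roughly like:

```python
# Minimal sketch of parsing one FreeRADIUS "detail" file attribute line,
# e.g.:  User-Name = "test_bgbras"
# Illustrative helper only; not part of FreeRADIUS itself.

def parse_detail_line(line: str) -> tuple[str, str]:
    """Split 'Attr-Name = value' into (name, value), stripping quotes."""
    name, _, value = line.strip().partition(" = ")
    value = value.strip()
    # String values are double-quoted in detail files; unquote them.
    if value.startswith('"') and value.endswith('"') and len(value) >= 2:
        value = value[1:-1]
    return name, value

print(parse_detail_line('User-Name = "test_bgbras"'))
# -> ('User-Name', 'test_bgbras')
print(parse_detail_line('Acct-Status-Type = Start'))
# -> ('Acct-Status-Type', 'Start')
```

The records echoed in the log (NAS-IP-Address, Acct-Session-Id, Event-Timestamp, etc.) all follow this shape; the segfault occurs in `detail_poll()` after the last such line is read.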
debug detail var log freeradius radacct detail example com detail trying to read vp from line acct output gigawords tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line framed ip address tue nov debug detail var log freeradius radacct detail example com detail trying to read vp from line event timestamp nov ist segmentation fault relevant log output from client utilities windows pppoe client try to connect backtrace from lldb or gdb shell in from libc so no symbol table info available in detail poll no symbol table info available in detail handler thread no symbol table info available in from libc so no symbol table info available in clone from libc so no symbol table info available
| 0
|
159,144
| 6,041,203,894
|
IssuesEvent
|
2017-06-10 21:49:57
|
svof/svof
|
https://api.github.com/repos/svof/svof
|
closed
|
Wrong definition of `a_darkyellow`
|
bug low priority simple difficulty up for grabs
|
> Sent By: Lynara On 2017-03-20 00:54:22
A_darkyellow, a color made by svo, is wrong - it should be {179,179,0}. It is {0,179,0}. Which is a_darkgreen.
|
1.0
|
Wrong definition of `a_darkyellow` - > Sent By: Lynara On 2017-03-20 00:54:22
A_darkyellow, a color made by svo, is wrong - it should be {179,179,0}. It is {0,179,0}. Which is a_darkgreen.
|
non_test
|
wrong definition of a darkyellow sent by lynara on a darkyellow a color made by svo is wrong it should be it is which is a darkgreen
| 0
|
211,385
| 7,200,716,061
|
IssuesEvent
|
2018-02-05 19:59:22
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
Server 0.7.0.0 - Consistent Server Crashing.
|
High Priority
|
[ServerCrash.txt](https://github.com/StrangeLoopGames/EcoIssues/files/1692641/ServerCrash.txt)
It seems to be roughly every half an hour that the server is online that it'll crash with the error log that has been attached above. This may be a Mono issue but I just thought I'd report the error log so someone could have a look over it.
|
1.0
|
Server 0.7.0.0 - Consistent Server Crashing. - [ServerCrash.txt](https://github.com/StrangeLoopGames/EcoIssues/files/1692641/ServerCrash.txt)
It seems to be roughly every half an hour that the server is online that it'll crash with the error log that has been attached above. This may be a Mono issue but I just thought I'd report the error log so someone could have a look over it.
|
non_test
|
server consistent server crashing it seems to be roughly every half an hour that the server is online that it ll crash with the error log that has been attached above this may be a mono issue but i just thought i d report the error log so someone could have a look over it
| 0
|
49,923
| 13,187,292,663
|
IssuesEvent
|
2020-08-13 02:57:12
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
opened
|
[dataclasses] SuperDST pulse width (again) (Trac #2296)
|
Incomplete Migration Migrated from Trac combo core defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2296">https://code.icecube.wisc.edu/ticket/2296</a>, reported by david.schultz and owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-06-04T16:18:43",
"description": "Apparently if the pulses to merge are at the end of the pulse list in SuperDST, the merge does not happen. Example:\n\n{{{\n[I3RecoPulse:\n Time : 12955\n Charge : 3.275\n Width : 0\n Flags : LC ATWD FADC\n],\n[I3RecoPulse:\n Time : 12955\n Charge : 1.975\n Width : 1\n Flags : LC ATWD FADC\n]\n}}}\n\nLooks like this is because the last pulse is not considered in the loop.",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"_ts": "1559665123695793",
"component": "combo core",
"summary": "[dataclasses] SuperDST pulse width (again)",
"priority": "blocker",
"keywords": "",
"time": "2019-06-04T16:02:44",
"milestone": "Summer Solstice 2019",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[dataclasses] SuperDST pulse width (again) (Trac #2296) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2296">https://code.icecube.wisc.edu/ticket/2296</a>, reported by david.schultz and owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-06-04T16:18:43",
"description": "Apparently if the pulses to merge are at the end of the pulse list in SuperDST, the merge does not happen. Example:\n\n{{{\n[I3RecoPulse:\n Time : 12955\n Charge : 3.275\n Width : 0\n Flags : LC ATWD FADC\n],\n[I3RecoPulse:\n Time : 12955\n Charge : 1.975\n Width : 1\n Flags : LC ATWD FADC\n]\n}}}\n\nLooks like this is because the last pulse is not considered in the loop.",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"_ts": "1559665123695793",
"component": "combo core",
"summary": "[dataclasses] SuperDST pulse width (again)",
"priority": "blocker",
"keywords": "",
"time": "2019-06-04T16:02:44",
"milestone": "Summer Solstice 2019",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
|
non_test
|
superdst pulse width again trac migrated from json status closed changetime description apparently if the pulses to merge are at the end of the pulse list in superdst the merge does not happen example n n n n n n nlooks like this is because the last pulse is not considered in the loop reporter david schultz cc resolution fixed ts component combo core summary superdst pulse width again priority blocker keywords time milestone summer solstice owner david schultz type defect
| 0
|
692,181
| 23,725,271,268
|
IssuesEvent
|
2022-08-30 18:58:21
|
cds-snc/notification-planning
|
https://api.github.com/repos/cds-snc/notification-planning
|
closed
|
(File Upload) Multiple Labels.
|
Accessiblity | Accessibilité Medium Priority | Priorité moyenne
|
# File Upload, Fields
Visibility: 2
Evaluation: Does not support
Success Criteria: 3.3.2: Labels or Instructions
Needs design:
### Description
For file upload inputs, there appear to be three <label> elements associated with the input. This may cause confusion for an AT and may result in the label not being reported correctly. There should only be one <label> element used.

|
1.0
|
(File Upload) Multiple Labels. - # File Upload, Fields
Visibility: 2
Evaluation: Does not support
Success Criteria: 3.3.2: Labels or Instructions
Needs design:
### Description
For file upload inputs, there appear to be three <label> elements associated with the input. This may cause confusion for an AT and may result in the label not being reported correctly. There should only be one <label> element used.

|
non_test
|
file upload multiple labels file upload fields visibility evaluation does not support success criteria labels or instructions needs design description for file upload inputs there appear to be three elements associated with the input this may cause confusion for an at and may result in the label not being reported correctly there should only be one element used
| 0
|
73,569
| 7,344,404,262
|
IssuesEvent
|
2018-03-07 14:35:48
|
kubeflow/kubeflow
|
https://api.github.com/repos/kubeflow/kubeflow
|
closed
|
E2E Testing For Kubeflow.
|
testing
|
We need to setup continuous E2E testing for google/kubeflow
- [ ] Create a basic an E2E test
- [ ] Deploy and verify JupyterHub is working
- [X] Deploy and verify TfJob is working
- [ ] Verify TfServing is working
- [X] Setup prow for google/kubeflow
- [X] Setup presubmit tests
- [ ] Setup postsubmit tests
I think it makes sense to write the E2E test first and then once we have it we can setup prow integration.
|
1.0
|
E2E Testing For Kubeflow. - We need to setup continuous E2E testing for google/kubeflow
- [ ] Create a basic an E2E test
- [ ] Deploy and verify JupyterHub is working
- [X] Deploy and verify TfJob is working
- [ ] Verify TfServing is working
- [X] Setup prow for google/kubeflow
- [X] Setup presubmit tests
- [ ] Setup postsubmit tests
I think it makes sense to write the E2E test first and then once we have it we can setup prow integration.
|
test
|
testing for kubeflow we need to setup continuous testing for google kubeflow create a basic an test deploy and verify jupyterhub is working deploy and verify tfjob is working verify tfserving is working setup prow for google kubeflow setup presubmit tests setup postsubmit tests i think it makes sense to write the test first and then once we have it we can setup prow integration
| 1
|
95,885
| 8,580,121,351
|
IssuesEvent
|
2018-11-13 11:02:05
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
reopened
|
[Metricbeat] Flaky test_couchbase.Test.test_couchbase_0_bucket
|
:Testing Metricbeat flaky-test
|
link: https://beats-ci.elastic.co/job/elastic+beats+pull-request+multijob-linux/5782/beat=metricbeat,label=ubuntu/testReport/junit/test_couchbase/Test/test_couchbase_0_bucket/
platform: linux
```
Element counts were not equal:
First has 1, Second has 0: 'couchbase'
First has 0, Second has 1: u'error'
-------------------- >> begin captured stdout << ---------------------
{u'beat': {u'hostname': u'2fcadc37c9ec', u'name': u'2fcadc37c9ec', u'version': u'7.0.0-alpha1'}, u'@timestamp': u'2018-07-26T19:19:27.848Z', u'host': {u'name': u'2fcadc37c9ec'}, u'error': {u'message': u'HTTP error 401 in bucket: 401 Unauthorized'}, u'metricset': {u'rtt': 6355, u'host': u'couchbase:8091', u'name': u'bucket', u'module': u'couchbase'}}
--------------------- >> end captured stdout << ----------------------
-------------------- >> begin captured logging << --------------------
compose.config.config: DEBUG: Using configuration files: ./docker-compose.yml
docker.utils.config: DEBUG: Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
docker.utils.config: DEBUG: No config file found
docker.utils.config: DEBUG: Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
docker.utils.config: DEBUG: No config file found
compose.parallel: DEBUG: Pending: set([<Container: metricbeat17e29fb47ee56c6580032e2f535818f05f192ae4_mongodb_1 (14feab)>])
compose.parallel: DEBUG: Starting producer thread for <Container: metricbeat17e29fb47ee56c6580032e2f535818f05f192ae4_mongodb_1 (14feab)>
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Finished processing: <Container: metricbeat17e29fb47ee56c6580032e2f535818f05f192ae4_mongodb_1 (14feab)>
compose.parallel: DEBUG: Pending: set([])
compose.config.config: DEBUG: Using configuration files: ./docker-compose.yml
docker.utils.config: DEBUG: Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
docker.utils.config: DEBUG: No config file found
docker.utils.config: DEBUG: Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
docker.utils.config: DEBUG: No config file found
compose.service: INFO: Building couchbase
docker.api.build: DEBUG: Looking for auth config
docker.api.build: DEBUG: No auth config in memory - loading from filesystem
docker.utils.config: DEBUG: Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
docker.utils.config: DEBUG: No config file found
docker.api.build: DEBUG: No auth config found
compose.parallel: DEBUG: Pending: set([<Service: couchbase>])
compose.parallel: DEBUG: Starting producer thread for <Service: couchbase>
compose.parallel: DEBUG: Pending: set([ServiceName(project='metricbeat17e29fb47ee56c6580032e2f535818f05f192ae4', service='couchbase', number=1)])
compose.parallel: DEBUG: Starting producer thread for ServiceName(project='metricbeat17e29fb47ee56c6580032e2f535818f05f192ae4', service='couchbase', number=1)
compose.service: DEBUG: Added config hash: 35e2347d8dc619864aa81fd019891e9aff24166ad342271b2305019314494b3a
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Finished processing: ServiceName(project='metricbeat17e29fb47ee56c6580032e2f535818f05f192ae4', service='couchbase', number=1)
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Finished processing: <Service: couchbase>
compose.parallel: DEBUG: Pending: set([])
--------------------- >> end captured logging << ---------------------
Stacktrace
````
|
2.0
|
[Metricbeat] Flaky test_couchbase.Test.test_couchbase_0_bucket - link: https://beats-ci.elastic.co/job/elastic+beats+pull-request+multijob-linux/5782/beat=metricbeat,label=ubuntu/testReport/junit/test_couchbase/Test/test_couchbase_0_bucket/
platform: linux
```
Element counts were not equal:
First has 1, Second has 0: 'couchbase'
First has 0, Second has 1: u'error'
-------------------- >> begin captured stdout << ---------------------
{u'beat': {u'hostname': u'2fcadc37c9ec', u'name': u'2fcadc37c9ec', u'version': u'7.0.0-alpha1'}, u'@timestamp': u'2018-07-26T19:19:27.848Z', u'host': {u'name': u'2fcadc37c9ec'}, u'error': {u'message': u'HTTP error 401 in bucket: 401 Unauthorized'}, u'metricset': {u'rtt': 6355, u'host': u'couchbase:8091', u'name': u'bucket', u'module': u'couchbase'}}
--------------------- >> end captured stdout << ----------------------
-------------------- >> begin captured logging << --------------------
compose.config.config: DEBUG: Using configuration files: ./docker-compose.yml
docker.utils.config: DEBUG: Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
docker.utils.config: DEBUG: No config file found
docker.utils.config: DEBUG: Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
docker.utils.config: DEBUG: No config file found
compose.parallel: DEBUG: Pending: set([<Container: metricbeat17e29fb47ee56c6580032e2f535818f05f192ae4_mongodb_1 (14feab)>])
compose.parallel: DEBUG: Starting producer thread for <Container: metricbeat17e29fb47ee56c6580032e2f535818f05f192ae4_mongodb_1 (14feab)>
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Finished processing: <Container: metricbeat17e29fb47ee56c6580032e2f535818f05f192ae4_mongodb_1 (14feab)>
compose.parallel: DEBUG: Pending: set([])
compose.config.config: DEBUG: Using configuration files: ./docker-compose.yml
docker.utils.config: DEBUG: Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
docker.utils.config: DEBUG: No config file found
docker.utils.config: DEBUG: Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
docker.utils.config: DEBUG: No config file found
compose.service: INFO: Building couchbase
docker.api.build: DEBUG: Looking for auth config
docker.api.build: DEBUG: No auth config in memory - loading from filesystem
docker.utils.config: DEBUG: Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
docker.utils.config: DEBUG: No config file found
docker.api.build: DEBUG: No auth config found
compose.parallel: DEBUG: Pending: set([<Service: couchbase>])
compose.parallel: DEBUG: Starting producer thread for <Service: couchbase>
compose.parallel: DEBUG: Pending: set([ServiceName(project='metricbeat17e29fb47ee56c6580032e2f535818f05f192ae4', service='couchbase', number=1)])
compose.parallel: DEBUG: Starting producer thread for ServiceName(project='metricbeat17e29fb47ee56c6580032e2f535818f05f192ae4', service='couchbase', number=1)
compose.service: DEBUG: Added config hash: 35e2347d8dc619864aa81fd019891e9aff24166ad342271b2305019314494b3a
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Finished processing: ServiceName(project='metricbeat17e29fb47ee56c6580032e2f535818f05f192ae4', service='couchbase', number=1)
compose.parallel: DEBUG: Pending: set([])
compose.parallel: DEBUG: Finished processing: <Service: couchbase>
compose.parallel: DEBUG: Pending: set([])
--------------------- >> end captured logging << ---------------------
Stacktrace
````
|
test
|
flaky test couchbase test test couchbase bucket link platform linux element counts were not equal first has second has couchbase first has second has u error begin captured stdout u beat u hostname u u name u u version u u timestamp u u host u name u u error u message u http error in bucket unauthorized u metricset u rtt u host u couchbase u name u bucket u module u couchbase end captured stdout begin captured logging compose config config debug using configuration files docker compose yml docker utils config debug trying paths docker utils config debug no config file found docker utils config debug trying paths docker utils config debug no config file found compose parallel debug pending set compose parallel debug starting producer thread for compose parallel debug pending set compose parallel debug pending set compose parallel debug pending set compose parallel debug pending set compose parallel debug pending set compose parallel debug pending set compose parallel debug finished processing compose parallel debug pending set compose config config debug using configuration files docker compose yml docker utils config debug trying paths docker utils config debug no config file found docker utils config debug trying paths docker utils config debug no config file found compose service info building couchbase docker api build debug looking for auth config docker api build debug no auth config in memory loading from filesystem docker utils config debug trying paths docker utils config debug no config file found docker api build debug no auth config found compose parallel debug pending set compose parallel debug starting producer thread for compose parallel debug pending set compose parallel debug starting producer thread for servicename project service couchbase number compose service debug added config hash compose parallel debug pending set compose parallel debug pending set compose parallel debug pending set compose parallel debug pending set compose parallel debug 
pending set compose parallel debug pending set compose parallel debug pending set compose parallel debug pending set compose parallel debug pending set compose parallel debug pending set compose parallel debug pending set compose parallel debug pending set compose parallel debug pending set compose parallel debug pending set compose parallel debug pending set compose parallel debug finished processing servicename project service couchbase number compose parallel debug pending set compose parallel debug finished processing compose parallel debug pending set end captured logging stacktrace
| 1
|
161,894
| 13,879,643,500
|
IssuesEvent
|
2020-10-17 15:20:30
|
DnanaDev/Covid19-India-Analysis-and-Forecasting
|
https://api.github.com/repos/DnanaDev/Covid19-India-Analysis-and-Forecasting
|
closed
|
Multi-Step Forecast when using Lagged features
|
bug documentation
|
The models use multiple lagged features of the target from t-1 to t-7. This is a problem at inference time. Suppose you need to predict 2 days into the future. Two possible solutions exists:
1. Direct approach - Predict for the next day. Use all the data to train a new model. predict for the next day.
2. Recursive Approach - Predict for the next day. Use the prediction to as a feature t-1 for the next day and so on.
References : https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/151927
https://machinelearningmastery.com/multi-step-time-series-forecasting/
[Leakage]
For both growth ratio and growth factor forecasts.
The way I'm estimating performance on the test set is an example of leakage. The model is being fed the actual values as lagged values on day t+1 and so on.
|
1.0
|
Multi-Step Forecast when using Lagged features - The models use multiple lagged features of the target from t-1 to t-7. This is a problem at inference time. Suppose you need to predict 2 days into the future. Two possible solutions exists:
1. Direct approach - Predict for the next day. Use all the data to train a new model. predict for the next day.
2. Recursive Approach - Predict for the next day. Use the prediction to as a feature t-1 for the next day and so on.
References : https://www.kaggle.com/c/m5-forecasting-accuracy/discussion/151927
https://machinelearningmastery.com/multi-step-time-series-forecasting/
[Leakage]
For both growth ratio and growth factor forecasts.
The way I'm estimating performance on the test set is an example of leakage. The model is being fed the actual values as lagged values on day t+1 and so on.
|
non_test
|
multi step forecast when using lagged features the models use multiple lagged features of the target from t to t this is a problem at inference time suppose you need to predict days into the future two possible solutions exists direct approach predict for the next day use all the data to train a new model predict for the next day recursive approach predict for the next day use the prediction to as a feature t for the next day and so on references for both growth ratio and growth factor forecasts the way i m estimating performance on the test set is an example of leakage the model is being fed the actual values as lagged values on day t and so on
| 0
|
233,133
| 18,950,393,657
|
IssuesEvent
|
2021-11-18 14:39:17
|
geosolutions-it/geonode
|
https://api.github.com/repos/geosolutions-it/geonode
|
closed
|
Tests for the release of GN 3.3.0
|
Epic Testing
|
@ElenaGallo we are ready to run a full test of https://development.demo.geonode.org/ in view of the release of GN 3.3.0 next week.
Issues:
- mobile
- [x] [video](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6156164/scenario-snapshots/21581429/test-snapshots/29431160) https://github.com/geonode/geonode/issues/8322
- [ ] **CAN'T REPRODUCE** [map TOC](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6156164/scenario-snapshots/21581430/test-snapshots/29431161) https://github.com/geonode/geonode/issues/8323
- [x] [geostory](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6156164/scenario-snapshots/21581431/test-snapshots/29431162) https://github.com/geonode/geonode/issues/8324
- [x] [dashboard](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6156164/scenario-snapshots/21581432/test-snapshots/29431163) https://github.com/GeoNode/geonode-mapstore-client/issues/580
- download
- [x] https://github.com/geonode/geonode-mapstore-client/issues/573
- [x] https://github.com/geonode/geonode-mapstore-client/issues/572
- [x] [thumbnails cache](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6123685/scenario-snapshots/21479974/test-snapshots/29281110) https://github.com/geonode/geonode/issues/8312
- [x] add thumbnails from save as ([geostory](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6123719/scenario-snapshots/21480087/test-snapshots/29281223) and [dashbaord](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6123725/scenario-snapshots/21480110/test-snapshots/29281246)) https://github.com/GeoNode/geonode/issues/8325
- dashboards
- [x] [dashboards and legends](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6123723/scenario-snapshots/21480102/test-snapshots/29281238) https://github.com/geonode/geonode-mapstore-client/issues/579
- [x] [interactions stop working after saving](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6123723/scenario-snapshots/21680501/test-snapshots/29578552) https://github.com/GeoNode/geonode-mapstore-client/issues/584
- [x] [add layer to existing](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6123697/scenario-snapshots/21499202/test-snapshots/29306206) map https://github.com/GeoNode/geonode-mapstore-client/issues/582
- [ ] [annotations not printed](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6123712/scenario-snapshots/21499555/test-snapshots/29306729) https://github.com/GeoNode/geonode-mapstore-client/issues/593
|
1.0
|
Tests for the release of GN 3.3.0 - @ElenaGallo we are ready to run a full test of https://development.demo.geonode.org/ in view of the release of GN 3.3.0 next week.
Issues:
- mobile
- [x] [video](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6156164/scenario-snapshots/21581429/test-snapshots/29431160) https://github.com/geonode/geonode/issues/8322
- [ ] **CAN'T REPRODUCE** [map TOC](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6156164/scenario-snapshots/21581430/test-snapshots/29431161) https://github.com/geonode/geonode/issues/8323
- [x] [geostory](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6156164/scenario-snapshots/21581431/test-snapshots/29431162) https://github.com/geonode/geonode/issues/8324
- [x] [dashboard](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6156164/scenario-snapshots/21581432/test-snapshots/29431163) https://github.com/GeoNode/geonode-mapstore-client/issues/580
- download
- [x] https://github.com/geonode/geonode-mapstore-client/issues/573
- [x] https://github.com/geonode/geonode-mapstore-client/issues/572
- [x] [thumbnails cache](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6123685/scenario-snapshots/21479974/test-snapshots/29281110) https://github.com/geonode/geonode/issues/8312
- [x] add thumbnails from save as ([geostory](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6123719/scenario-snapshots/21480087/test-snapshots/29281223) and [dashbaord](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6123725/scenario-snapshots/21480110/test-snapshots/29281246)) https://github.com/GeoNode/geonode/issues/8325
- dashboards
- [x] [dashboards and legends](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6123723/scenario-snapshots/21480102/test-snapshots/29281238) https://github.com/geonode/geonode-mapstore-client/issues/579
- [x] [interactions stop working after saving](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6123723/scenario-snapshots/21680501/test-snapshots/29578552) https://github.com/GeoNode/geonode-mapstore-client/issues/584
- [x] [add layer to existing](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6123697/scenario-snapshots/21499202/test-snapshots/29306206) map https://github.com/GeoNode/geonode-mapstore-client/issues/582
- [ ] [annotations not printed](https://studio.cucumber.io/projects/291214/test-runs/605303/folder-snapshots/6123712/scenario-snapshots/21499555/test-snapshots/29306729) https://github.com/GeoNode/geonode-mapstore-client/issues/593
|
test
|
tests for the release of gn elenagallo we are ready to run a full test of in view of the release of gn next week issues mobile can t reproduce download add thumbnails from save as and dashboards map
| 1
|
311,401
| 9,532,633,643
|
IssuesEvent
|
2019-04-29 19:06:36
|
dojot/dojot
|
https://api.github.com/repos/dojot/dojot
|
opened
|
[Flowbroker] Publishing on device using device out node causes persister failure
|
Priority:Critical Team:Backend Type:Bug
|
Publishing on device using device out node causes persister failure.
```
persister_1 | Exception in thread Persister:
persister_1 | Traceback (most recent call last):
persister_1 | File "/usr/local/lib/python3.6/threading.py", line 916, in _bootstrap_inner
persister_1 | self.run()
persister_1 | File "/usr/src/venv/lib/python3.6/site-packages/dojot.module-0.0.1a4-py3.6.egg/dojot/module/kafka/consumer.py", line 123, in run
persister_1 | self.callback(msg.topic, msg.value)
persister_1 | File "/usr/src/venv/lib/python3.6/site-packages/dojot.module-0.0.1a4-py3.6.egg/dojot/module/messenger.py", line 327, in __process_kafka_messages
persister_1 | self.emit(self.topics[topic]['subject'], self.topics[topic]['tenant'], "message", messages)
persister_1 | File "/usr/src/venv/lib/python3.6/site-packages/dojot.module-0.0.1a4-py3.6.egg/dojot/module/messenger.py", line 194, in emit
persister_1 | callback(tenant, data)
persister_1 | File "history/subscriber/persister.py", line 136, in handle_event_data
persister_1 | del metadata['timestamp']
persister_1 | KeyError: 'timestamp'
```
**Affected Version**: 61.1-20190423
|
1.0
|
[Flowbroker] Publishing on device using device out node causes persister failure - Publishing on device using device out node causes persister failure.
```
persister_1 | Exception in thread Persister:
persister_1 | Traceback (most recent call last):
persister_1 | File "/usr/local/lib/python3.6/threading.py", line 916, in _bootstrap_inner
persister_1 | self.run()
persister_1 | File "/usr/src/venv/lib/python3.6/site-packages/dojot.module-0.0.1a4-py3.6.egg/dojot/module/kafka/consumer.py", line 123, in run
persister_1 | self.callback(msg.topic, msg.value)
persister_1 | File "/usr/src/venv/lib/python3.6/site-packages/dojot.module-0.0.1a4-py3.6.egg/dojot/module/messenger.py", line 327, in __process_kafka_messages
persister_1 | self.emit(self.topics[topic]['subject'], self.topics[topic]['tenant'], "message", messages)
persister_1 | File "/usr/src/venv/lib/python3.6/site-packages/dojot.module-0.0.1a4-py3.6.egg/dojot/module/messenger.py", line 194, in emit
persister_1 | callback(tenant, data)
persister_1 | File "history/subscriber/persister.py", line 136, in handle_event_data
persister_1 | del metadata['timestamp']
persister_1 | KeyError: 'timestamp'
```
**Affected Version**: 61.1-20190423
|
non_test
|
publishing on device using device out node causes persister failure publishing on device using device out node causes persister failure persister exception in thread persister persister traceback most recent call last persister file usr local lib threading py line in bootstrap inner persister self run persister file usr src venv lib site packages dojot module egg dojot module kafka consumer py line in run persister self callback msg topic msg value persister file usr src venv lib site packages dojot module egg dojot module messenger py line in process kafka messages persister self emit self topics self topics message messages persister file usr src venv lib site packages dojot module egg dojot module messenger py line in emit persister callback tenant data persister file history subscriber persister py line in handle event data persister del metadata persister keyerror timestamp affected version
| 0
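The dojot persister record above fails on `del metadata['timestamp']` when the key is absent. A hedged sketch of the defensive fix, assuming only the field names visible in the traceback (the handler itself is hypothetical):

```python
# Guarding against a missing "timestamp" key, as in the dojot persister
# traceback above. dict.pop with a default never raises KeyError,
# unlike a bare `del metadata['timestamp']`.
def handle_event_data(metadata: dict) -> dict:
    metadata.pop("timestamp", None)  # safe even when the key is absent
    return metadata

print(handle_event_data({"timestamp": 1, "tenant": "t1"}))  # {'tenant': 't1'}
print(handle_event_data({"tenant": "t1"}))                  # no KeyError raised
```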
|
46,089
| 9,882,746,911
|
IssuesEvent
|
2019-06-24 17:39:04
|
MicrosoftDocs/live-share
|
https://api.github.com/repos/MicrosoftDocs/live-share
|
closed
|
[VS Code] Removing Terminal: Cannot read property 'terminalId' of undefined
|
area: terminal external logs attached vscode
|
<!--
For Visual Studio problems/feedback, please use the "Report a Problem..." feature built into the tool. See https://aka.ms/vsls-vsproblem.
For VS Code issues, attach verbose logs as follows:
1. Press F1 (or Ctrl-Shift-P), type "export logs" and run the "Live Share: Export Logs" command.
2. Drag and drop the zip to the issue on this screen and wait for it to upload before creating the issue.
For feature requests, please include enough of this same info so we know if the request is tool or language/platform specific.
-->
## Error:
Removing Terminal: Cannot read property 'terminalId' of undefined
## Steps to Reproduce:
1.
2.
||Version Data|
|-:|:-|
|**extensionName**|VSLS|
|**extensionVersion**|0.3.954|
|**protocolVersion**|2.2|
|**applicationName**|VSCode|
|**applicationVersion**|1.29.1|
|**platformName**|Windows|
|**platformVersion**|10.0.17763|
|
1.0
|
[VS Code] Removing Terminal: Cannot read property 'terminalId' of undefined - <!--
For Visual Studio problems/feedback, please use the "Report a Problem..." feature built into the tool. See https://aka.ms/vsls-vsproblem.
For VS Code issues, attach verbose logs as follows:
1. Press F1 (or Ctrl-Shift-P), type "export logs" and run the "Live Share: Export Logs" command.
2. Drag and drop the zip to the issue on this screen and wait for it to upload before creating the issue.
For feature requests, please include enough of this same info so we know if the request is tool or language/platform specific.
-->
## Error:
Removing Terminal: Cannot read property 'terminalId' of undefined
## Steps to Reproduce:
1.
2.
||Version Data|
|-:|:-|
|**extensionName**|VSLS|
|**extensionVersion**|0.3.954|
|**protocolVersion**|2.2|
|**applicationName**|VSCode|
|**applicationVersion**|1.29.1|
|**platformName**|Windows|
|**platformVersion**|10.0.17763|
|
non_test
|
removing terminal cannot read property terminalid of undefined for visual studio problems feedback please use the report a problem feature built into the tool see for vs code issues attach verbose logs as follows press or ctrl shift p type export logs and run the live share export logs command drag and drop the zip to the issue on this screen and wait for it to upload before creating the issue for feature requests please include enough of this same info so we know if the request is tool or language platform specific error removing terminal cannot read property terminalid of undefined steps to reproduce version data extensionname vsls extensionversion protocolversion applicationname vscode applicationversion platformname windows platformversion
| 0
|
56,377
| 6,518,056,414
|
IssuesEvent
|
2017-08-28 05:51:27
|
ThaDafinser/ZfcDatagrid
|
https://api.github.com/repos/ThaDafinser/ZfcDatagrid
|
closed
|
Cannot inject custom formatter
|
Verify/test needed
|
Though a column can use translation setting the `setTranslationEnabled()` method to true I need to translate my value inside a custom formatter.
This is more of a _Question_ than an _Issue_.
Since this is my _DI_ attempt I tried to inject the `viewRenderer` into my custom formatter inside my `module.config.php`:
``` php
'service_manager' => array(
'factories' => [
'ContractStateFormatter' => function($sm) {
$viewRenderer = $sm->get('ViewRenderer');
$contractStateFormatter = new \Application\Datagrid\Column\Formatter\ContractState();
$contractStateFormatter->setView($viewRenderer);
return $contractStateFormatter;
}
]
),
```
Successfully setting the `formatter` on the `column`:
``` php
$col = new Column\Select('state_name');
$col->setLabel('State');
$col->setWidth(5);
$contractState = $this->getServiceLocator()->get('ContractStateFormatter');
$col->setFormatter($contractState);
#$col->setFormatter(new Formatter\ContractState()); // working fine
$col->setFilterDefaultValue($state);
$col->setTranslationEnabled(true);
$grid->addColumn($col);
```
As you can see before the _DI_ the `formatter` was working fine.
Here is the `formatter`:
``` php
<?php
namespace Application\Datagrid\Column\Formatter;
use ZfcDatagrid\Column\Formatter\AbstractFormatter;
use ZfcDatagrid\Column\AbstractColumn;
class ContractState extends AbstractFormatter
{
protected $validRenderers = array(
'jqGrid',
'bootstrapTable'
);
protected $view;
public function setView($view)
{
#echo $view->translate('Home');
#$this->view = 'test';
$this->view = $view;
}
public function getView()
{
return $this->view;
}
public function getFormattedValue(AbstractColumn $column)
{
$row = $this->getRowData();
$this->view->translate($row['state_name']);
$html = sprintf('<span class="state_name">%s</span><br>', $row['state_name']); // translation getTranslator()
// some custom formatting
return $html;
}
}
```
The setting works fine, the translation works fine. But though everything worked fine before I now get the following error:
**Could not save the datagrid cache. Does the directory "/home/.../Zend/workspaces/DefaultWorkspace10/PQ2/data/ZfcDatagrid" exists and is writeable?**
This actually makes no sense, right? Maybe the problem is caused by some kind of overhead or conflict with the `view` attribute. But I couldn't find any conflict inside the `formatter`.
Again, this is my first _DI_ attempt. Please tell me if my approach is wrong.
Thanks
|
1.0
|
Cannot inject custom formatter - Though a column can use translation setting the `setTranslationEnabled()` method to true I need to translate my value inside a custom formatter.
This is more of a _Question_ than an _Issue_.
Since this is my _DI_ attempt I tried to inject the `viewRenderer` into my custom formatter inside my `module.config.php`:
``` php
'service_manager' => array(
'factories' => [
'ContractStateFormatter' => function($sm) {
$viewRenderer = $sm->get('ViewRenderer');
$contractStateFormatter = new \Application\Datagrid\Column\Formatter\ContractState();
$contractStateFormatter->setView($viewRenderer);
return $contractStateFormatter;
}
]
),
```
Successfully setting the `formatter` on the `column`:
``` php
$col = new Column\Select('state_name');
$col->setLabel('State');
$col->setWidth(5);
$contractState = $this->getServiceLocator()->get('ContractStateFormatter');
$col->setFormatter($contractState);
#$col->setFormatter(new Formatter\ContractState()); // working fine
$col->setFilterDefaultValue($state);
$col->setTranslationEnabled(true);
$grid->addColumn($col);
```
As you can see before the _DI_ the `formatter` was working fine.
Here is the `formatter`:
``` php
<?php
namespace Application\Datagrid\Column\Formatter;
use ZfcDatagrid\Column\Formatter\AbstractFormatter;
use ZfcDatagrid\Column\AbstractColumn;
class ContractState extends AbstractFormatter
{
protected $validRenderers = array(
'jqGrid',
'bootstrapTable'
);
protected $view;
public function setView($view)
{
#echo $view->translate('Home');
#$this->view = 'test';
$this->view = $view;
}
public function getView()
{
return $this->view;
}
public function getFormattedValue(AbstractColumn $column)
{
$row = $this->getRowData();
$this->view->translate($row['state_name']);
$html = sprintf('<span class="state_name">%s</span><br>', $row['state_name']); // translation getTranslator()
// some custom formatting
return $html;
}
}
```
The setting works fine, the translation works fine. But though everything worked fine before I now get the following error:
**Could not save the datagrid cache. Does the directory "/home/.../Zend/workspaces/DefaultWorkspace10/PQ2/data/ZfcDatagrid" exists and is writeable?**
This actually makes no sense, right? Maybe the problem is caused by some kind of overhead or conflict with the `view` attribute. But I couldn't find any conflict inside the `formatter`.
Again, this is my first _DI_ attempt. Please tell me if my approach is wrong.
Thanks
|
test
|
cannot inject custom formatter though a column can use translation setting the settranslationenabled method to true i need to translate my value inside a custom formatter this is more of question than an issue since this is my di attempt i tried to inject the viewrenderer into my custom formatter inside my module config php php service manager array factories contractstateformatter function sm viewrenderer sm get viewrenderer contractstateformatter new application datagrid column formatter contractstate contractstateformatter setview viewrenderer return contractstateformatter successfully setting the formatter on the column php col new column select state name col setlabel state col setwidth contractstate this getservicelocator get contractstateformatter col setformatter contractstate col setformatter new formatter contractstate working fine col setfilterdefaultvalue state col settranslationenabled true grid addcolumn col as you can see before the di the formatter was working fine here is the formatter php php namespace application datagrid column formatter use zfcdatagrid column formatter abstractformatter use zfcdatagrid column abstractcolumn class contractstate extends abstractformatter protected validrenderers array jqgrid bootstraptable protected view public function setview view echo view translate home this view test this view view public function getview return this view public function getformattedvalue abstractcolumn column row this getrowdata this view translate row html sprintf s row translation gettranslator some custom formatting return html the setting works fine the translation works fine but though everything worked fine before i now get the following error could not save the datagrid cache does the directory home zend workspaces data zfcdatagrid exists and is writeable this actually makes no sense right maybe the problem is caused by some kind of overhead or conflict with the view attribute but i couldn t find any conflict inside the formatter 
again this is my first di attempt please tell me if my approach is wrong thanks
| 1
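The ZfcDatagrid record above uses a service-manager factory to build a formatter and inject the view renderer into it. A minimal transliteration of that factory-injection pattern to Python, assuming stand-in `View` and `ContractStateFormatter` classes (the real ones are PHP):

```python
# Hypothetical Python model of the factory pattern from the report:
# a factory builds the formatter and injects its view dependency.
class View:
    def translate(self, key: str) -> str:
        return key.upper()  # stand-in for a real translator

class ContractStateFormatter:
    def __init__(self) -> None:
        self.view = None

    def set_view(self, view: View) -> None:
        self.view = view

    def formatted(self, value: str) -> str:
        return f'<span class="state_name">{self.view.translate(value)}</span>'

def contract_state_factory(services: dict) -> ContractStateFormatter:
    fmt = ContractStateFormatter()
    fmt.set_view(services["ViewRenderer"])  # dependency injected here
    return fmt

fmt = contract_state_factory({"ViewRenderer": View()})
print(fmt.formatted("active"))  # <span class="state_name">ACTIVE</span>
```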
|
114,051
| 24,536,725,494
|
IssuesEvent
|
2022-10-11 21:32:43
|
quiqueck/BCLib
|
https://api.github.com/repos/quiqueck/BCLib
|
closed
|
[Bug] Double plants don't have drop for top block
|
🔥 bug 🎉 Dev Code
|
### What happened?
BaseDoublePlantBlock don't have a drop list for top block:
```java
@Override
public List<ItemStack> getDrops(BlockState state, LootContext.Builder builder) {
if (state.getValue(TOP)) {
return Lists.newArrayList();
}
ItemStack tool = builder.getParameter(LootContextParams.TOOL);
if (tool != null && BaseShearsItem.isShear(tool) || EnchantmentHelper.getItemEnchantmentLevel(
Enchantments.SILK_TOUCH,
tool
) > 0) {
return Lists.newArrayList(new ItemStack(this));
} else {
return Lists.newArrayList();
}
}
```
As a result if you break top block (even with shears) - it will not drop anything.
This condition should be removed:
```java
if (state.getValue(TOP)) {
return Lists.newArrayList();
}
```
### BCLib
2.1.0
### Fabric API
0.60.0
### Fabric Loader
0.14.9
### Minecraft
1.19.1
### Relevant log output
_No response_
### Other Mods
_No response_
|
1.0
|
[Bug] Double plants don't have drop for top block - ### What happened?
BaseDoublePlantBlock don't have a drop list for top block:
```java
@Override
public List<ItemStack> getDrops(BlockState state, LootContext.Builder builder) {
if (state.getValue(TOP)) {
return Lists.newArrayList();
}
ItemStack tool = builder.getParameter(LootContextParams.TOOL);
if (tool != null && BaseShearsItem.isShear(tool) || EnchantmentHelper.getItemEnchantmentLevel(
Enchantments.SILK_TOUCH,
tool
) > 0) {
return Lists.newArrayList(new ItemStack(this));
} else {
return Lists.newArrayList();
}
}
```
As a result if you break top block (even with shears) - it will not drop anything.
This condition should be removed:
```java
if (state.getValue(TOP)) {
return Lists.newArrayList();
}
```
### BCLib
2.1.0
### Fabric API
0.60.0
### Fabric Loader
0.14.9
### Minecraft
1.19.1
### Relevant log output
_No response_
### Other Mods
_No response_
|
non_test
|
double plants don t have drop for top block what happened basedoubleplantblock don t have a drop list for top block java override public list getdrops blockstate state lootcontext builder builder if state getvalue top return lists newarraylist itemstack tool builder getparameter lootcontextparams tool if tool null baseshearsitem isshear tool enchantmenthelper getitemenchantmentlevel enchantments silk touch tool return lists newarraylist new itemstack this else return lists newarraylist as a result if you break top block even with shears it will not drop anything this condition should be removed java if state getvalue top return lists newarraylist bclib fabric api fabric loader minecraft relevant log output no response other mods no response
| 0
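The BCLib record above pins the bug on an early-exit for the TOP half of the plant before the shears check runs. A hedged Python model of that drop logic (the real code is Java; names and the `guard_top` toggle are illustrative only):

```python
# Buggy behavior: when the TOP guard fires first, even a shears break
# drops nothing. Removing the guard (guard_top=False) restores the drop.
def get_drops(is_top: bool, has_shears: bool, guard_top: bool = True) -> list:
    if guard_top and is_top:
        return []  # the early-exit the issue asks to remove
    return ["double_plant"] if has_shears else []

print(get_drops(is_top=True, has_shears=True))                   # [] (bug)
print(get_drops(is_top=True, has_shears=True, guard_top=False))  # ['double_plant']
```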
|
57,490
| 3,082,696,986
|
IssuesEvent
|
2015-08-24 00:11:25
|
magro/memcached-session-manager
|
https://api.github.com/repos/magro/memcached-session-manager
|
closed
|
When the jvmRoute contains a dash (-) sessions are not saved in memcached
|
bug imported Milestone-1.4.1 Priority-Medium
|
_From [rainer.j...@kippdata.de](https://code.google.com/u/102517300929192813948/) on March 16, 2011 16:59:57_
SessionIdFormat uses a pattern "[^-.]+-[^.]+(\\.[\\w-]+)?" to test for valid session ids.
I had a memcached called "n1" and a jvmRoute tc7-a (and tc7-b) which leads to session ids like ...-n1.tc7-a. Unfortunately "tc7-a" does not match "\w+". Thus SessionIdFormat.isValid() fails in BackupSesionService and no data is being sent to Memcached.
I think there is no need to restrict the jvmRoute pattern, so instead of
[\\w-]+
you could also use ".+". Everything after the dot should belong to the jvmRoute.
Regards,
Rainer
_Original issue: http://code.google.com/p/memcached-session-manager/issues/detail?id=90_
|
1.0
|
When the jvmRoute contains a dash (-) sessions are not saved in memcached - _From [rainer.j...@kippdata.de](https://code.google.com/u/102517300929192813948/) on March 16, 2011 16:59:57_
SessionIdFormat uses a pattern "[^-.]+-[^.]+(\\.[\\w-]+)?" to test for valid session ids.
I had a memcached called "n1" and a jvmRoute tc7-a (and tc7-b) which leads to session ids like ...-n1.tc7-a. Unfortunately "tc7-a" does not match "\w+". Thus SessionIdFormat.isValid() fails in BackupSesionService and no data is being sent to Memcached.
I think there is no need to restrict the jvmRoute pattern, so instead of
[\\w-]+
you could also use ".+". Everything after the dot should belong to the jvmRoute.
Regards,
Rainer
_Original issue: http://code.google.com/p/memcached-session-manager/issues/detail?id=90_
|
non_test
|
when the jvmroute contains a dash sessions are not saved in memcached from on march sessionidformat uses a pattern to test for valid session ids i had a memcached called and a jvmroute a and b which leads to session ids like a unfortunately a does not match w thus sessionidformat isvalid fails in backupsesionservice and no data is being sent to memcached i think there is no need to restrict the jvmroute pattern so instead of you could also use everything after the dot should belong to the jvmroute regards rainer original issue
| 0
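The memcached-session-manager record above reports that a jvmRoute suffix restricted to `\w+` rejects routes containing a dash. A small illustration of the strict pattern failing and the relaxed suffix the report proposes; the sample session id is made up for the demo:

```python
import re

# Strict: suffix after the dot limited to \w+ (no dash allowed).
strict = re.compile(r"[^-.]+-[^.]+(\.\w+)?")
# Relaxed: everything after the dot belongs to the jvmRoute, per the report.
relaxed = re.compile(r"[^-.]+-[^.]+(\..+)?")

print(bool(strict.fullmatch("abcd1234-n1.tc7")))     # True  (dash-free route)
print(bool(strict.fullmatch("abcd1234-n1.tc7-a")))   # False (dash rejected)
print(bool(relaxed.fullmatch("abcd1234-n1.tc7-a")))  # True
```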
|
196,613
| 14,881,729,116
|
IssuesEvent
|
2021-01-20 10:53:24
|
rancher/harvester
|
https://api.github.com/repos/rancher/harvester
|
closed
|
[BUG] UI incorrectly states that data will be deleted from the volume when a data disk is removed from a VM
|
area/ui bug to-test
|
**Describe the bug**
If you attach and mount a volume to a VM and then put data on it, when you delete that volume from a VM, you get a message stating that the data on the volume will be removed. However, if you mount that volume on a different VM, that data is available (as I would have expected it to be).
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Volumes and create a new volume
2. Go to Virtual Machines and create a new VM
3. Under Volumes in the Create VM UI, add an existing Volume
4. Create the VM and login via the console
5. Make a new directory to house the mount: `sudo mkdir /data`
6. Create a partition on the drive (assuming volume is at `/dev/vdb`): `sudo fdisk /dev/vdb`
7. Create a filesystem on the partition: `sudo mkfs.ext4 /dev/vdb1`
8. Mount the filesystem: `sudo mount /dev/vdb1 /data`
9. Add some data to the volume: `sudo touch /data/test.md`
10. Then in the Harvester UI again, go to Virtual Machines > ... > Edit as Form > Volumes
11. Click the `X` on the data volume to remove it.
12. You are presented with the following popup:

Note: there are two things wrong here. One is the fact that the data won't be removed from the volume when you hit OK. And two, the modal has white text on a white background, so it is invisible.

13. Now create a new Virtual Machine and mount the volume you removed from the first VM by following steps 1-8.
14. Validate that the data on the volume that was written from the first VM is actually still on the volume when mounted to the second VM.
**Expected behavior**
I would expect this behavior as I would expect to be able to mount a volume, put data on it and then disconnect and mount to a different VM and that data to exist. However, the popup message makes it look like this is not how the functionality will work, even though it does work that way. In production, this would make everyone click "No" and be stuck if they needed to do this operation.
Note: I am currently using a 1 host cluster. It is possible this functionality does not work if working across hosts. I need to add another host to my setup to test this.
**Log**
I don't have them, but everything functioned as expected from a logs/events perspective.
**Environment:**
- Harvester ISO version: v0.1.0
- Installation Mode: ISO/app-mode
- Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630): Bare Metal
**Additional context**
Nothing that I can think of...
|
1.0
|
[BUG] UI incorrectly states that data will be deleted from the volume when a data disk is removed from a VM - **Describe the bug**
If you attach and mount a volume to a VM and then put data on it, when you delete that volume from a VM, you get a message stating that the data on the volume will be removed. However, if you mount that volume on a different VM, that data is available (as I would have expected it to be).
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Volumes and create a new volume
2. Go to Virtual Machines and create a new VM
3. Under Volumes in the Create VM UI, add an existing Volume
4. Create the VM and login via the console
5. Make a new directory to house the mount: `sudo mkdir /data`
6. Create a partition on the drive (assuming volume is at `/dev/vdb`): `sudo fdisk /dev/vdb`
7. Create a filesystem on the partition: `sudo mkfs.ext4 /dev/vdb1`
8. Mount the filesystem: `sudo mount /dev/vdb1 /data`
9. Add some data to the volume: `sudo touch /data/test.md`
10. Then in the Harvester UI again, go to Virtual Machines > ... > Edit as Form > Volumes
11. Click the `X` on the data volume to remove it.
12. You are presented with the following popup:

Note: there are two things wrong here. One is the fact that the data won't be removed from the volume when you hit OK. And two, the modal has white text on a white background, so it is invisible.

13. Now create a new Virtual Machine and mount the volume you removed from the first VM by following steps 1-8.
14. Validate that the data on the volume that was written from the first VM is actually still on the volume when mounted to the second VM.
**Expected behavior**
I would expect this behavior as I would expect to be able to mount a volume, put data on it and then disconnect and mount to a different VM and that data to exist. However, the popup message makes it look like this is not how the functionality will work, even though it does work that way. In production, this would make everyone click "No" and be stuck if they needed to do this operation.
Note: I am currently using a 1 host cluster. It is possible this functionality does not work if working across hosts. I need to add another host to my setup to test this.
**Log**
I don't have them, but everything functioned as expected from a logs/events perspective.
**Environment:**
- Harvester ISO version: v0.1.0
- Installation Mode: ISO/app-mode
- Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630): Bare Metal
**Additional context**
Nothing that I can think of...
|
test
|
ui incorrectly states that data will be deleted from the volume when a data disk is removed from a vm describe the bug if you attach and mount a volume to a vm and then put data on it when you delete that volume from a vm you get a message stating that the data on the volume will be removed however if you mount that volume on a different vm that data is available as i would have expected it to be to reproduce steps to reproduce the behavior go to volumes and create a new volume go to virtual machines and create a new vm under volumes in the create vm ui add an existing volume create the vm and login via the console make a new directory to house the mount sudo mkdir data create a partition on the drive assuming volume is at dev vdb sudo fdisk dev vdb create a filesystem on the partition sudo mkfs dev mount the filesystem sudo mount dev data add some data to the volume sudo touch data test md then in the harvester ui again go to virtual machines edit as form volumes click the x on the data volume to remove it you are presented with the following popup note there are two things wrong here one is the fact that the data won t be removed from the volume when you hit ok and two the modal has white text on a white background so it i invisible now create a new virtual machine and mount the volume you removed from the first vm by following steps validate that the data on the volume that was written from the first vm is actually still on the volume when mounted to the second vm expected behavior i would expect this behavior as i would expect to be able to mount a volume put data on it and then disconnect and mount to a different vm and that data to exist however the popup message makes it look like this is not how the functionality will work even though it does work that way in production this would make everyone click no and be stuck if they needed to do this operation note i am currently using a host cluster it is possible this functionality does not work if working across 
hosts i need to add another host to my setup to test this log i don t have them but everything functioned as expected from a logs events perspective environment harvester iso version installation mode iso app mode underlying infrastructure e g baremetal with dell poweredge bare metal additional context nothing that i can think of
| 1
|
735,322
| 25,389,446,681
|
IssuesEvent
|
2022-11-22 01:58:36
|
tomm3hgunn/Jayhawk-Go
|
https://api.github.com/repos/tomm3hgunn/Jayhawk-Go
|
closed
|
Display Moneyline in matches.html
|
feature high priority
|
Similar to the currently implemented Spreads Pane, display the necessary data for Moneyline in the Moneyline pane. Inside the matches.html, look for the comment PANE 1 to see how the Spreads data was displayed. Do your work under the PANE 3 comment. The format of the row may need to be modified as the Moneyline data may need to be presented differently compared to Spreads.
Display all the data and columns for moneyline as seen in http://127.0.0.1:8000/apiDisplay/moneyline/

Recommended extension: Better comments (used to see different colored comments when starting with symbols !, ?, *, TODO)

What should be changed in http://127.0.0.1:8000/oddsAndEvents/matches:

Primary files to reference/work on:
/Jayhawk-Go/goforless/apiDisplay/moneyline.html
/Jayhawk-Go/goforless/apiDisplay/views.py
/Jayhawk-Go/goforless/oddsAndEvents/sportz/matches.html
/Jayhawk-Go/goforless/oddsAndEvents/views.py
|
1.0
|
Display Moneyline in matches.html - Similar to the currently implemented Spreads Pane, display the necessary data for Moneyline in the Moneyline pane. Inside the matches.html, look for the comment PANE 1 to see how the Spreads data was displayed. Do your work under the PANE 3 comment. The format of the row may need to be modified as the Moneyline data may need to be presented differently compared to Spreads.
Display all the data and columns for moneyline as seen in http://127.0.0.1:8000/apiDisplay/moneyline/

Recommended extension: Better comments (used to see different colored comments when starting with symbols !, ?, *, TODO)

What should be changed in http://127.0.0.1:8000/oddsAndEvents/matches:

Primary files to reference/work on:
/Jayhawk-Go/goforless/apiDisplay/moneyline.html
/Jayhawk-Go/goforless/apiDisplay/views.py
/Jayhawk-Go/goforless/oddsAndEvents/sportz/matches.html
/Jayhawk-Go/goforless/oddsAndEvents/views.py
|
non_test
|
display moneyline in matches html similar to the currently implemented spreads pane display the necessary data for moneyline in the moneyline pane inside the matches html look for the comment pane to see how the spreads data was displayed do your work under the pane comment the format of the row may need to be modified as the moneyline data may need to be presented differently compared to spreads display all the data and columns for moneyline as seen in recommended extension better comments used to see different colored comments when starting with symbols todo what should be changed in primary files to reference work on jayhawk go goforless apidisplay moneyline html jayhawk go goforless apidisplay views py jayhawk go goforless oddsandevents sportz matches html jayhawk go goforless oddsandevents views py
| 0
|
236,702
| 26,046,799,999
|
IssuesEvent
|
2022-12-22 15:02:59
|
Gal-Doron/private-gradle-github
|
https://api.github.com/repos/Gal-Doron/private-gradle-github
|
opened
|
CVE-2021-44228 (High) detected in log4j-core-2.13.1.jar
|
security vulnerability
|
## CVE-2021-44228 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-core-2.13.1.jar</b></p></summary>
<p>The Apache Log4j Implementation</p>
<p>Library home page: <a href="https://logging.apache.org/log4j/2.x/">https://logging.apache.org/log4j/2.x/</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.logging.log4j/log4j-core/2.13.1/533f6ae0bb0ce091493f2eeab0c1df4327e46ef1/log4j-core-2.13.1.jar</p>
<p>
Dependency Hierarchy:
- angie-galparty-1.0.jar (Root Library)
- :x: **log4j-core-2.13.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/private-gradle-github/commit/c113c813ec855450de4e49bd5da3e0d6c77d087f">c113c813ec855450de4e49bd5da3e0d6c77d087f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Log4j2 2.0-beta9 through 2.15.0 (excluding security releases 2.12.2, 2.12.3, and 2.3.1) JNDI features used in configuration, log messages, and parameters do not protect against attacker controlled LDAP and other JNDI related endpoints. An attacker who can control log messages or log message parameters can execute arbitrary code loaded from LDAP servers when message lookup substitution is enabled. From log4j 2.15.0, this behavior has been disabled by default. From version 2.16.0 (along with 2.12.2, 2.12.3, and 2.3.1), this functionality has been completely removed. Note that this vulnerability is specific to log4j-core and does not affect log4net, log4cxx, or other Apache Logging Services projects.
<p>Publish Date: 2021-12-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-44228>CVE-2021-44228</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>10.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://logging.apache.org/log4j/2.x/security.html">https://logging.apache.org/log4j/2.x/security.html</a></p>
<p>Release Date: 2021-12-10</p>
<p>Fix Resolution: org.apache.logging.log4j:log4j-core:2.3.1,2.12.2,2.15.0;org.ops4j.pax.logging:pax-logging-log4j2:1.11.10,2.0.11</p>
</p>
</details>
<p></p>
|
True
|
CVE-2021-44228 (High) detected in log4j-core-2.13.1.jar - ## CVE-2021-44228 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-core-2.13.1.jar</b></p></summary>
<p>The Apache Log4j Implementation</p>
<p>Library home page: <a href="https://logging.apache.org/log4j/2.x/">https://logging.apache.org/log4j/2.x/</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.logging.log4j/log4j-core/2.13.1/533f6ae0bb0ce091493f2eeab0c1df4327e46ef1/log4j-core-2.13.1.jar</p>
<p>
Dependency Hierarchy:
- angie-galparty-1.0.jar (Root Library)
- :x: **log4j-core-2.13.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/private-gradle-github/commit/c113c813ec855450de4e49bd5da3e0d6c77d087f">c113c813ec855450de4e49bd5da3e0d6c77d087f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Log4j2 2.0-beta9 through 2.15.0 (excluding security releases 2.12.2, 2.12.3, and 2.3.1) JNDI features used in configuration, log messages, and parameters do not protect against attacker controlled LDAP and other JNDI related endpoints. An attacker who can control log messages or log message parameters can execute arbitrary code loaded from LDAP servers when message lookup substitution is enabled. From log4j 2.15.0, this behavior has been disabled by default. From version 2.16.0 (along with 2.12.2, 2.12.3, and 2.3.1), this functionality has been completely removed. Note that this vulnerability is specific to log4j-core and does not affect log4net, log4cxx, or other Apache Logging Services projects.
<p>Publish Date: 2021-12-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-44228>CVE-2021-44228</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>10.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://logging.apache.org/log4j/2.x/security.html">https://logging.apache.org/log4j/2.x/security.html</a></p>
<p>Release Date: 2021-12-10</p>
<p>Fix Resolution: org.apache.logging.log4j:log4j-core:2.3.1,2.12.2,2.15.0;org.ops4j.pax.logging:pax-logging-log4j2:1.11.10,2.0.11</p>
</p>
</details>
<p></p>
|
non_test
|
cve high detected in core jar cve high severity vulnerability vulnerable library core jar the apache implementation library home page a href path to dependency file build gradle path to vulnerable library home wss scanner gradle caches modules files org apache logging core core jar dependency hierarchy angie galparty jar root library x core jar vulnerable library found in head commit a href found in base branch main vulnerability details apache through excluding security releases and jndi features used in configuration log messages and parameters do not protect against attacker controlled ldap and other jndi related endpoints an attacker who can control log messages or log message parameters can execute arbitrary code loaded from ldap servers when message lookup substitution is enabled from this behavior has been disabled by default from version along with and this functionality has been completely removed note that this vulnerability is specific to core and does not affect or other apache logging services projects publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache logging core org pax logging pax logging
| 0
|
123,607
| 10,276,745,218
|
IssuesEvent
|
2019-08-24 20:16:48
|
wasmjit-omr/wasmjit-omr
|
https://api.github.com/repos/wasmjit-omr/wasmjit-omr
|
opened
|
Find a better way to prevent JIT compilation of certain functions
|
testing
|
Currently, there are several tests that need to ensure that certain functions are run in the interpreter, e.g. the stack trace tests. Currently, these tests make use of the unsupported `memory.size` opcode to cause the function to not be JIT compiled. Unfortunately, this solution will only work until we run out of opcodes that the JIT does not support. A more sustainable solution is needed to ensure that these tests continue to work in the future.
|
1.0
|
Find a better way to prevent JIT compilation of certain functions - Currently, there are several tests that need to ensure that certain functions are run in the interpreter, e.g. the stack trace tests. Currently, these tests make use of the unsupported `memory.size` opcode to cause the function to not be JIT compiled. Unfortunately, this solution will only work until we run out of opcodes that the JIT does not support. A more sustainable solution is needed to ensure that these tests continue to work in the future.
|
test
|
find a better way to prevent jit compilation of certain functions currently there are several tests that need to ensure that certain functions are run in the interpreter e g the stack trace tests currently these tests make use of the unsupported memory size opcode to cause the function to not be jit compiled unfortunately this solution will only work until we run out of opcodes that the jit does not support a more sustainable solution is needed to ensure that these tests continue to work in the future
| 1
|
15,795
| 3,483,013,999
|
IssuesEvent
|
2015-12-30 07:12:34
|
sadikovi/octohaven
|
https://api.github.com/repos/sadikovi/octohaven
|
closed
|
[OCTO-46] Scheduling more items than half of the pool size
|
bug test
|
Scheduler bug when scheduling more items than half of the pool size results in fetching the same items, therefore, others are not updated properly. For example,
```
job1 -> CREATED
job2 -> CREATED
job3 -> CREATED
job4 -> CREATED
```
After fetching two jobs, you will be fetching those 2 jobs only, so the rest will not be updated, assuming pool size is 4.
```
job1 -> WAITING
job2 -> WAITING
job3 -> CREATED
job4 -> CREATED
```
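A minimal sketch of the suspected behaviour and a possible fix, assuming the scheduler always pulls the first `pool_size // 2` items each tick (names here are illustrative, not from the actual octohaven code):

```python
# Illustrative model of the scheduling bug: fetching a fixed leading
# slice starves every job beyond the first half of the pool.
def fetch_fixed(jobs, batch):
    # Always returns the same leading slice -> job3/job4 never update.
    return jobs[:batch]

def fetch_rotating(jobs, batch, cursor):
    # Rotate the starting offset so every job is eventually fetched.
    picked = [jobs[(cursor + i) % len(jobs)] for i in range(batch)]
    return picked, (cursor + batch) % len(jobs)

jobs = ["job1", "job2", "job3", "job4"]
cursor = 0
seen = set()
for _ in range(2):  # two scheduler ticks, batch = pool_size // 2
    picked, cursor = fetch_rotating(jobs, 2, cursor)
    seen.update(picked)
# After two ticks every job has been fetched exactly once.
```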
|
1.0
|
[OCTO-46] Scheduling more items than half of the pool size - Scheduler bug when scheduling more items than half of the pool size results in fetching the same items, therefore, others are not updated properly. For example,
```
job1 -> CREATED
job2 -> CREATED
job3 -> CREATED
job4 -> CREATED
```
After fetching two jobs, you will be fetching those 2 jobs only, so the rest will not be updated, assuming pool size is 4.
```
job1 -> WAITING
job2 -> WAITING
job3 -> CREATED
job4 -> CREATED
```
|
test
|
scheduling more items than half of the pool size scheduler bug when scheduling more items than half of the pool size results in fetching the same items therefore others are not updated properly for example created created created created after fetching two jobs you will be fetching those jobs only so the rest will not be updated assuming pool size is waiting waiting created created
| 1
|
14,929
| 3,436,348,400
|
IssuesEvent
|
2015-12-12 09:32:23
|
akvo/akvo-caddisfly
|
https://api.github.com/repos/akvo/akvo-caddisfly
|
closed
|
Use 'level' quality parameter
|
Strip test
|
to make sure the calibration card image is straight enough, we can use a 'level' quality parameter.

Alternatively, we could show a 'level' indicator with an arrow: the arrow shown is the direction in which the user needs to move the camera.
The arrow to show can be determined by looking at the relative distances of the finder patterns. If the camera is held at an angle, the 'nearby' finder patterns will have a larger distance between them. From this, the location of the camera, and the needed adjustment, can be calculated.
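A rough sketch of that distance check, assuming the three finder-pattern centres are available as (x, y) points and form a square when the card is face-on (the function name and tolerance are illustrative):

```python
import math

def is_level(top_left, top_right, bottom_left, tolerance=0.05):
    """Compare the two card edges that meet at the top-left pattern.

    Face-on, both edges project to equal lengths; tilting the camera
    makes the nearer pair of finder patterns look farther apart.
    """
    top = math.hypot(top_right[0] - top_left[0], top_right[1] - top_left[1])
    left = math.hypot(bottom_left[0] - top_left[0], bottom_left[1] - top_left[1])
    return abs(top - left) / max(top, left) <= tolerance

# Face-on square card: both edges measure 100 px -> level.
level = is_level((0, 0), (100, 0), (0, 100))
```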
|
1.0
|
Use 'level' quality parameter - to make sure the calibration card image is straight enough, we can use a 'level' quality parameter.

Alternatively, we could show a 'level' indicator with an arrow: the arrow shown is the direction in which the user needs to move the camera.
The arrow to show can be determined by looking at the relative distances of the finder patterns. If the camera is held at an angle, the 'nearby' finder patterns will have a larger distance between them. From this, the location of the camera, and the needed adjustment, can be calculated.
|
test
|
use level quality parameter to make sure the calibration card image is straight enough we can use a level quality parameter alternatively we could show a level indicator with an arrow the arrow shown is the direction in which the user needs to move the camera the arrow to show can be determined by looking at the relative distances of the finder patterns if the camera is held at an angle the nearby finder patterns will have a larger distance between them from this the location of the camera and the needed adjustment can be calculated
| 1
|
165,734
| 20,617,066,761
|
IssuesEvent
|
2022-03-07 14:12:52
|
ioana-nicolae/terraform-tests
|
https://api.github.com/repos/ioana-nicolae/terraform-tests
|
opened
|
CVE-2021-45105 (Medium) detected in log4j-core-2.12.1.jar
|
security vulnerability
|
## CVE-2021-45105 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-core-2.12.1.jar</b></p></summary>
<p>The Apache Log4j Implementation</p>
<p>Library home page: <a href="https://logging.apache.org/log4j/2.x/">https://logging.apache.org/log4j/2.x/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /itory/org/apache/logging/log4j/log4j-core/2.12.1/log4j-core-2.12.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **log4j-core-2.12.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ioana-nicolae/terraform-tests/commit/8c4af9f8dd7d196227ddb35149342926543e9b8a">8c4af9f8dd7d196227ddb35149342926543e9b8a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Log4j2 versions 2.0-alpha1 through 2.16.0 (excluding 2.12.3 and 2.3.1) did not protect from uncontrolled recursion from self-referential lookups. This allows an attacker with control over Thread Context Map data to cause a denial of service when a crafted string is interpreted. This issue was fixed in Log4j 2.17.0, 2.12.3, and 2.3.1.
<p>Publish Date: 2021-12-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-45105>CVE-2021-45105</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://logging.apache.org/log4j/2.x/security.html">https://logging.apache.org/log4j/2.x/security.html</a></p>
<p>Release Date: 2021-12-18</p>
<p>Fix Resolution: org.apache.logging.log4j:log4j-core:2.3.1,2.12.3,2.17.0;org.ops4j.pax.logging:pax-logging-log4j2:1.11.10,2.0.11</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.logging.log4j","packageName":"log4j-core","packageVersion":"2.12.1","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.logging.log4j:log4j-core:2.12.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.logging.log4j:log4j-core:2.3.1,2.12.3,2.17.0;org.ops4j.pax.logging:pax-logging-log4j2:1.11.10,2.0.11","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-45105","vulnerabilityDetails":"Apache Log4j2 versions 2.0-alpha1 through 2.16.0 (excluding 2.12.3 and 2.3.1) did not protect from uncontrolled recursion from self-referential lookups. This allows an attacker with control over Thread Context Map data to cause a denial of service when a crafted string is interpreted. This issue was fixed in Log4j 2.17.0, 2.12.3, and 2.3.1.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-45105","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-45105 (Medium) detected in log4j-core-2.12.1.jar - ## CVE-2021-45105 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-core-2.12.1.jar</b></p></summary>
<p>The Apache Log4j Implementation</p>
<p>Library home page: <a href="https://logging.apache.org/log4j/2.x/">https://logging.apache.org/log4j/2.x/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /itory/org/apache/logging/log4j/log4j-core/2.12.1/log4j-core-2.12.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **log4j-core-2.12.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ioana-nicolae/terraform-tests/commit/8c4af9f8dd7d196227ddb35149342926543e9b8a">8c4af9f8dd7d196227ddb35149342926543e9b8a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Log4j2 versions 2.0-alpha1 through 2.16.0 (excluding 2.12.3 and 2.3.1) did not protect from uncontrolled recursion from self-referential lookups. This allows an attacker with control over Thread Context Map data to cause a denial of service when a crafted string is interpreted. This issue was fixed in Log4j 2.17.0, 2.12.3, and 2.3.1.
<p>Publish Date: 2021-12-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-45105>CVE-2021-45105</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://logging.apache.org/log4j/2.x/security.html">https://logging.apache.org/log4j/2.x/security.html</a></p>
<p>Release Date: 2021-12-18</p>
<p>Fix Resolution: org.apache.logging.log4j:log4j-core:2.3.1,2.12.3,2.17.0;org.ops4j.pax.logging:pax-logging-log4j2:1.11.10,2.0.11</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.logging.log4j","packageName":"log4j-core","packageVersion":"2.12.1","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.logging.log4j:log4j-core:2.12.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.logging.log4j:log4j-core:2.3.1,2.12.3,2.17.0;org.ops4j.pax.logging:pax-logging-log4j2:1.11.10,2.0.11","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-45105","vulnerabilityDetails":"Apache Log4j2 versions 2.0-alpha1 through 2.16.0 (excluding 2.12.3 and 2.3.1) did not protect from uncontrolled recursion from self-referential lookups. This allows an attacker with control over Thread Context Map data to cause a denial of service when a crafted string is interpreted. This issue was fixed in Log4j 2.17.0, 2.12.3, and 2.3.1.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-45105","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve medium detected in core jar cve medium severity vulnerability vulnerable library core jar the apache implementation library home page a href path to dependency file pom xml path to vulnerable library itory org apache logging core core jar dependency hierarchy x core jar vulnerable library found in head commit a href found in base branch master vulnerability details apache versions through excluding and did not protect from uncontrolled recursion from self referential lookups this allows an attacker with control over thread context map data to cause a denial of service when a crafted string is interpreted this issue was fixed in and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache logging core org pax logging pax logging isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org apache logging core isminimumfixversionavailable true minimumfixversion org apache logging core org pax logging pax logging isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails apache versions through excluding and did not protect from uncontrolled recursion from self referential lookups this allows an attacker with control over thread context map data to cause a denial of service when a crafted string is interpreted this issue was fixed in and vulnerabilityurl
| 0
|
154,100
| 12,193,066,618
|
IssuesEvent
|
2020-04-29 13:53:32
|
bitcoin/bitcoin
|
https://api.github.com/repos/bitcoin/bitcoin
|
opened
|
Run functional tests from make check
|
Brainstorming Feature Tests
|
We have a bunch of tests:
* unit tests
* util tests written in python
* bench runner
* subtree tests
All of them are run when you type `make check`.
However, for running the functional tests, one has to type `./test/functional/test_runner.py`
I think some people asked for the functional tests to be run as part of `make check`, or maybe a different target like `make check-functional`?
|
1.0
|
Run functional tests from make check - We have a bunch of tests:
* unit tests
* util tests written in python
* bench runner
* subtree tests
All of them are run when you type `make check`.
However, for running the functional tests, one has to type `./test/functional/test_runner.py`
I think some people asked for the functional tests to be run as part of `make check`, or maybe a different target like `make check-functional`?
|
test
|
run functional tests from make check we have a bunch of tests unit tests util tests written in python bench runner subtree tests all of them are run when you type make check however for running the functional tests one has to type test functional test runner py i think some people asked for the functional tests to be run as part of make check or maybe a different target like make check functional
| 1
|
317,989
| 27,276,938,344
|
IssuesEvent
|
2023-02-23 06:33:16
|
TencentBlueKing/bk-ci
|
https://api.github.com/repos/TencentBlueKing/bk-ci
|
closed
|
[R&D Store] When a plugin's private configuration field value is displayed in plaintext, focusing the edit box should not clear it
|
for gray for test kind/enhancement tested kind/version/sample streams/tested streams/for test streams/for gray streams/done sample/passed
|
When a plugin's private configuration field value is displayed in plaintext, focusing the edit box should not clear it, making it easier for users to modify the configuration or copy the content
<img width="400" alt="image" src="https://user-images.githubusercontent.com/54432927/215701103-e122b895-4f57-4307-b993-84f6286ad393.png">
|
4.0
|
[R&D Store] When a plugin's private configuration field value is displayed in plaintext, focusing the edit box should not clear it - When a plugin's private configuration field value is displayed in plaintext, focusing the edit box should not clear it, making it easier for users to modify the configuration or copy the content
<img width="400" alt="image" src="https://user-images.githubusercontent.com/54432927/215701103-e122b895-4f57-4307-b993-84f6286ad393.png">
|
test
|
r d store when a plugin s private configuration field value is displayed in plaintext focusing the edit box should not clear it when a plugin s private configuration field value is displayed in plaintext focusing the edit box should not clear it making it easier for users to modify the configuration or copy the content img width alt image src
| 1
|
334,808
| 29,992,362,854
|
IssuesEvent
|
2023-06-26 00:07:16
|
flojoy-io/studio
|
https://api.github.com/repos/flojoy-io/studio
|
closed
|
open & run every example app in the `apps` repo. validate that run results display in the CTRL panel
|
CI & test automation
|
To Do:
open & run every example app in the `apps` repo. validate that run results display in the CTRL panel
- need more automation
- may need to add mocking, if so - convert to new tasks
|
1.0
|
open & run every example app in the `apps` repo. validate that run results display in the CTRL panel - To Do:
open & run every example app in the `apps` repo. validate that run results display in the CTRL panel
- need more automation
- may need to add mocking, if so - convert to new tasks
|
test
|
open run every example app in the apps repo validate that run results display in the ctrl panel to do open run every example app in the app’s repo validate that run results display in the ctrl panel need more automation may need to add mocking if so convert to new tasks
| 1
|
758,473
| 26,556,911,009
|
IssuesEvent
|
2023-01-20 12:50:27
|
owid/owid-grapher
|
https://api.github.com/repos/owid/owid-grapher
|
closed
|
Project: Port Wordpress pages content to ArchieML JSON
|
site priority 2 - important
|
We want to move fully to the new google docs based authoring flow but we have a lot of content still in Wordpress. This project is about automatically moving wordpress pages into google docs/ArchieML.
A substantial fraction of components that we have in Wordpress do not exist yet in google docs. Part of this project is then to get an understanding of what fraction of the wordpress content we are able to translate yet. At the end of the cycle we can then decide if we want to proceed with another push for automating the remaining conversions or resolve to doing some of them by hand.
The basic approach will be to translate the content field in the posts table in Grapher into a new column that stores json in the same (ArchieML) format as the content of the posts_gdocs table. As part of this statistics will be generated to track which % of posts can be translated already (either fully faithfully or with some missing blocks). When this is done to a satisfactory degree the next step will be to create google docs pages that generate the content from ArchieML (thus adding a reverse mapping from ArchieML json to google doc that might be useful for migrations etc in the future). Finally we will decide what to do with remaining untranslated sections and then turn off baking from wordpress and remove it from our infrastructure.
# Steps
- [x] #1728
- [x] Add new column on posts for ArchieML json and one for statistics (json object with keys for simplicity as this will be temporary)
- [x] Set up translation code that transforms from WP html with wp comments to ArchieML
- [x] Start iterative process of converting more and more blocks - aligning with the project to add more components to google docs authoring
### Details to check
(The remaining open details will be tracked in a new follow-up issue: #1870)
- [x] {ref} can start in a header and continue for a few paragraphs, then end in a {/ref}. This needs to be handled
- [ ] `<!-- formatting-options -->` need to be handled. Possible keys are enumerated in owidTypes.ts
- [ ] divs are currently eliminated. Some might contain class information that is relevant. Try to check if this is causing issues
- [ ] Author information needs to be transferred from wordpress to the grapher posts table so we can use it for the byline
- [x] Check that charts work after migration
- [x] Check that explorers work after migration
- [ ] not all tags are translated yet. These are the things still to do:
- [x] ol
- [ ] pre
|
1.0
|
Project: Port Wordpress pages content to ArchieML JSON - We want to move fully to the new google docs based authoring flow but we have a lot of content still in Wordpress. This project is about automatically moving wordpress pages into google docs/ArchieML.
A substantial fraction of components that we have in Wordpress do not exist yet in google docs. Part of this project is then to get an understanding of what fraction of the wordpress content we are able to translate yet. At the end of the cycle we can then decide if we want to proceed with another push for automating the remaining conversions or resolve to doing some of them by hand.
The basic approach will be to translate the content field in the posts table in Grapher into a new column that stores json in the same (ArchieML) format as the content of the posts_gdocs table. As part of this statistics will be generated to track which % of posts can be translated already (either fully faithfully or with some missing blocks). When this is done to a satisfactory degree the next step will be to create google docs pages that generate the content from ArchieML (thus adding a reverse mapping from ArchieML json to google doc that might be useful for migrations etc in the future). Finally we will decide what to do with remaining untranslated sections and then turn off baking from wordpress and remove it from our infrastructure.
# Steps
- [x] #1728
- [x] Add new column on posts for ArchieML json and one for statistics (json object with keys for simplicity as this will be temporary)
- [x] Set up translation code that transforms from WP html with wp comments to ArchieML
- [x] Start iterative process of converting more and more blocks - aligning with the project to add more components to google docs authoring
### Details to check
(The remaining open details will be tracked in a new follow-up issue: #1870)
- [x] {ref} can start in a header and continue for a few paragraphs, then end in a {/ref}. This needs to be handled
- [ ] `<!-- formatting-options -->` need to be handled. Possible keys are enumerated in owidTypes.ts
- [ ] divs are currently eliminated. Some might contain class information that is relevant. Try to check if this is causing issues
- [ ] Author information needs to be transferred from wordpress to the grapher posts table so we can use it for the byline
- [x] Check that charts work after migration
- [x] Check that explorers work after migration
- [ ] not all tags are translated yet. These are the things still to do:
- [x] ol
- [ ] pre
|
non_test
|
project port wordpress pages content to archieml json we want to move fully to the new google docs based authoring flow but we have a lot of content still in wordpress this project is about automatically moving wordpress pages into google docs archieml a substantial fraction of components that we have in wordpress do not exist yet in google docs part of this project is then to get an understanding of what fraction of the wordpress content we are able to translate yet at the end of the cycle we can then decide if we want to proceed with another push for automating the remaining conversions or resolve to doing some of them by hand the basic approach will be to translate the content field in the posts table in grapher into a new column that stores json in the same archieml format as the content of the posts gdocs table as part of this statistics will be generated to track which of posts can be translated already either fully faithfully or with some missing blocks when this is done to a satisfactory degree the next step will be to create google docs pages that generate the content from archieml thus adding a reverse mapping from archieml json to google doc that might be useful for migrations etc in the future finally we will decide what to do with remaining untranslated sections and then turn off baking from wordpress and remove it from our infrastructure steps add new column on posts for archieml json and one for statistics json object with keys for simplicity as this will be temporary set up translation code that transforms from wp html with wp comments to archieml start iterative process of converting more and more blocks aligning with the project to add more components to google docs authoring details to check the remaining open details will be tracked in a new follow up issue ref can start in a header and continue for a few paragraphs then end in a ref this needs to be handled need to be handled possible keys are enumerated in owidtypes ts divs are currently 
eliminated some might contain class information that is relevant try to check if this is causing issues author information needs to be transferred from wordpress to the grapher posts table so we can use it for the byline check that charts work after migration check that explorers work after migration not all tags are translated yet these are the things still to do ol pre
| 0
|
46,593
| 5,824,148,869
|
IssuesEvent
|
2017-05-07 10:07:33
|
TechnionYP5777/Leonidas-FTW
|
https://api.github.com/repos/TechnionYP5777/Leonidas-FTW
|
closed
|
Find a way to automatically test GUI
|
Quality assurance Testing
|
@AnnaBel7 @amirsagiv83 do you know any convenient ways to do this? Who do you think this issue is most relevant for?
|
1.0
|
Find a way to automatically test GUI - @AnnaBel7 @amirsagiv83 do you know any convenient ways to do this? Who do you think this issue is most relevant for?
|
test
|
find a way to automatically test gui do you know any convenient ways to do this who do you think this issue is most relevant for
| 1
|
300,420
| 25,967,521,597
|
IssuesEvent
|
2022-12-19 08:28:24
|
betagouv/preuve-covoiturage
|
https://api.github.com/repos/betagouv/preuve-covoiturage
|
closed
|
Attestation - typo in the ‘Contributions passager’ table,
|
BUG ATTESTATION Stale
|
I received feedback about the attestations; we let a typo slip through.
In the ‘Contributions passager’ section, the last column of the table shows the ‘Gains’, whereas it is actually the ‘Reste à charge’ (remaining cost).
=> We should therefore rename the ‘gains’ column to ‘coûts’ (costs)?
<img width="809" alt="Capture d’écran 2022-03-17 à 15 05 44" src="https://user-images.githubusercontent.com/89770924/159304740-1b54ced4-18e8-4fb5-bf33-53439b995265.png">
|
1.0
|
Attestation - typo in the ‘Contributions passager’ table, - I received feedback about the attestations; we let a typo slip through.
In the ‘Contributions passager’ section, the last column of the table shows the ‘Gains’, whereas it is actually the ‘Reste à charge’ (remaining cost).
=> We should therefore rename the ‘gains’ column to ‘coûts’ (costs)?
<img width="809" alt="Capture d’écran 2022-03-17 à 15 05 44" src="https://user-images.githubusercontent.com/89770924/159304740-1b54ced4-18e8-4fb5-bf33-53439b995265.png">
|
test
|
attestation typo in the contributions passager table i received feedback about the attestations we let a typo slip through in the contributions passager section the last column of the table shows the gains whereas it is actually the reste à charge remaining cost we should therefore rename the gains column to coûts costs img width alt capture d’écran à src
| 1
|
400,009
| 27,265,488,584
|
IssuesEvent
|
2023-02-22 17:41:44
|
OpenC3/cosmos
|
https://api.github.com/repos/OpenC3/cosmos
|
closed
|
Even when specifying OPENC3_TAG=5.1.1 in .env , the latest beta version is built (OpenC3/COSMOS repo)
|
documentation
|
This is more a cautionary warning than a bug I believe...
I have to use the "OpenC3/cosmos" instead of the recommended "OpenC3/cosmos-project" repo, because I want to create the Ethernet To Serial bridge connection, solved and documented in several other posts at BallAerospace/COSMOS.
When cloning that repo, it's the latest beta version (in lmy case 5.1.2-beta)
Setting OPENC3_TAG=5.1.1 does not prevent building that latest beta version. Is this the desired behavior?
I assume that i must reset the GIT "main" branch to repo to the label "5.1.1" before building OpenC3/cosmos
If this is the desired behavior, I believe it's not a bad idea to add this to the documentation of the Ethernet To Serial connection,
(all useful issues reside at the BallAerospace/COSMOS repo I'm afraid)
Greetings, Jurgen
|
1.0
|
Even when specifying OPENC3_TAG=5.1.1 in .env , the latest beta version is built (OpenC3/COSMOS repo) - This is more a cautionary warning than a bug I believe...
I have to use the "OpenC3/cosmos" instead of the recommended "OpenC3/cosmos-project" repo, because I want to create the Ethernet To Serial bridge connection, solved and documented in several other posts at BallAerospace/COSMOS.
When cloning that repo, it's the latest beta version (in lmy case 5.1.2-beta)
Setting OPENC3_TAG=5.1.1 does not prevent building that latest beta version. Is this the desired behavior?
I assume that i must reset the GIT "main" branch to repo to the label "5.1.1" before building OpenC3/cosmos
If this is the desired behavior, I believe it's not a bad Idea to add this to the documentation of Ethernet To Serial connection,
(all useful issues reside at the BallAerospace/COSMOS repo I 'm afraid)
Greetings, Jurgen
|
non_test
|
even when specifying tag in env the latest beta version is built cosmos repo this is more a cautionary warning than a bug i believe i have to use the cosmos instead of the reccomended cosmos project repo because i want to create the ethernet to serial bridge connection solved and documented in several other posts at ballaerospace cosmos when cloning that repo it s the latest beta version in lmy case beta setting tag does not prevent building that latest beta version is this the desired behavior i assume that i must reset the git main branch to repo to the label before building cosmos if this is the desired behavior i believe it s not a bad idea to add this to the documentation of ethernet to serial connection all useful issues reside at the ballaerospace cosmos repo i m afraid greetings jurgen
| 0
|
22,729
| 4,833,882,240
|
IssuesEvent
|
2016-11-08 12:36:06
|
TalatCikikci/Fall2016Swe573_HealthTracker
|
https://api.github.com/repos/TalatCikikci/Fall2016Swe573_HealthTracker
|
opened
|
Create the Project Plan
|
documentation in progress to-do
|
A project plan needs to be created to track the progress, assess the risks and deliver the project in a structured manner.
|
1.0
|
Create the Project Plan - A project plan needs to be created to track the progress, assess the risks and deliver the project in a structured manner.
|
non_test
|
create the project plan a project plan needs to be created to track the progress assess the risks and deliver the project in a structured manner
| 0
|
43,485
| 5,541,520,190
|
IssuesEvent
|
2017-03-22 13:05:27
|
openbmc/openbmc-test-automation
|
https://api.github.com/repos/openbmc/openbmc-test-automation
|
closed
|
Add support for 'fieldReplaceable' and 'cacheable'
|
Test
|
https://github.com/openbmc/openbmc/issues/1099 - This is equivalent to what deepak had dropped the changes.
Please feel free to estimate.
|
1.0
|
Add support for 'fieldReplaceable' and 'cacheable' - https://github.com/openbmc/openbmc/issues/1099 - This is equivalent to what deepak had dropped the changes.
Please feel free to estimate.
|
test
|
add support for fieldreplaceable and cacheable this is equivalent to what deepak had dropped the changes please feel free to estimate
| 1
|
74,508
| 15,350,273,111
|
IssuesEvent
|
2021-03-01 01:55:49
|
bitbar/test-samples
|
https://api.github.com/repos/bitbar/test-samples
|
closed
|
CVE-2017-5929 (High) detected in logback-core-1.1.7.jar, logback-classic-1.1.7.jar
|
closing security vulnerability
|
## CVE-2017-5929 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>logback-core-1.1.7.jar</b>, <b>logback-classic-1.1.7.jar</b></p></summary>
<p>
<details><summary><b>logback-core-1.1.7.jar</b></p></summary>
<p>logback-core module</p>
<p>Library home page: <a href="http://logback.qos.ch">http://logback.qos.ch</a></p>
<p>Path to dependency file: test-samples/samples/testing-frameworks/appium/server-side/image-recognition/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/ch/qos/logback/logback-core/1.1.7/logback-core-1.1.7.jar,/home/wss-scanner/.m2/repository/ch/qos/logback/logback-core/1.1.7/logback-core-1.1.7.jar</p>
<p>
Dependency Hierarchy:
- logback-classic-1.1.7.jar (Root Library)
- :x: **logback-core-1.1.7.jar** (Vulnerable Library)
</details>
<details><summary><b>logback-classic-1.1.7.jar</b></p></summary>
<p>logback-classic module</p>
<p>Library home page: <a href="http://logback.qos.ch">http://logback.qos.ch</a></p>
<p>Path to dependency file: test-samples/samples/testing-frameworks/appium/client-side/java/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/ch/qos/logback/logback-classic/1.1.7/logback-classic-1.1.7.jar,canner/.m2/repository/ch/qos/logback/logback-classic/1.1.7/logback-classic-1.1.7.jar</p>
<p>
Dependency Hierarchy:
- :x: **logback-classic-1.1.7.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/bitbar/test-samples/commit/12af4f854b64888df6e4492ecc94e141388e939a">12af4f854b64888df6e4492ecc94e141388e939a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
QOS.ch Logback before 1.2.0 has a serialization vulnerability affecting the SocketServer and ServerSocketReceiver components.
<p>Publish Date: 2017-03-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-5929>CVE-2017-5929</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5929">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5929</a></p>
<p>Release Date: 2017-03-13</p>
<p>Fix Resolution: ch.qos.logback:logback-core:1.2.0;ch.qos.logback:logback-access:1.2.0;ch.qos.logback:logback-classic:1.2.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"ch.qos.logback","packageName":"logback-core","packageVersion":"1.1.7","isTransitiveDependency":true,"dependencyTree":"ch.qos.logback:logback-classic:1.1.7;ch.qos.logback:logback-core:1.1.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ch.qos.logback:logback-core:1.2.0;ch.qos.logback:logback-access:1.2.0;ch.qos.logback:logback-classic:1.2.0"},{"packageType":"Java","groupId":"ch.qos.logback","packageName":"logback-classic","packageVersion":"1.1.7","isTransitiveDependency":false,"dependencyTree":"ch.qos.logback:logback-classic:1.1.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ch.qos.logback:logback-core:1.2.0;ch.qos.logback:logback-access:1.2.0;ch.qos.logback:logback-classic:1.2.0"}],"vulnerabilityIdentifier":"CVE-2017-5929","vulnerabilityDetails":"QOS.ch Logback before 1.2.0 has a serialization vulnerability affecting the SocketServer and ServerSocketReceiver components.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-5929","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2017-5929 (High) detected in logback-core-1.1.7.jar, logback-classic-1.1.7.jar - ## CVE-2017-5929 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>logback-core-1.1.7.jar</b>, <b>logback-classic-1.1.7.jar</b></p></summary>
<p>
<details><summary><b>logback-core-1.1.7.jar</b></p></summary>
<p>logback-core module</p>
<p>Library home page: <a href="http://logback.qos.ch">http://logback.qos.ch</a></p>
<p>Path to dependency file: test-samples/samples/testing-frameworks/appium/server-side/image-recognition/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/ch/qos/logback/logback-core/1.1.7/logback-core-1.1.7.jar,/home/wss-scanner/.m2/repository/ch/qos/logback/logback-core/1.1.7/logback-core-1.1.7.jar</p>
<p>
Dependency Hierarchy:
- logback-classic-1.1.7.jar (Root Library)
- :x: **logback-core-1.1.7.jar** (Vulnerable Library)
</details>
<details><summary><b>logback-classic-1.1.7.jar</b></p></summary>
<p>logback-classic module</p>
<p>Library home page: <a href="http://logback.qos.ch">http://logback.qos.ch</a></p>
<p>Path to dependency file: test-samples/samples/testing-frameworks/appium/client-side/java/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/ch/qos/logback/logback-classic/1.1.7/logback-classic-1.1.7.jar,canner/.m2/repository/ch/qos/logback/logback-classic/1.1.7/logback-classic-1.1.7.jar</p>
<p>
Dependency Hierarchy:
- :x: **logback-classic-1.1.7.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/bitbar/test-samples/commit/12af4f854b64888df6e4492ecc94e141388e939a">12af4f854b64888df6e4492ecc94e141388e939a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
QOS.ch Logback before 1.2.0 has a serialization vulnerability affecting the SocketServer and ServerSocketReceiver components.
<p>Publish Date: 2017-03-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-5929>CVE-2017-5929</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5929">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5929</a></p>
<p>Release Date: 2017-03-13</p>
<p>Fix Resolution: ch.qos.logback:logback-core:1.2.0;ch.qos.logback:logback-access:1.2.0;ch.qos.logback:logback-classic:1.2.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"ch.qos.logback","packageName":"logback-core","packageVersion":"1.1.7","isTransitiveDependency":true,"dependencyTree":"ch.qos.logback:logback-classic:1.1.7;ch.qos.logback:logback-core:1.1.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ch.qos.logback:logback-core:1.2.0;ch.qos.logback:logback-access:1.2.0;ch.qos.logback:logback-classic:1.2.0"},{"packageType":"Java","groupId":"ch.qos.logback","packageName":"logback-classic","packageVersion":"1.1.7","isTransitiveDependency":false,"dependencyTree":"ch.qos.logback:logback-classic:1.1.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ch.qos.logback:logback-core:1.2.0;ch.qos.logback:logback-access:1.2.0;ch.qos.logback:logback-classic:1.2.0"}],"vulnerabilityIdentifier":"CVE-2017-5929","vulnerabilityDetails":"QOS.ch Logback before 1.2.0 has a serialization vulnerability affecting the SocketServer and ServerSocketReceiver components.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-5929","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve high detected in logback core jar logback classic jar cve high severity vulnerability vulnerable libraries logback core jar logback classic jar logback core jar logback core module library home page a href path to dependency file test samples samples testing frameworks appium server side image recognition pom xml path to vulnerable library home wss scanner repository ch qos logback logback core logback core jar home wss scanner repository ch qos logback logback core logback core jar dependency hierarchy logback classic jar root library x logback core jar vulnerable library logback classic jar logback classic module library home page a href path to dependency file test samples samples testing frameworks appium client side java pom xml path to vulnerable library canner repository ch qos logback logback classic logback classic jar canner repository ch qos logback logback classic logback classic jar dependency hierarchy x logback classic jar vulnerable library found in head commit a href found in base branch master vulnerability details qos ch logback before has a serialization vulnerability affecting the socketserver and serversocketreceiver components publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ch qos logback logback core ch qos logback logback access ch qos logback logback classic isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails qos ch logback before has a serialization vulnerability affecting the socketserver and serversocketreceiver components vulnerabilityurl
| 0
|
782,644
| 27,501,918,586
|
IssuesEvent
|
2023-03-05 19:44:52
|
AUBGTheHUB/spa-website-2022
|
https://api.github.com/repos/AUBGTheHUB/spa-website-2022
|
closed
|
Add participants to respective teams
|
high priority api ADMIN PANEL
|
## Brief description:
Create a new team if the one entered in the form doesn't exist, otherwise, add that participant to their team in the DB.
## How to achieve it:
Check if TeamNoTeam is true or false:
If True -> Check if the team exists if not create it. If it exists add the participant to it.
Else -> Add participant to the group of people who don't have teams.
|
1.0
|
Add participants to respective teams - ## Brief description:
Create a new team if the one entered in the form doesn't exist, otherwise, add that participant to their team in the DB.
## How to achieve it:
Check if TeamNoTeam is true or false:
If True -> Check if the team exists if not create it. If it exists add the participant to it.
Else -> Add participant to the group of people who don't have teams.
|
non_test
|
add participants to respective teams brief description create a new team if the one entered in the form doesn t exist otherwise add that participant to their team in the db how to achieve it check if teamnoteam is true or false if true check if the team exists if not create it if it exists add the participant to it else add participant to the group of people who don t have teams
| 0
|
241,909
| 18,499,714,041
|
IssuesEvent
|
2021-10-19 12:37:25
|
chartjs/Chart.js
|
https://api.github.com/repos/chartjs/Chart.js
|
closed
|
Cant run example "Time Scale - Max Span"
|
type: documentation
|
Documentation Is:
<!-- Please place an x (no spaces!) in all [ ] that apply -->
- [ ] Missing or needed
- [x] Confusing
- [ ] Not Sure?
### Please Explain in Detail...
I try run example from https://www.chartjs.org/docs/latest/samples/scales/time-max-span.html but geting error `Uncaught Error: This method is not implemented: Check that a complete date adapter is provided.`
### Your Proposal for Changes
### Example
<!--
Provide a link to a live example demonstrating the issue or feature to be documented:
https://codepen.io/pen?template=wvezeOq
-->
https://codepen.io/gorlikitsme/pen/XWaKpKb
|
1.0
|
Cant run example "Time Scale - Max Span" - Documentation Is:
<!-- Please place an x (no spaces!) in all [ ] that apply -->
- [ ] Missing or needed
- [x] Confusing
- [ ] Not Sure?
### Please Explain in Detail...
I try run example from https://www.chartjs.org/docs/latest/samples/scales/time-max-span.html but geting error `Uncaught Error: This method is not implemented: Check that a complete date adapter is provided.`
### Your Proposal for Changes
### Example
<!--
Provide a link to a live example demonstrating the issue or feature to be documented:
https://codepen.io/pen?template=wvezeOq
-->
https://codepen.io/gorlikitsme/pen/XWaKpKb
|
non_test
|
cant run example time scale max span documentation is missing or needed confusing not sure please explain in detail i try run example from but geting error uncaught error this method is not implemented check that a complete date adapter is provided your proposal for changes example provide a link to a live example demonstrating the issue or feature to be documented
| 0
|
142,463
| 13,025,386,263
|
IssuesEvent
|
2020-07-27 13:28:11
|
mash-up-kr/Dionysos-Backend
|
https://api.github.com/repos/mash-up-kr/Dionysos-Backend
|
opened
|
스프린트 회의사항을 정리합니다.
|
documentation
|
## 목적
스프린트 회의사항을 정리하는 이슈입니다.
## 작업 상세 내용
- [ ] 회의 안건 취합하기
- [ ] 한 달 단위 스프린트 설정하기
- [ ] 일 주일 스프린트 설정하기
## 참고사항
.
|
1.0
|
스프린트 회의사항을 정리합니다. - ## 목적
스프린트 회의사항을 정리하는 이슈입니다.
## 작업 상세 내용
- [ ] 회의 안건 취합하기
- [ ] 한 달 단위 스프린트 설정하기
- [ ] 일 주일 스프린트 설정하기
## 참고사항
.
|
non_test
|
스프린트 회의사항을 정리합니다 목적 스프린트 회의사항을 정리하는 이슈입니다 작업 상세 내용 회의 안건 취합하기 한 달 단위 스프린트 설정하기 일 주일 스프린트 설정하기 참고사항
| 0
|
57,590
| 6,551,473,222
|
IssuesEvent
|
2017-09-05 14:52:12
|
pburns96/Revature-VenderBender
|
https://api.github.com/repos/pburns96/Revature-VenderBender
|
closed
|
As a manager, I would like to be able to add CDs and LPs
|
High Priority Testing
|
Task List
----------
-Be able to add a new CD and LP to the database
-Be able to view the newly added CD and LP?
|
1.0
|
As a manager, I would like to be able to add CDs and LPs - Task List
----------
-Be able to add a new CD and LP to the database
-Be able to view the newly added CD and LP?
|
test
|
as a manager i would like to be able to add cds and lps task list be able to add a new cd and lp to the database be able to view the newly added cd and lp
| 1
|
10,337
| 3,103,601,336
|
IssuesEvent
|
2015-08-31 11:06:02
|
sohelvali/Test-Git-Issue
|
https://api.github.com/repos/sohelvali/Test-Git-Issue
|
closed
|
Bactrim label Warnings&Precautions section contains unrelated sections
|
test Label
|
See highlights in the comparison in Warnings/Precautions section. Looks like it contains 3 unrelated sections under the actual warnings and precautions section. Some of them are also captured in their own sections separately as well. Why are they duplicated here? Can this be fixed? The issue is with the label in the database.
|
1.0
|
Bactrim label Warnings&Precautions section contains unrelated sections - See highlights in the comparison in Warnings/Precautions section. Looks like it contains 3 unrelated sections under the actual warnings and precautions section. Some of them are also captured in their own sections separately as well. Why are they duplicated here? Can this be fixed? The issue is with the label in the database.
|
test
|
bactrim label warnings precautions section contains unrelated sections see highlights in the comparison in warnings precautions section looks like it contains unrelated sections under the actual warnings and precautions section some of them are also captured in their own sections separately as well why are they duplicated here can this be fixed the issue is with the label in the database
| 1
|
117,341
| 9,924,160,967
|
IssuesEvent
|
2019-07-01 09:03:21
|
openshift/odo
|
https://api.github.com/repos/openshift/odo
|
closed
|
Add/improve test specs
|
area/testing priority/High
|
[kind/Enhancement]
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the chat and talk to us if you have a question rather than a bug or feature request.
The chat room is at: https://chat.openshift.io/developers/channels/odo
Thanks for understanding, and for contributing to the project!
-->
## Which functionality do you think we should update/improve?
Test scenarios/specs
## Why is this needed?
The current test spec is not covering maximum number of implementation of commands. So it is necessary to verify properly those commands/flags which are provided by odo, even those extra coverage helps to catch any regression if that occurs.
|
1.0
|
Add/improve test specs - [kind/Enhancement]
<!--
Welcome! - We kindly ask you to:
1. Fill out the issue template below
2. Use the chat and talk to us if you have a question rather than a bug or feature request.
The chat room is at: https://chat.openshift.io/developers/channels/odo
Thanks for understanding, and for contributing to the project!
-->
## Which functionality do you think we should update/improve?
Test scenarios/specs
## Why is this needed?
The current test spec is not covering maximum number of implementation of commands. So it is necessary to verify properly those commands/flags which are provided by odo, even those extra coverage helps to catch any regression if that occurs.
|
test
|
add improve test specs welcome we kindly ask you to fill out the issue template below use the chat and talk to us if you have a question rather than a bug or feature request the chat room is at thanks for understanding and for contributing to the project which functionality do you think we should update improve test scenarios specs why is this needed the current test spec is not covering maximum number of implementation of commands so it is necessary to verify properly those commands flags which are provided by odo even those extra coverage helps to catch any regression if that occurs
| 1
|
356,392
| 25,176,178,829
|
IssuesEvent
|
2022-11-11 09:27:35
|
cliftonfelix/pe
|
https://api.github.com/repos/cliftonfelix/pe
|
opened
|
INDEX Constraints for View
|
type.DocumentationBug severity.Low
|


Eventhough it's already stated in the Placeholders section, I would still need to know the constraints of INDEX under view command. I wouldn't want to be bothered to go back and forth to see what is the constraint especially if the UG is 71 pages long.
<!--session: 1668145524208-25d2f708-da60-44a5-9b69-d29fc1345303-->
<!--Version: Web v3.4.4-->
|
1.0
|
INDEX Constraints for View -


Eventhough it's already stated in the Placeholders section, I would still need to know the constraints of INDEX under view command. I wouldn't want to be bothered to go back and forth to see what is the constraint especially if the UG is 71 pages long.
<!--session: 1668145524208-25d2f708-da60-44a5-9b69-d29fc1345303-->
<!--Version: Web v3.4.4-->
|
non_test
|
index constraints for view eventhough it s already stated in the placeholders section i would still need to know the constraints of index under view command i wouldn t want to be bothered to go back and forth to see what is the constraint especially if the ug is pages long
| 0
|
22,005
| 4,763,942,442
|
IssuesEvent
|
2016-10-25 15:40:45
|
PrismLibrary/Prism
|
https://api.github.com/repos/PrismLibrary/Prism
|
closed
|
Inconsistent rendering of doc headers
|
documentation
|
On first sight the title in both markdown files are the same (# followed by spaced followed by title)
https://raw.githubusercontent.com/PrismLibrary/Prism/master/docs/WPF/04-Modules.md
https://raw.githubusercontent.com/PrismLibrary/Prism/master/docs/WPF/05-Implementing-MVVM.md
But rendering is different, 04 fails and 05 is correct.
http://prismlibrary.readthedocs.io/en/latest/WPF/04-Modules/
http://prismlibrary.readthedocs.io/en/latest/WPF/05-Implementing-MVVM/
Any specialists around who spot the issue?
|
1.0
|
Inconsistent rendering of doc headers - On first sight the title in both markdown files are the same (# followed by spaced followed by title)
https://raw.githubusercontent.com/PrismLibrary/Prism/master/docs/WPF/04-Modules.md
https://raw.githubusercontent.com/PrismLibrary/Prism/master/docs/WPF/05-Implementing-MVVM.md
But rendering is different, 04 fails and 05 is correct.
http://prismlibrary.readthedocs.io/en/latest/WPF/04-Modules/
http://prismlibrary.readthedocs.io/en/latest/WPF/05-Implementing-MVVM/
Any specialists around who spot the issue?
|
non_test
|
inconsistent rendering of doc headers on first sight the title in both markdown files are the same followed by spaced followed by title but rendering is different fails and is correct any specialists around who spot the issue
| 0
|
40,745
| 5,314,904,060
|
IssuesEvent
|
2017-02-13 16:06:18
|
openbmc/openbmc-test-automation
|
https://api.github.com/repos/openbmc/openbmc-test-automation
|
opened
|
[Code Update] Fix MTD device block 5 check to avoid failure from new kernel changes
|
Test
|
Executing command 'df -h | grep -v /dev/mtdblock5 | cut -c 52-54 | grep 100 | wc -l'
This will fail since the initramfs block is going away..
This needs to be fixed https://github.com/openbmc/openbmc-test-automation/blob/master/extended/code_update/update_bmc.robot#L42
"Check BMC File System Performance"
|
1.0
|
[Code Update] Fix MTD device block 5 check to avoid failure from new kernel changes - Executing command 'df -h | grep -v /dev/mtdblock5 | cut -c 52-54 | grep 100 | wc -l'
This will fail since the initramfs block is going away..
This needs to be fixed https://github.com/openbmc/openbmc-test-automation/blob/master/extended/code_update/update_bmc.robot#L42
"Check BMC File System Performance"
|
test
|
fix mtd device block check to avoid failure from new kernel changes executing command df h grep v dev cut c grep wc l this will fail since the initramfs block is going away this needs to be fixed check bmc file system performance
| 1
|
327,735
| 28,080,572,939
|
IssuesEvent
|
2023-03-30 05:54:02
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
pkg/ccl/logictestccl/tests/multiregion-9node-3region-3azs-no-los/multiregion-9node-3region-3azs-no-los_test: TestCCLLogic_regional_by_row_hash_sharded_index failed
|
C-test-failure O-robot branch-master
|
pkg/ccl/logictestccl/tests/multiregion-9node-3region-3azs-no-los/multiregion-9node-3region-3azs-no-los_test.TestCCLLogic_regional_by_row_hash_sharded_index [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/9330243?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/9330243?buildTab=artifacts#/) on master @ [d671fad8a0ca13e2ff2ed4c5f7c8a596bb0b8b33](https://github.com/cockroachdb/cockroach/commits/d671fad8a0ca13e2ff2ed4c5f7c8a596bb0b8b33):
```
=== RUN TestCCLLogic_regional_by_row_hash_sharded_index
test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/b18615a2a85b82f2dc4668f3366969a7/logTestCCLLogic_regional_by_row_hash_sharded_index116913393
test_log_scope.go:79: use -show-logs to present logs inline
*
* INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants.
*
*
* INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants.
*
*
* INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants.
*
*
* INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants.
*
*
* INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants.
*
*
* INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants.
*
*
* INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants.
*
*
* INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants.
*
*
* INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants.
*
[05:29:18] --- progress: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3770/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/ccl/logictestccl/tests/multiregion-9node-3region-3azs-no-los/multiregion-9node-3region-3azs-no-los_test_/multiregion-9node-3region-3azs-no-los_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/logictestccl/testdata/logic_test/regional_by_row_hash_sharded_index: 11 statements
[05:29:33] --- progress: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3770/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/ccl/logictestccl/tests/multiregion-9node-3region-3azs-no-los/multiregion-9node-3region-3azs-no-los_test_/multiregion-9node-3region-3azs-no-los_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/logictestccl/testdata/logic_test/regional_by_row_hash_sharded_index: 13 statements
[05:52:53] --- done: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3770/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/ccl/logictestccl/tests/multiregion-9node-3region-3azs-no-los/multiregion-9node-3region-3azs-no-los_test_/multiregion-9node-3region-3azs-no-los_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/logictestccl/testdata/logic_test/regional_by_row_hash_sharded_index with config multiregion-9node-3region-3azs-no-los: 14 tests, 0 failures
logic.go:3942:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3770/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/ccl/logictestccl/tests/multiregion-9node-3region-3azs-no-los/multiregion-9node-3region-3azs-no-los_test_/multiregion-9node-3region-3azs-no-los_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/logictestccl/testdata/logic_test/regional_by_row_hash_sharded_index:90: error while processing
logic.go:3942:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3770/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/ccl/logictestccl/tests/multiregion-9node-3region-3azs-no-los/multiregion-9node-3region-3azs-no-los_test_/multiregion-9node-3region-3azs-no-los_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/logictestccl/testdata/logic_test/regional_by_row_hash_sharded_index:90:
expected success, but found
dial tcp 127.0.0.1:35271: connect: connection refused
panic.go:522: -- test log scope end --
test logs left over in: /artifacts/tmp/_tmp/b18615a2a85b82f2dc4668f3366969a7/logTestCCLLogic_regional_by_row_hash_sharded_index116913393
--- FAIL: TestCCLLogic_regional_by_row_hash_sharded_index (1446.76s)
```
<p>Parameters: <code>TAGS=bazel,gss</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestCCLLogic_regional_by_row_hash_sharded_index.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
1.0
|
pkg/ccl/logictestccl/tests/multiregion-9node-3region-3azs-no-los/multiregion-9node-3region-3azs-no-los_test: TestCCLLogic_regional_by_row_hash_sharded_index failed - pkg/ccl/logictestccl/tests/multiregion-9node-3region-3azs-no-los/multiregion-9node-3region-3azs-no-los_test.TestCCLLogic_regional_by_row_hash_sharded_index [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/9330243?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/9330243?buildTab=artifacts#/) on master @ [d671fad8a0ca13e2ff2ed4c5f7c8a596bb0b8b33](https://github.com/cockroachdb/cockroach/commits/d671fad8a0ca13e2ff2ed4c5f7c8a596bb0b8b33):
```
=== RUN TestCCLLogic_regional_by_row_hash_sharded_index
test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/b18615a2a85b82f2dc4668f3366969a7/logTestCCLLogic_regional_by_row_hash_sharded_index116913393
test_log_scope.go:79: use -show-logs to present logs inline
*
* INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants.
*
*
* INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants.
*
*
* INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants.
*
*
* INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants.
*
*
* INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants.
*
*
* INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants.
*
*
* INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants.
*
*
* INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants.
*
*
* INFO: Running test with the default test tenant. If you are only seeing a test case failure when this message appears, there may be a problem with your test case running within tenants.
*
[05:29:18] --- progress: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3770/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/ccl/logictestccl/tests/multiregion-9node-3region-3azs-no-los/multiregion-9node-3region-3azs-no-los_test_/multiregion-9node-3region-3azs-no-los_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/logictestccl/testdata/logic_test/regional_by_row_hash_sharded_index: 11 statements
[05:29:33] --- progress: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3770/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/ccl/logictestccl/tests/multiregion-9node-3region-3azs-no-los/multiregion-9node-3region-3azs-no-los_test_/multiregion-9node-3region-3azs-no-los_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/logictestccl/testdata/logic_test/regional_by_row_hash_sharded_index: 13 statements
[05:52:53] --- done: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3770/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/ccl/logictestccl/tests/multiregion-9node-3region-3azs-no-los/multiregion-9node-3region-3azs-no-los_test_/multiregion-9node-3region-3azs-no-los_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/logictestccl/testdata/logic_test/regional_by_row_hash_sharded_index with config multiregion-9node-3region-3azs-no-los: 14 tests, 0 failures
logic.go:3942:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3770/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/ccl/logictestccl/tests/multiregion-9node-3region-3azs-no-los/multiregion-9node-3region-3azs-no-los_test_/multiregion-9node-3region-3azs-no-los_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/logictestccl/testdata/logic_test/regional_by_row_hash_sharded_index:90: error while processing
logic.go:3942:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3770/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/ccl/logictestccl/tests/multiregion-9node-3region-3azs-no-los/multiregion-9node-3region-3azs-no-los_test_/multiregion-9node-3region-3azs-no-los_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/logictestccl/testdata/logic_test/regional_by_row_hash_sharded_index:90:
expected success, but found
dial tcp 127.0.0.1:35271: connect: connection refused
panic.go:522: -- test log scope end --
test logs left over in: /artifacts/tmp/_tmp/b18615a2a85b82f2dc4668f3366969a7/logTestCCLLogic_regional_by_row_hash_sharded_index116913393
--- FAIL: TestCCLLogic_regional_by_row_hash_sharded_index (1446.76s)
```
<p>Parameters: <code>TAGS=bazel,gss</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestCCLLogic_regional_by_row_hash_sharded_index.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
test
|
pkg ccl logictestccl tests multiregion no los multiregion no los test testccllogic regional by row hash sharded index failed pkg ccl logictestccl tests multiregion no los multiregion no los test testccllogic regional by row hash sharded index with on master run testccllogic regional by row hash sharded index test log scope go test logs captured to artifacts tmp tmp logtestccllogic regional by row hash sharded test log scope go use show logs to present logs inline info running test with the default test tenant if you are only seeing a test case failure when this message appears there may be a problem with your test case running within tenants info running test with the default test tenant if you are only seeing a test case failure when this message appears there may be a problem with your test case running within tenants info running test with the default test tenant if you are only seeing a test case failure when this message appears there may be a problem with your test case running within tenants info running test with the default test tenant if you are only seeing a test case failure when this message appears there may be a problem with your test case running within tenants info running test with the default test tenant if you are only seeing a test case failure when this message appears there may be a problem with your test case running within tenants info running test with the default test tenant if you are only seeing a test case failure when this message appears there may be a problem with your test case running within tenants info running test with the default test tenant if you are only seeing a test case failure when this message appears there may be a problem with your test case running within tenants info running test with the default test tenant if you are only seeing a test case failure when this message appears there may be a problem with your test case running within tenants info running test with the default test tenant if you are only seeing a 
test case failure when this message appears there may be a problem with your test case running within tenants progress home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out fastbuild bin pkg ccl logictestccl tests multiregion no los multiregion no los test multiregion no los test runfiles com github cockroachdb cockroach pkg ccl logictestccl testdata logic test regional by row hash sharded index statements progress home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out fastbuild bin pkg ccl logictestccl tests multiregion no los multiregion no los test multiregion no los test runfiles com github cockroachdb cockroach pkg ccl logictestccl testdata logic test regional by row hash sharded index statements done home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out fastbuild bin pkg ccl logictestccl tests multiregion no los multiregion no los test multiregion no los test runfiles com github cockroachdb cockroach pkg ccl logictestccl testdata logic test regional by row hash sharded index with config multiregion no los tests failures logic go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out fastbuild bin pkg ccl logictestccl tests multiregion no los multiregion no los test multiregion no los test runfiles com github cockroachdb cockroach pkg ccl logictestccl testdata logic test regional by row hash sharded index error while processing logic go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out fastbuild bin pkg ccl logictestccl tests multiregion no los multiregion no los test multiregion no los test runfiles com github cockroachdb cockroach pkg ccl logictestccl testdata logic test regional by row hash sharded index expected success but found dial tcp 
connect connection refused panic go test log scope end test logs left over in artifacts tmp tmp logtestccllogic regional by row hash sharded fail testccllogic regional by row hash sharded index parameters tags bazel gss help see also cc cockroachdb sql queries
| 1
|
437,386
| 30,596,400,705
|
IssuesEvent
|
2023-07-21 22:47:58
|
ClimateImpactLab/dodola
|
https://api.github.com/repos/ClimateImpactLab/dodola
|
closed
|
Add label for code source to container images
|
documentation enhancement help wanted
|
We should put a label in the Dockerfile to add a URL to the code repository.
Basically add this
```
LABEL org.opencontainers.image.source="https://github.com/ClimateImpactLab/dodola"
```
to `Dockerfile`.
|
1.0
|
Add label for code source to container images - We should put a label in the Dockerfile to add a URL to the code repository.
Basically add this
```
LABEL org.opencontainers.image.source="https://github.com/ClimateImpactLab/dodola"
```
to `Dockerfile`.
|
non_test
|
add label for code source to container images we should put a label in the dockerfile to add a url to the code repository basically add this label org opencontainers image source to dockerfile
| 0
|
179,540
| 14,705,249,814
|
IssuesEvent
|
2021-01-04 17:48:29
|
pcdshub/happi
|
https://api.github.com/repos/pcdshub/happi
|
opened
|
Document container entrypoints
|
Documentation
|
## Current Behavior
Undocumented?
## Expected Behavior
Downstream package documentation for how to add entrypoints does not appear to exist. It seems like it would fit in here: https://pcdshub.github.io/happi/containers.html
Copying in my comment to #191, which may be a reasonable outline for the start of a document:
> For historical reasons we have some LCLS-specific containers inside of happi, but our general recommended approach from here on out is to add a happi.containers entrypoint to your package. This is unfortunately not yet documented as far as I recall. Our primary example would be in pcdsdevices:
>
> The entrypoint definition:
> https://github.com/pcdshub/pcdsdevices/blob/e5cf6f66c9db5d5d32bb4a654f76a74e17f3e5ab/setup.py#L13-L15
>
> And the sub-package containing those classes:
> https://github.com/pcdshub/pcdsdevices/blob/master/pcdsdevices/happi/containers.py
|
1.0
|
Document container entrypoints - ## Current Behavior
Undocumented?
## Expected Behavior
Downstream package documentation for how to add entrypoints does not appear to exist. It seems like it would fit in here: https://pcdshub.github.io/happi/containers.html
Copying in my comment to #191, which may be a reasonable outline for the start of a document:
> For historical reasons we have some LCLS-specific containers inside of happi, but our general recommended approach from here on out is to add a happi.containers entrypoint to your package. This is unfortunately not yet documented as far as I recall. Our primary example would be in pcdsdevices:
>
> The entrypoint definition:
> https://github.com/pcdshub/pcdsdevices/blob/e5cf6f66c9db5d5d32bb4a654f76a74e17f3e5ab/setup.py#L13-L15
>
> And the sub-package containing those classes:
> https://github.com/pcdshub/pcdsdevices/blob/master/pcdsdevices/happi/containers.py
|
non_test
|
document container entrypoints current behavior undocumented expected behavior downstream package documentation for how to add entrypoints does not appear to exist it seems like it would fit in here copying in my comment to which may be a reasonable outline for the start of a document for historical reasons we have some lcls specific containers inside of happi but our general recommended approach from here on out is to add a happi containers entrypoint to your package this is unfortunately not yet documented as far as i recall our primary example would be in pcdsdevices the entrypoint definition and the sub package containing those classes
| 0
|
11,534
| 4,237,484,255
|
IssuesEvent
|
2016-07-05 22:00:26
|
SleepyTrousers/EnderIO
|
https://api.github.com/repos/SleepyTrousers/EnderIO
|
closed
|
1.10.2 Crash on Respawn
|
1.9 Code Complete Logfile Missing Report Incomplete
|
I died, when I respawned the game locked up and crashed. At the time my friend was running a soul binder. LAN World: http://pastebin.com/GB8ETX6w
|
1.0
|
1.10.2 Crash on Respawn - I died, when I respawned the game locked up and crashed. At the time my friend was running a soul binder. LAN World: http://pastebin.com/GB8ETX6w
|
non_test
|
crash on respawn i died when i respawned the game locked up and crashed at the time my friend was running a soul binder lan world
| 0
|
14,966
| 5,028,931,929
|
IssuesEvent
|
2016-12-15 19:40:11
|
certbot/certbot
|
https://api.github.com/repos/certbot/certbot
|
opened
|
Refactor IDisplay interface
|
code health refactoring ui / ux
|
As of writing this, no third party plugins are using it. It's getting a little gross with tons of arguments being passed in, many of which aren't relevant anymore with the removal of `dialog` and lots of `*args` and `**kwargs` due to differences in `FileDisplay` and `NoninteractiveDisplay`. We should clean all this up while we have the chance.
|
1.0
|
Refactor IDisplay interface - As of writing this, no third party plugins are using it. It's getting a little gross with tons of arguments being passed in, many of which aren't relevant anymore with the removal of `dialog` and lots of `*args` and `**kwargs` due to differences in `FileDisplay` and `NoninteractiveDisplay`. We should clean all this up while we have the chance.
|
non_test
|
refactor idisplay interface as of writing this no third party plugins are using it it s getting a little gross with tons of arguments being passed in many of which aren t relevant anymore with the removal of dialog and lots of args and kwargs due to differences in filedisplay and noninteractivedisplay we should clean all this up while we have the chance
| 0
|
88,530
| 3,778,680,946
|
IssuesEvent
|
2016-03-18 02:17:29
|
phetsims/scenery
|
https://api.github.com/repos/phetsims/scenery
|
opened
|
Don't draw not-yet-loaded image elements in Canvas
|
priority:2-high type:bug
|
See https://github.com/phetsims/function-builder/issues/15
We'll want a better guard here:
```js
paintCanvas: function( wrapper, node ) {
if ( node._image ) {
wrapper.context.drawImage( node._image, 0, 0 );
}
},
```
|
1.0
|
Don't draw not-yet-loaded image elements in Canvas - See https://github.com/phetsims/function-builder/issues/15
We'll want a better guard here:
```js
paintCanvas: function( wrapper, node ) {
if ( node._image ) {
wrapper.context.drawImage( node._image, 0, 0 );
}
},
```
|
non_test
|
don t draw not yet loaded image elements in canvas see we ll want a better guard here js paintcanvas function wrapper node if node image wrapper context drawimage node image
| 0
|
224,604
| 17,760,998,435
|
IssuesEvent
|
2021-08-29 17:41:48
|
BenCodez/VotingPlugin
|
https://api.github.com/repos/BenCodez/VotingPlugin
|
closed
|
The command removepoints does not longer work
|
Needs Testing Possible Bug
|
**Versions**
6.6.1
**Describe the bug**
When I do /adminvote User <user> RemovePoints <points>, it does not decrease the points count.
**To Reproduce**
Use the command removepoints
**Expected behavior**
It should decrease the points count
**Screenshots/Configs**

**Additional context**
It seems like after some time, maybe after a vote from someone else, it finally removed the points but it still allows players to use those points endlessly during that time
|
1.0
|
The command removepoints does not longer work - **Versions**
6.6.1
**Describe the bug**
When I do /adminvote User <user> RemovePoints <points>, it does not decrease the points count.
**To Reproduce**
Use the command removepoints
**Expected behavior**
It should decrease the points count
**Screenshots/Configs**

**Additional context**
It seems like after some time, maybe after a vote from someone else, it finally removed the points but it still allows players to use those points endlessly during that time
|
test
|
the command removepoints does not longer work versions describe the bug when i do adminvote user removepoints it does not decrease the points count to reproduce use the command removepoints expected behavior it should decrease the points count screenshots configs additional context it seems like after some time maybe after a vote from someone else it finally removed the points but it still allows players to use those points endlessly during that time
| 1
|
217,192
| 16,848,834,339
|
IssuesEvent
|
2021-06-20 04:12:07
|
hakehuang/infoflow
|
https://api.github.com/repos/hakehuang/infoflow
|
opened
|
tests-ci :kernel.memory_protection.userspace.write_kernro : zephyr-v2.6.0-286-g46029914a7ac: lpcxpresso55s28: test Timeout
|
area: Tests
|
**Describe the bug**
kernel.memory_protection.userspace.write_kernro test is Timeout on zephyr-v2.6.0-286-g46029914a7ac on lpcxpresso55s28
see logs for details
**To Reproduce**
1.
```
scripts/twister --device-testing --device-serial /dev/ttyACM0 -p lpcxpresso55s28 --testcase-root tests --sub-test kernel.memory_protection
```
2. See error
**Expected behavior**
test pass
**Impact**
**Logs and console output**
```
*** Booting Zephyr OS build zephyr-v2.6.0-286-g46029914a7ac ***
Running test suite userspace
===================================================================
START - test_is_usermode
PASS - test_is_usermode in 0.1 seconds
===================================================================
START - test_write_control
PASS - test_write_control in 0.1 seconds
===================================================================
START - test_disable_mmu_mpu
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
```
**Environment (please complete the following information):**
- OS: (e.g. Linux )
- Toolchain (e.g Zephyr SDK)
- Commit SHA or Version used: zephyr-v2.6.0-286-g46029914a7ac
|
1.0
|
tests-ci :kernel.memory_protection.userspace.write_kernro : zephyr-v2.6.0-286-g46029914a7ac: lpcxpresso55s28: test Timeout
-
**Describe the bug**
kernel.memory_protection.userspace.write_kernro test is Timeout on zephyr-v2.6.0-286-g46029914a7ac on lpcxpresso55s28
see logs for details
**To Reproduce**
1.
```
scripts/twister --device-testing --device-serial /dev/ttyACM0 -p lpcxpresso55s28 --testcase-root tests --sub-test kernel.memory_protection
```
2. See error
**Expected behavior**
test pass
**Impact**
**Logs and console output**
```
*** Booting Zephyr OS build zephyr-v2.6.0-286-g46029914a7ac ***
Running test suite userspace
===================================================================
START - test_is_usermode
PASS - test_is_usermode in 0.1 seconds
===================================================================
START - test_write_control
PASS - test_write_control in 0.1 seconds
===================================================================
START - test_disable_mmu_mpu
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
```
**Environment (please complete the following information):**
- OS: (e.g. Linux )
- Toolchain (e.g Zephyr SDK)
- Commit SHA or Version used: zephyr-v2.6.0-286-g46029914a7ac
|
test
|
tests ci kernel memory protection userspace write kernro zephyr test timeout describe the bug kernel memory protection userspace write kernro test is timeout on zephyr on see logs for details to reproduce scripts twister device testing device serial dev p testcase root tests sub test kernel memory protection see error expected behavior test pass impact logs and console output booting zephyr os build zephyr running test suite userspace start test is usermode pass test is usermode in seconds start test write control pass test write control in seconds start test disable mmu mpu assertion fail west topdir zephyr arch arm core cortex m fault c esf could not be retrieved successfully shall never occur assertion fail west topdir zephyr arch arm core cortex m fault c esf could not be retrieved successfully shall never occur environment please complete the following information os e g linux toolchain e g zephyr sdk commit sha or version used zephyr
| 1
|
97,131
| 8,649,236,682
|
IssuesEvent
|
2018-11-26 18:48:26
|
learn-co-curriculum/oo-student-scraper
|
https://api.github.com/repos/learn-co-curriculum/oo-student-scraper
|
closed
|
Create From Collection Issue
|
Test
|
Hey, so in the spec for the create_from_collection method in the Student class, it says the method will use the Scraper class:
`.create_from_collection
uses the Scraper class to create new students with the correct name and location.`
But the solution does not use the Scraper class at all.
`def self.create_from_collection(students_array)
students_array.each do |student_hash|
Student.new(student_hash)
end
end`
|
1.0
|
Create From Collection Issue - Hey, so in the spec for the create_from_collection method in the Student class, it says the method will use the Scraper class:
`.create_from_collection
uses the Scraper class to create new students with the correct name and location.`
But the solution does not use the Scraper class at all.
`def self.create_from_collection(students_array)
students_array.each do |student_hash|
Student.new(student_hash)
end
end`
|
test
|
create from collection issue hey so in the spec for the create from collection method in the student class it says the method will use the scraper class create from collection uses the scraper class to create new students with the correct name and location but the solution does not use the scraper class at all def self create from collection students array students array each do student hash student new student hash end end
| 1
|
91,131
| 10,710,008,092
|
IssuesEvent
|
2019-10-25 00:19:18
|
Varying-Vagrant-Vagrants/VVV
|
https://api.github.com/repos/Varying-Vagrant-Vagrants/VVV
|
closed
|
Document vagrant disk size in changelog
|
documentation hacktoberfest
|
Related to #1915 we need to document this in the changelog for 3.2
|
1.0
|
Document vagrant disk size in changelog - Related to #1915 we need to document this in the changelog for 3.2
|
non_test
|
document vagrant disk size in changelog related to we need to document this in the changelog for
| 0
|
226,382
| 24,947,046,389
|
IssuesEvent
|
2022-11-01 01:47:11
|
githuballpractice/gameoflife
|
https://api.github.com/repos/githuballpractice/gameoflife
|
closed
|
CVE-2022-40154 (High) detected in xstream-1.3.1.jar - autoclosed
|
security vulnerability
|
## CVE-2022-40154 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.3.1.jar</b></p></summary>
<p></p>
<p>Path to vulnerable library: /gameoflife-web/tools/jmeter/lib/xstream-1.3.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **xstream-1.3.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Those using Xstream to serialise XML data may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stack overflow. This effect may support a denial of service attack.
<p>Publish Date: 2022-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-40154>CVE-2022-40154</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-40154 (High) detected in xstream-1.3.1.jar - autoclosed - ## CVE-2022-40154 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.3.1.jar</b></p></summary>
<p></p>
<p>Path to vulnerable library: /gameoflife-web/tools/jmeter/lib/xstream-1.3.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **xstream-1.3.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Those using Xstream to serialise XML data may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stack overflow. This effect may support a denial of service attack.
<p>Publish Date: 2022-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-40154>CVE-2022-40154</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in xstream jar autoclosed cve high severity vulnerability vulnerable library xstream jar path to vulnerable library gameoflife web tools jmeter lib xstream jar dependency hierarchy x xstream jar vulnerable library found in base branch master vulnerability details those using xstream to serialise xml data may be vulnerable to denial of service attacks dos if the parser is running on user supplied input an attacker may supply content that causes the parser to crash by stack overflow this effect may support a denial of service attack publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with mend
| 0
|
82,387
| 15,894,717,842
|
IssuesEvent
|
2021-04-11 11:18:19
|
sourcegraph/sourcegraph
|
https://api.github.com/repos/sourcegraph/sourcegraph
|
closed
|
Insights charts sometimes have dots, sometimes don't
|
bug team/code-insights webapp
|

Seems to be some kind of race condition in the charting lib (recharts)
|
1.0
|
Insights charts sometimes have dots, sometimes don't - 
Seems to be some kind of race condition in the charting lib (recharts)
|
non_test
|
insights charts sometimes have dots sometimes don t seems to be some kind of race condition in the charting lib recharts
| 0
|
422,324
| 28,434,754,069
|
IssuesEvent
|
2023-04-15 06:52:58
|
litestar-org/litestar
|
https://api.github.com/repos/litestar-org/litestar
|
closed
|
Docs: Issue in "Stores/Deleting expired values" section
|
documentation
|
### Summary
Hi guys, I found this bit pretty confusing:

As I understand, `after_response` function will be called after every awaited response , not "at most every 30 second"
|
1.0
|
Docs: Issue in "Stores/Deleting expired values" section - ### Summary
Hi guys, I found this bit pretty confusing:

As I understand, `after_response` function will be called after every awaited response , not "at most every 30 second"
|
non_test
|
docs issue in stores deleting expired values section summary hi guys i found this bit pretty confusing as i understand after response function will be called after every awaited response not at most every second
| 0
|
460,839
| 13,218,923,522
|
IssuesEvent
|
2020-08-17 09:34:19
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
[0.9.0.0 beta staging-stable-3] view vote link doesn't show the correct vote in Web view
|
Category: Elections Website Priority: Medium Status: Fixed
|

As you can see it show that everyone voted No but when you see real result (in background of my image) you can see it's 50%/50%.
|
1.0
|
[0.9.0.0 beta staging-stable-3] view vote link doesn't show the correct vote in Web view - 
As you can see it show that everyone voted No but when you see real result (in background of my image) you can see it's 50%/50%.
|
non_test
|
view vote link doesn t show the correct vote in web view as you can see it show that everyone voted no but when you see real result in background of my image you can see it s
| 0
|
85,947
| 16,767,898,641
|
IssuesEvent
|
2021-06-14 11:16:56
|
GeekMasher/advanced-security-compliance
|
https://api.github.com/repos/GeekMasher/advanced-security-compliance
|
closed
|
SLA / Time to Remediate Policy as Code
|
codescanning dependabot enhancement licensing secretscanning
|
### Description
It would be awesome to define a "time to remediate" or SLA (service-level agreement) policy that only brings up an alert if certain criteria is meet. By default, this mode should not be present and can be enabled by the policy.
**Example Scenario**
- Dependabot opens a High security issue
- I define that High Security issues have to be fixed within 1 day
- Before then, the Action can be run and does not break the workflow
- After 1 day, the Action with start breaking the workflow
### Propose Solution
A clear and concise description of what you want to happen.
```yaml
# Applies everywhere
general:
remediate:
errors: 7
warnings: 30
all: 90
codescanning:
# Applies only to codescanning
remediate:
# Break when detected when set to `0`
errors: 0
# ...
conditions:
ids:
- */sql-injection
```
### [optional] Alternative Solutions
A clear and concise description of any alternative solutions or features you've considered.
Other suggestions are welcome.
|
1.0
|
SLA / Time to Remediate Policy as Code - ### Description
It would be awesome to define a "time to remediate" or SLA (service-level agreement) policy that only brings up an alert if certain criteria is meet. By default, this mode should not be present and can be enabled by the policy.
**Example Scenario**
- Dependabot opens a High security issue
- I define that High Security issues have to be fixed within 1 day
- Before then, the Action can be run and does not break the workflow
- After 1 day, the Action with start breaking the workflow
### Propose Solution
A clear and concise description of what you want to happen.
```yaml
# Applies everywhere
general:
remediate:
errors: 7
warnings: 30
all: 90
codescanning:
# Applies only to codescanning
remediate:
# Break when detected when set to `0`
errors: 0
# ...
conditions:
ids:
- */sql-injection
```
### [optional] Alternative Solutions
A clear and concise description of any alternative solutions or features you've considered.
Other suggestions are welcome.
|
non_test
|
sla time to remediate policy as code description it would be awesome to define a time to remediate or sla service level agreement policy that only brings up an alert if certain criteria is meet by default this mode should not be present and can be enabled by the policy example scenario dependabot opens a high security issue i define that high security issues have to be fixed within day before then the action can be run and does not break the workflow after day the action with start breaking the workflow propose solution a clear and concise description of what you want to happen yaml applies everywhere general remediate errors warnings all codescanning applies only to codescanning remediate break when detected when set to errors conditions ids sql injection alternative solutions a clear and concise description of any alternative solutions or features you ve considered other suggestions are welcome
| 0
|
41,776
| 5,396,561,989
|
IssuesEvent
|
2017-02-27 12:07:53
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
opened
|
ci-kubernetes-e2e-gci-gke: broken test run
|
kind/flake priority/P2 team/test-infra
|
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/5270/
Multiple broken tests:
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:80
Feb 27 04:00:26.491: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:318
```
Issues about this test specifically: #27443 #27835 #28900 #32512 #38549
Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:78
Expected error:
<*errors.errorString | 0xc4203d36a0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:77
```
Issues about this test specifically: #38556
Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190
Expected error:
<*errors.errorString | 0xc4203fc0c0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:189
```
Issues about this test specifically: #30287 #35953
Failed: [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:98
Expected error:
<*errors.StatusError | 0xc420cb4b80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "configmaps \"kube-dns-autoscaler\" not found",
Reason: "NotFound",
Details: {
Name: "kube-dns-autoscaler",
Group: "",
Kind: "configmaps",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 404,
},
}
configmaps "kube-dns-autoscaler" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:67
```
Issues about this test specifically: #36569 #38446
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:92
Feb 27 03:57:37.137: timeout waiting 15m0s for pods size to be 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:318
```
Issues about this test specifically: #27196 #28998 #32403 #33341
Failed: Test {e2e.go}
```
exit status 1
```
Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048
Failed: [k8s.io] NodeProblemDetector [k8s.io] SystemLogMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:74
Expected error:
<*errors.errorString | 0xc4203d2830>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:73
```
Previous issues for this suite: #37184 #37852 #38007 #38019 #40826 #42082
|
1.0
|
ci-kubernetes-e2e-gci-gke: broken test run - https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke/5270/
Multiple broken tests:
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:80
Feb 27 04:00:26.491: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:318
```
Issues about this test specifically: #27443 #27835 #28900 #32512 #38549
Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:78
Expected error:
<*errors.errorString | 0xc4203d36a0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:77
```
Issues about this test specifically: #38556
Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190
Expected error:
<*errors.errorString | 0xc4203fc0c0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:189
```
Issues about this test specifically: #30287 #35953
Failed: [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:98
Expected error:
<*errors.StatusError | 0xc420cb4b80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "configmaps \"kube-dns-autoscaler\" not found",
Reason: "NotFound",
Details: {
Name: "kube-dns-autoscaler",
Group: "",
Kind: "configmaps",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 404,
},
}
configmaps "kube-dns-autoscaler" not found
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:67
```
Issues about this test specifically: #36569 #38446
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:92
Feb 27 03:57:37.137: timeout waiting 15m0s for pods size to be 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:318
```
Issues about this test specifically: #27196 #28998 #32403 #33341
Failed: Test {e2e.go}
```
exit status 1
```
Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048
Failed: [k8s.io] NodeProblemDetector [k8s.io] SystemLogMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:74
Expected error:
<*errors.errorString | 0xc4203d2830>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:73
```
Previous issues for this suite: #37184 #37852 #38007 #38019 #40826 #42082
|
test
|
ci kubernetes gci gke broken test run multiple broken tests failed horizontal pod autoscaling scale resource cpu replicationcontroller light should scale from pod to pods kubernetes suite go src io kubernetes output dockerized go src io kubernetes test horizontal pod autoscaling go feb timeout waiting for pods size to be go src io kubernetes output dockerized go src io kubernetes test common autoscaling utils go issues about this test specifically failed loadbalancing nginx should conform to ingress spec kubernetes suite go src io kubernetes output dockerized go src io kubernetes test ingress go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test ingress go issues about this test specifically failed prestop should call prestop when killing a pod kubernetes suite go src io kubernetes output dockerized go src io kubernetes test pre stop go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test pre stop go issues about this test specifically failed dns horizontal autoscaling kube dns autoscaler should scale kube dns pods in both nonfaulty and faulty scenarios kubernetes suite go src io kubernetes output dockerized go src io kubernetes test dns autoscaling go expected error errstatus typemeta kind apiversion listmeta selflink resourceversion status failure message configmaps kube dns autoscaler not found reason notfound details name kube dns autoscaler group kind configmaps causes nil retryafterseconds code configmaps kube dns autoscaler not found not to have occurred go src io kubernetes output dockerized go src io kubernetes test dns autoscaling go issues about this test specifically failed horizontal pod autoscaling scale resource cpu replicationcontroller light should scale from pods to pod kubernetes suite go src io 
kubernetes output dockerized go src io kubernetes test horizontal pod autoscaling go feb timeout waiting for pods size to be go src io kubernetes output dockerized go src io kubernetes test common autoscaling utils go issues about this test specifically failed test go exit status issues about this test specifically failed nodeproblemdetector systemlogmonitor should generate node condition and events for corresponding errors kubernetes suite go src io kubernetes output dockerized go src io kubernetes test node problem detector go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test node problem detector go previous issues for this suite
| 1
|
175,355
| 13,547,620,118
|
IssuesEvent
|
2020-09-17 04:38:28
|
dotnet/roslyn
|
https://api.github.com/repos/dotnet/roslyn
|
closed
|
Running Editor tests prints out unnecessary outputs
|
Area-IDE Test
|
This is probably an issue with how loggers are removed or registered.
```
------ Test started: Assembly: Roslyn.Services.Editor2.UnitTests.dll ------
Output from Microsoft.CodeAnalysis.Editor.UnitTests.FindReferences.FindReferencesTests.TestNamedType_CaseSensitivity:
Looking for a cached skeleton assembly for (ProjectId, #4248efad-a058-4d06-a93a-545b5b339a8f - CSharpAssembly1) before taking the lock...
Build lock taken for (ProjectId, #4248efad-a058-4d06-a93a-545b5b339a8f - CSharpAssembly1)...
Looking to see if we already have a skeleton assembly for (ProjectId, #4248efad-a058-4d06-a93a-545b5b339a8f - CSharpAssembly1) before we build one...
Beginning to create a skeleton assembly for CSharpAssembly1...
Successfully emitted a skeleton assembly for CSharpAssembly1
Done trying to create a skeleton assembly for CSharpAssembly1
Successfully stored the metadata generated for (ProjectId, #4248efad-a058-4d06-a93a-545b5b339a8f - CSharpAssembly1)
Looking for a cached skeleton assembly for (ProjectId, #f249a074-7e09-4d8f-b269-ca35ee0f3f9d - CSharpAssembly1) before taking the lock...
Build lock taken for (ProjectId, #f249a074-7e09-4d8f-b269-ca35ee0f3f9d - CSharpAssembly1)...
Looking to see if we already have a skeleton assembly for (ProjectId, #f249a074-7e09-4d8f-b269-ca35ee0f3f9d - CSharpAssembly1) before we build one...
Beginning to create a skeleton assembly for CSharpAssembly1...
Successfully emitted a skeleton assembly for CSharpAssembly1
Done trying to create a skeleton assembly for CSharpAssembly1
Successfully stored the metadata generated for (ProjectId, #f249a074-7e09-4d8f-b269-ca35ee0f3f9d - CSharpAssembly1)
```
Tagging @jasonmalinowski
|
1.0
|
Running Editor tests prints out unnecessary outputs - This is probably an issue with how loggers are removed or registered.
```
------ Test started: Assembly: Roslyn.Services.Editor2.UnitTests.dll ------
Output from Microsoft.CodeAnalysis.Editor.UnitTests.FindReferences.FindReferencesTests.TestNamedType_CaseSensitivity:
Looking for a cached skeleton assembly for (ProjectId, #4248efad-a058-4d06-a93a-545b5b339a8f - CSharpAssembly1) before taking the lock...
Build lock taken for (ProjectId, #4248efad-a058-4d06-a93a-545b5b339a8f - CSharpAssembly1)...
Looking to see if we already have a skeleton assembly for (ProjectId, #4248efad-a058-4d06-a93a-545b5b339a8f - CSharpAssembly1) before we build one...
Beginning to create a skeleton assembly for CSharpAssembly1...
Successfully emitted a skeleton assembly for CSharpAssembly1
Done trying to create a skeleton assembly for CSharpAssembly1
Successfully stored the metadata generated for (ProjectId, #4248efad-a058-4d06-a93a-545b5b339a8f - CSharpAssembly1)
Looking for a cached skeleton assembly for (ProjectId, #f249a074-7e09-4d8f-b269-ca35ee0f3f9d - CSharpAssembly1) before taking the lock...
Build lock taken for (ProjectId, #f249a074-7e09-4d8f-b269-ca35ee0f3f9d - CSharpAssembly1)...
Looking to see if we already have a skeleton assembly for (ProjectId, #f249a074-7e09-4d8f-b269-ca35ee0f3f9d - CSharpAssembly1) before we build one...
Beginning to create a skeleton assembly for CSharpAssembly1...
Successfully emitted a skeleton assembly for CSharpAssembly1
Done trying to create a skeleton assembly for CSharpAssembly1
Successfully stored the metadata generated for (ProjectId, #f249a074-7e09-4d8f-b269-ca35ee0f3f9d - CSharpAssembly1)
```
Tagging @jasonmalinowski
|
test
|
running editor tests prints out unnecessary outputs this is probably an issue with how loggers are removed or registered test started assembly roslyn services unittests dll output from microsoft codeanalysis editor unittests findreferences findreferencestests testnamedtype casesensitivity looking for a cached skeleton assembly for projectid before taking the lock build lock taken for projectid looking to see if we already have a skeleton assembly for projectid before we build one beginning to create a skeleton assembly for successfully emitted a skeleton assembly for done trying to create a skeleton assembly for successfully stored the metadata generated for projectid looking for a cached skeleton assembly for projectid before taking the lock build lock taken for projectid looking to see if we already have a skeleton assembly for projectid before we build one beginning to create a skeleton assembly for successfully emitted a skeleton assembly for done trying to create a skeleton assembly for successfully stored the metadata generated for projectid tagging jasonmalinowski
| 1
|
9,154
| 2,615,134,066
|
IssuesEvent
|
2015-03-01 06:05:12
|
chrsmith/google-api-java-client
|
https://api.github.com/repos/chrsmith/google-api-java-client
|
closed
|
Maps Engine API library throws IllegalArgumentException when creating a GeoJsonGeometryCollection
|
auto-migrated Priority-Medium Type-Defect
|
```
Version of google-api-java-client? 1.18.0-rc
Java environment? OpenJDK 1.7.0
Describe the problem.
I'm having an issue when running code against the generated mapsengine library,
rev5 from Maven, 1.18.0-rc. I've attached the Jar I used, which was downloaded
from here:
http://search.maven.org/#artifactdetails%7Ccom.google.apis%7Cgoogle-api-services
-mapsengine%7Cv1-rev5-1.18.0-rc%7Cjar
This code:
import com.google.api.services.mapsengine.model.GeoJsonGeometryCollection;
class Repro {
public static void main(String[] args) {
GeoJsonGeometryCollection geoms = new GeoJsonGeometryCollection();
}
}
fails like so:
$ java -cp
google-api-services-mapsengine-v1-rev5-1.18.0-rc.jar:google-http-client-1.18.0-r
c.jar:. Repro
Exception in thread "main" java.lang.ExceptionInInitializerError
at Repro.main(Repro.java:6)
Caused by: java.lang.IllegalArgumentException: unable to create new instance of
class com.google.api.services.mapsengine.model.GeoJsonGeometry because it is
abstract and because it has no accessible default constructor
at com.google.api.client.util.Types.handleExceptionForNewInstance(Types.java:165)
at com.google.api.client.util.Types.newInstance(Types.java:120)
at com.google.api.client.util.Data.nullOf(Data.java:134)
at com.google.api.services.mapsengine.model.GeoJsonGeometryCollection.<clinit>(GeoJsonGeometryCollection.java:52)
... 1 more
Caused by: java.lang.InstantiationException:
com.google.api.services.mapsengine.model.GeoJsonGeometry
at java.lang.Class.newInstance(Class.java:355)
at com.google.api.client.util.Types.newInstance(Types.java:116)
... 3 more
The other geometry classes seem fine so far, it's just instantiating a
GeoJsonGeometryCollection that breaks.
```
Original issue reported on code.google.com by `m...@google.com` on 9 Apr 2014 at 12:38
|
1.0
|
Maps Engine API library throws IllegalArgumentException when creating a GeoJsonGeometryCollection - ```
Version of google-api-java-client? 1.18.0-rc
Java environment? OpenJDK 1.7.0
Describe the problem.
I'm having an issue when running code against the generated mapsengine library,
rev5 from Maven, 1.18.0-rc. I've attached the Jar I used, which was downloaded
from here:
http://search.maven.org/#artifactdetails%7Ccom.google.apis%7Cgoogle-api-services
-mapsengine%7Cv1-rev5-1.18.0-rc%7Cjar
This code:
import com.google.api.services.mapsengine.model.GeoJsonGeometryCollection;
class Repro {
public static void main(String[] args) {
GeoJsonGeometryCollection geoms = new GeoJsonGeometryCollection();
}
}
fails like so:
$ java -cp
google-api-services-mapsengine-v1-rev5-1.18.0-rc.jar:google-http-client-1.18.0-r
c.jar:. Repro
Exception in thread "main" java.lang.ExceptionInInitializerError
at Repro.main(Repro.java:6)
Caused by: java.lang.IllegalArgumentException: unable to create new instance of
class com.google.api.services.mapsengine.model.GeoJsonGeometry because it is
abstract and because it has no accessible default constructor
at com.google.api.client.util.Types.handleExceptionForNewInstance(Types.java:165)
at com.google.api.client.util.Types.newInstance(Types.java:120)
at com.google.api.client.util.Data.nullOf(Data.java:134)
at com.google.api.services.mapsengine.model.GeoJsonGeometryCollection.<clinit>(GeoJsonGeometryCollection.java:52)
... 1 more
Caused by: java.lang.InstantiationException:
com.google.api.services.mapsengine.model.GeoJsonGeometry
at java.lang.Class.newInstance(Class.java:355)
at com.google.api.client.util.Types.newInstance(Types.java:116)
... 3 more
The other geometry classes seem fine so far, it's just instantiating a
GeoJsonGeometryCollection that breaks.
```
Original issue reported on code.google.com by `m...@google.com` on 9 Apr 2014 at 12:38
|
non_test
|
maps engine api library throws illegalargumentexception when creating a geojsongeometrycollection version of google api java client rc java environment openjdk describe the problem i m having an issue when running code against the generated mapsengine library from maven rc i ve attached the jar i used which was downloaded from here mapsengine rc this code import com google api services mapsengine model geojsongeometrycollection class repro public static void main string args geojsongeometrycollection geoms new geojsongeometrycollection fails like so java cp google api services mapsengine rc jar google http client r c jar repro exception in thread main java lang exceptionininitializererror at repro main repro java caused by java lang illegalargumentexception unable to create new instance of class com google api services mapsengine model geojsongeometry because it is abstract and because it has no accessible default constructor at com google api client util types handleexceptionfornewinstance types java at com google api client util types newinstance types java at com google api client util data nullof data java at com google api services mapsengine model geojsongeometrycollection geojsongeometrycollection java more caused by java lang instantiationexception com google api services mapsengine model geojsongeometry at java lang class newinstance class java at com google api client util types newinstance types java more the other geometry classes seem fine so far it s just instantiating a geojsongeometrycollection that breaks original issue reported on code google com by m google com on apr at
| 0
|
45,502
| 7,187,924,567
|
IssuesEvent
|
2018-02-02 08:06:52
|
FriendsOfPHP/PHP-CS-Fixer
|
https://api.github.com/repos/FriendsOfPHP/PHP-CS-Fixer
|
closed
|
Fixers should be forced to have a rationale
|
documentation question
|
I would like to start a discussion about the need for every single fixers to be documented with their rationale. This should then appear in the `describe` command, and in the documentation (`README.rst`).
For now all fixers must have a summary and may have a description. The description usually explains in details _what_ it does, but it rarely says _why_ we should enable that particular fixer. This makes it difficult and/or time consuming to decide whether or not new fixers should be enabled during an upgrade.
A recent example of this is the `StaticLambdaFixer`. From the `describe` command it is extremely clear _what_ it does, but there not even a hint of _why_ it may be a good idea to consider it:
```sh
$ ./vendor/bin/php-cs-fixer describe static_lambda
Description of static_lambda rule.
Lambdas not (indirect) referencing `$this` must be declared `static`.
Fixer applying this rule is risky.
Risky when using "->bindTo" on lambdas without referencing to `$this`.
Fixing examples:
* Example #1.
---------- begin diff ----------
--- Original
+++ New
@@ -1,4 +1,4 @@
<?php
-$a = function () use ($b)
+$a = static function () use ($b)
{ echo $b;
};
----------- end diff -----------
```
This is even exacerbated by @localheinz who [kind of asked about it in the PR](https://github.com/FriendsOfPHP/PHP-CS-Fixer/pull/3187#issuecomment-340875577), but never gives an answer.
A counter-example of that is `ExplicitStringVariableFixer` whose description contains a rationale, and makes it much easier to make a decision about it:
```sh
$ ./vendor/bin/php-cs-fixer describe explicit_string_variable
Description of explicit_string_variable rule.
Converts implicit variables into explicit ones in double-quoted strings or heredoc syntax.
The reasoning behind this rule is the following:
- When there are two valid ways of doing the same thing, using both is confusing, there should be a coding standard to follow
- PHP manual marks `"$var"` syntax as implicit and `"${var}"` syntax as explicit: explicit code should always be preferred
- Explicit syntax allows word concatenation inside strings, e.g. `"${var}IsAVar"`, implicit doesn't
- Explicit syntax is easier to detect for IDE/editors and therefore has colors/hightlight with higher contrast, which is easier to read
Fixing examples:
* Example #1.
---------- begin diff ----------
--- Original
+++ New
@@ -1,4 +1,4 @@
<?php
-$a = "My name is $name !";
-$b = "I live in $state->country !";
-$c = "I have $farm[0] chickens !";
+$a = "My name is ${name} !";
+$b = "I live in {$state->country} !";
+$c = "I have {$farm[0]} chickens !";
----------- end diff -----------
```
Other projects, such as TSLint are doing that very well and have a short and clear rationale for every single rule ([see example](https://palantir.github.io/tslint/rules/no-any/)).
So I would like to suggest the following steps:
1. As of right now, reject all new fixers who don't include a rationale in their description
2. Enforce that requirement by code
1. Come up with a new signature for `FixerDefinition` that includes a mandatory rationale, separate from description
2. Update all fixers accordingly
3. Show that new info in documentation
3. As a bonus, generate one page per fixer to document them, which would include pretty much the output of `describe` command, because the single list in the README is getting very long
|
1.0
|
Fixers should be forced to have a rationale - I would like to start a discussion about the need for every single fixers to be documented with their rationale. This should then appear in the `describe` command, and in the documentation (`README.rst`).
For now all fixers must have a summary and may have a description. The description usually explains in details _what_ it does, but it rarely says _why_ we should enable that particular fixer. This makes it difficult and/or time consuming to decide whether or not new fixers should be enabled during an upgrade.
A recent example of this is the `StaticLambdaFixer`. From the `describe` command it is extremely clear _what_ it does, but there not even a hint of _why_ it may be a good idea to consider it:
```sh
$ ./vendor/bin/php-cs-fixer describe static_lambda
Description of static_lambda rule.
Lambdas not (indirect) referencing `$this` must be declared `static`.
Fixer applying this rule is risky.
Risky when using "->bindTo" on lambdas without referencing to `$this`.
Fixing examples:
* Example #1.
---------- begin diff ----------
--- Original
+++ New
@@ -1,4 +1,4 @@
<?php
-$a = function () use ($b)
+$a = static function () use ($b)
{ echo $b;
};
----------- end diff -----------
```
This is even exacerbated by @localheinz who [kind of asked about it in the PR](https://github.com/FriendsOfPHP/PHP-CS-Fixer/pull/3187#issuecomment-340875577), but never gives an answer.
A counter-example of that is `ExplicitStringVariableFixer` whose description contains a rationale, and makes it much easier to make a decision about it:
```sh
$ ./vendor/bin/php-cs-fixer describe explicit_string_variable
Description of explicit_string_variable rule.
Converts implicit variables into explicit ones in double-quoted strings or heredoc syntax.
The reasoning behind this rule is the following:
- When there are two valid ways of doing the same thing, using both is confusing, there should be a coding standard to follow
- PHP manual marks `"$var"` syntax as implicit and `"${var}"` syntax as explicit: explicit code should always be preferred
- Explicit syntax allows word concatenation inside strings, e.g. `"${var}IsAVar"`, implicit doesn't
- Explicit syntax is easier to detect for IDE/editors and therefore has colors/hightlight with higher contrast, which is easier to read
Fixing examples:
* Example #1.
---------- begin diff ----------
--- Original
+++ New
@@ -1,4 +1,4 @@
<?php
-$a = "My name is $name !";
-$b = "I live in $state->country !";
-$c = "I have $farm[0] chickens !";
+$a = "My name is ${name} !";
+$b = "I live in {$state->country} !";
+$c = "I have {$farm[0]} chickens !";
----------- end diff -----------
```
Other projects, such as TSLint are doing that very well and have a short and clear rationale for every single rule ([see example](https://palantir.github.io/tslint/rules/no-any/)).
So I would like to suggest the following steps:
1. As of right now, reject all new fixers who don't include a rationale in their description
2. Enforce that requirement by code
1. Come up with a new signature for `FixerDefinition` that includes a mandatory rationale, separate from description
2. Update all fixers accordingly
3. Show that new info in documentation
3. As a bonus, generate one page per fixer to document them, which would include pretty much the output of `describe` command, because the single list in the README is getting very long
|
non_test
|
Fixers should be forced to have a rationale. I would like to start a discussion about the need for every single fixer to be documented with its rationale, which should then appear in the `describe` command and in the documentation (README.rst for now). All fixers must have a summary and may have a description. The description usually explains in detail what the fixer does, but it rarely says why we should enable that particular fixer. This makes it difficult and/or time-consuming to decide whether or not new fixers should be enabled during an upgrade.

A recent example of this is the StaticLambdaFixer. From the `describe` command it is extremely clear what it does, but there is not even a hint of why it may be a good idea to consider it:

```sh
$ vendor/bin/php-cs-fixer describe static_lambda

Description of static_lambda rule.
Lambdas not (indirectly) referencing $this must be declared static.
Fixer applying this rule is risky.
Risky when using "->bindTo" on lambdas without referencing to $this.

Fixing examples:
 * Example #1.
   ---------- begin diff ----------
   --- Original
   +++ New
    <?php
   -$a = function () use ($b) { echo $b; };
   +$a = static function () use ($b) { echo $b; };
   ----------- end diff -----------
```

This is even exacerbated by localheinz who but never gives an answer.

A counter-example of that is ExplicitStringVariableFixer, whose description contains a rationale and makes it much easier to make a decision about it:

```sh
$ vendor/bin/php-cs-fixer describe explicit_string_variable

Description of explicit_string_variable rule.
Converts implicit variables into explicit ones in double-quoted strings or heredoc syntax.
The reasoning behind this rule is the following:
- When there are two valid ways of doing the same thing, using both is confusing;
  there should be a coding standard to follow.
- PHP manual marks "$var" syntax as implicit and "${var}" syntax as explicit;
  explicit code should always be preferred.
- Explicit syntax allows word concatenation inside strings,
  e.g. "${var}IsAVar", implicit doesn't.
- Explicit syntax is easier to detect for IDEs/editors and therefore has
  colors/highlight with higher contrast, which is easier to read.

Fixing examples:
 * Example #1.
   ---------- begin diff ----------
   --- Original
   +++ New
    <?php
   -$a = "My name is $name!";
   -$b = "I live in $state->country!";
   -$c = "I have $farm[0] chickens!";
   +$a = "My name is ${name}!";
   +$b = "I live in {$state->country}!";
   +$c = "I have {$farm[0]} chickens!";
   ----------- end diff -----------
```

Other projects such as TSLint are doing that very well and have a short and clear rationale for every single rule. So I would like to suggest the following steps:

1. As of right now, reject all new fixers that don't include a rationale in their description.
2. Enforce that requirement by code: come up with a new signature for `FixerDefinition` that includes a mandatory rationale, separate from the description.
3. Update all fixers accordingly.
4. Show that new info in the documentation. As a bonus, generate one page per fixer to document them (which would include pretty much the output of the `describe` command), because the single list in the README is getting very long.
| 0
|
32,160
| 4,755,000,622
|
IssuesEvent
|
2016-10-24 09:22:04
|
Alfresco/alfresco-ng2-components
|
https://api.github.com/repos/Alfresco/alfresco-ng2-components
|
closed
|
Task service level agreement - Analytics component
|
automated test required comp: analytics New Feature
|
Reproduce the Task service level agreement graph of Activiti.
|
1.0
|
Task service level agreement - Analytics component - Reproduce the Task service level agreement graph of Activiti.
|
test
|
task service level agreement analytics component reproduce the task service level agreement graph of activiti
| 1
|
224,189
| 17,670,093,333
|
IssuesEvent
|
2021-08-23 04:05:08
|
apple/servicetalk
|
https://api.github.com/repos/apple/servicetalk
|
reopened
|
ClientClosureRaceTest.testPipelinedPosts test failure
|
flaky tests
|
https://ci.servicetalk.io/job/servicetalk-java11-prb/379/testReport/junit/io.servicetalk.http.netty/ClientClosureRaceTest/testPipelinedPosts/
```
Error Message
java.util.concurrent.ExecutionException: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown
Stacktrace
java.util.concurrent.ExecutionException: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown
at io.servicetalk.concurrent.api.SourceToFuture.reportGet(SourceToFuture.java:121)
at io.servicetalk.concurrent.api.SourceToFuture.get(SourceToFuture.java:92)
at io.servicetalk.http.netty.ClientClosureRaceTest.runIterations(ClientClosureRaceTest.java:148)
at io.servicetalk.http.netty.ClientClosureRaceTest.testPipelinedPosts(ClientClosureRaceTest.java:137)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at io.servicetalk.concurrent.internal.ServiceTalkTestTimeout$TimeoutStatement$CallableStatement.call(ServiceTalkTestTimeout.java:171)
at io.servicetalk.concurrent.internal.ServiceTalkTestTimeout$TimeoutStatement$CallableStatement.call(ServiceTalkTestTimeout.java:163)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown
at io.netty.channel.AbstractChannel$AbstractUnsafe.shutdownOutput(AbstractChannel.java:646)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:954)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.flush0(AbstractEpollChannel.java:517)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:906)
at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1370)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:741)
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:727)
at io.netty.channel.DefaultChannelPipeline.flush(DefaultChannelPipeline.java:978)
at io.netty.channel.AbstractChannel.flush(AbstractChannel.java:253)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.lambda$new$0(Flush.java:68)
at io.servicetalk.transport.netty.internal.FlushOnEnd$1.writeTerminated(FlushOnEnd.java:31)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:125)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.api.CancellableThenSubscription.setSubscription(CancellableThenSubscription.java:108)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSubscribe(SingleFlatMapPublisher.java:75)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38)
at io.servicetalk.concurrent.api.FromArrayPublisher.doSubscribe(FromArrayPublisher.java:45)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36)
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68)
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435)
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSuccess(SingleFlatMapPublisher.java:95)
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onComplete(ReduceSingle.java:114)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.concurrent.api.FilterPublisher$1.onComplete(FilterPublisher.java:72)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.internal.ScalarValueSubscription.request(ScalarValueSubscription.java:71)
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122)
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onSubscribe(ReduceSingle.java:97)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41)
at io.servicetalk.concurrent.api.FilterPublisher$1.onSubscribe(FilterPublisher.java:49)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38)
at io.servicetalk.concurrent.api.FromSingleItemPublisher.doSubscribe(FromSingleItemPublisher.java:38)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36)
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48)
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414)
at io.servicetalk.concurrent.api.ReduceSingle.handleSubscribe(ReduceSingle.java:76)
at io.servicetalk.concurrent.api.Single.delegateSubscribe(Single.java:1498)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher.handleSubscribe(SingleFlatMapPublisher.java:44)
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48)
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68)
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435)
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206)
at io.servicetalk.concurrent.api.AbstractNoHandleSubscribePublisher.subscribe(AbstractNoHandleSubscribePublisher.java:52)
at io.servicetalk.transport.netty.internal.DefaultNettyConnection$2.handleSubscribe(DefaultNettyConnection.java:313)
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564)
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520)
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46)
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520)
at io.servicetalk.concurrent.api.ResumeCompletable.handleSubscribe(ResumeCompletable.java:44)
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520)
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46)
at io.servicetalk.concurrent.api.CompletableSubscribeShareContext.handleSubscribe(CompletableSubscribeShareContext.java:34)
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80)
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540)
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137)
at io.servicetalk.concurrent.api.CompletableDefer.handleSubscribe(CompletableDefer.java:47)
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564)
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80)
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540)
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137)
at io.servicetalk.concurrent.api.CompletableDefer.subscribe(CompletableDefer.java:52)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:235)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:222)
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.executeNextTask(SequentialTaskQueue.java:108)
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.postTaskTermination(SequentialTaskQueue.java:84)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.safePostTaskTermination(DefaultNettyPipelinedConnection.java:339)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.onComplete(DefaultNettyPipelinedConnection.java:311)
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41)
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49)
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41)
at io.servicetalk.concurrent.api.AfterFinallyCompletable$AfterFinallyCompletableSubscriber.onComplete(AfterFinallyCompletable.java:60)
at io.servicetalk.concurrent.api.ResumeCompletable$ResumeSubscriber.onComplete(ResumeCompletable.java:83)
at io.servicetalk.concurrent.api.BeforeFinallyCompletable$BeforeFinallyCompletableSubscriber.onComplete(BeforeFinallyCompletable.java:65)
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.terminateSubscriber(WriteStreamSubscriber.java:394)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.sourceTerminated(WriteStreamSubscriber.java:284)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.onComplete(WriteStreamSubscriber.java:179)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:130)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.api.CancellableThenSubscription.request(CancellableThenSubscription.java:62)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber$1.request(Flush.java:86)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.requestMoreIfRequired(WriteStreamSubscriber.java:245)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.lambda$onSubscribe$0(WriteStreamSubscriber.java:126)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:405)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:338)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
Caused by: io.netty.channel.unix.Errors$NativeIoException: syscall:writev(..) failed: Broken pipe
at io.netty.channel.unix.FileDescriptor.writeAddresses(..)(Unknown Source)
Standard Output
2019-06-28 03:04:22,252 Time-limited test [INFO ] ClientClosureRaceTest - Completed 141 requests
2019-06-28 03:04:22,429 Time-limited test [INFO ] ClientClosureRaceTest - Completed 121 requests
2019-06-28 03:04:22,436 servicetalk-global-io-executor-2-9 [WARN ] ChannelOutboundBuffer - Failed to mark a promise as failure because it has failed already: WriteStreamSubscriber$AllWritesPromise@4fe094b0(failure: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown), unnotified cause: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown
at io.netty.channel.AbstractChannel$AbstractUnsafe.shutdownOutput(AbstractChannel.java:646)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:954)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.flush0(AbstractEpollChannel.java:517)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:906)
at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1370)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:741)
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:727)
at io.netty.channel.DefaultChannelPipeline.flush(DefaultChannelPipeline.java:978)
at io.netty.channel.AbstractChannel.flush(AbstractChannel.java:253)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.lambda$new$0(Flush.java:68)
at io.servicetalk.transport.netty.internal.FlushOnEnd$1.writeTerminated(FlushOnEnd.java:31)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:125)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.api.CancellableThenSubscription.setSubscription(CancellableThenSubscription.java:108)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSubscribe(SingleFlatMapPublisher.java:75)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38)
at io.servicetalk.concurrent.api.FromArrayPublisher.doSubscribe(FromArrayPublisher.java:45)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36)
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68)
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435)
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSuccess(SingleFlatMapPublisher.java:95)
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onComplete(ReduceSingle.java:114)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.concurrent.api.FilterPublisher$1.onComplete(FilterPublisher.java:72)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.internal.ScalarValueSubscription.request(ScalarValueSubscription.java:71)
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122)
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onSubscribe(ReduceSingle.java:97)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41)
at io.servicetalk.concurrent.api.FilterPublisher$1.onSubscribe(FilterPublisher.java:49)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38)
at io.servicetalk.concurrent.api.FromSingleItemPublisher.doSubscribe(FromSingleItemPublisher.java:38)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36)
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48)
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414)
at io.servicetalk.concurrent.api.ReduceSingle.handleSubscribe(ReduceSingle.java:76)
at io.servicetalk.concurrent.api.Single.delegateSubscribe(Single.java:1498)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher.handleSubscribe(SingleFlatMapPublisher.java:44)
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48)
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68)
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435)
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206)
at io.servicetalk.concurrent.api.AbstractNoHandleSubscribePublisher.subscribe(AbstractNoHandleSubscribePublisher.java:52)
at io.servicetalk.transport.netty.internal.DefaultNettyConnection$2.handleSubscribe(DefaultNettyConnection.java:313)
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564)
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520)
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46)
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520)
at io.servicetalk.concurrent.api.ResumeCompletable.handleSubscribe(ResumeCompletable.java:44)
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520)
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46)
at io.servicetalk.concurrent.api.CompletableSubscribeShareContext.handleSubscribe(CompletableSubscribeShareContext.java:34)
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80)
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540)
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137)
at io.servicetalk.concurrent.api.CompletableDefer.handleSubscribe(CompletableDefer.java:47)
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564)
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80)
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540)
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137)
at io.servicetalk.concurrent.api.CompletableDefer.subscribe(CompletableDefer.java:52)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:235)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:222)
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.executeNextTask(SequentialTaskQueue.java:108)
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.postTaskTermination(SequentialTaskQueue.java:84)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.safePostTaskTermination(DefaultNettyPipelinedConnection.java:339)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.onComplete(DefaultNettyPipelinedConnection.java:311)
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41)
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49)
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41)
at io.servicetalk.concurrent.api.AfterFinallyCompletable$AfterFinallyCompletableSubscriber.onComplete(AfterFinallyCompletable.java:60)
at io.servicetalk.concurrent.api.ResumeCompletable$ResumeSubscriber.onComplete(ResumeCompletable.java:83)
at io.servicetalk.concurrent.api.BeforeFinallyCompletable$BeforeFinallyCompletableSubscriber.onComplete(BeforeFinallyCompletable.java:65)
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.terminateSubscriber(WriteStreamSubscriber.java:394)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.sourceTerminated(WriteStreamSubscriber.java:284)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.onComplete(WriteStreamSubscriber.java:179)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:130)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.api.CancellableThenSubscription.request(CancellableThenSubscription.java:62)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber$1.request(Flush.java:86)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.requestMoreIfRequired(WriteStreamSubscriber.java:245)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.lambda$onSubscribe$0(WriteStreamSubscriber.java:126)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:405)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:338)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.netty.channel.unix.Errors$NativeIoException: syscall:writev(..) failed: Broken pipe
at io.netty.channel.unix.FileDescriptor.writeAddresses(..)(Unknown Source)
io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown
at io.netty.channel.AbstractChannel$AbstractUnsafe.shutdownOutput(AbstractChannel.java:646) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:954) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.flush0(AbstractEpollChannel.java:517) [netty-transport-native-epoll-4.1.36.Final-linux-x86_64.jar:4.1.36.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:906) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1370) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:741) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:727) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.DefaultChannelPipeline.flush(DefaultChannelPipeline.java:978) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannel.flush(AbstractChannel.java:253) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.lambda$new$0(Flush.java:68) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.FlushOnEnd$1.writeTerminated(FlushOnEnd.java:31) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:125) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CancellableThenSubscription.setSubscription(CancellableThenSubscription.java:108) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSubscribe(SingleFlatMapPublisher.java:75) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher.doSubscribe(FromArrayPublisher.java:45) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSuccess(SingleFlatMapPublisher.java:95) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onComplete(ReduceSingle.java:114) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FilterPublisher$1.onComplete(FilterPublisher.java:72) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.internal.ScalarValueSubscription.request(ScalarValueSubscription.java:71) [servicetalk-concurrent-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122) [servicetalk-concurrent-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122) [servicetalk-concurrent-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onSubscribe(ReduceSingle.java:97) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FilterPublisher$1.onSubscribe(FilterPublisher.java:49) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromSingleItemPublisher.doSubscribe(FromSingleItemPublisher.java:38) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ReduceSingle.handleSubscribe(ReduceSingle.java:76) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Single.delegateSubscribe(Single.java:1498) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher.handleSubscribe(SingleFlatMapPublisher.java:44) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractNoHandleSubscribePublisher.subscribe(AbstractNoHandleSubscribePublisher.java:52) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyConnection$2.handleSubscribe(DefaultNettyConnection.java:313) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ResumeCompletable.handleSubscribe(ResumeCompletable.java:44) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CompletableSubscribeShareContext.handleSubscribe(CompletableSubscribeShareContext.java:34) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CompletableDefer.handleSubscribe(CompletableDefer.java:47) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CompletableDefer.subscribe(CompletableDefer.java:52) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:235) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:222) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.executeNextTask(SequentialTaskQueue.java:108) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.postTaskTermination(SequentialTaskQueue.java:84) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.safePostTaskTermination(DefaultNettyPipelinedConnection.java:339) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.onComplete(DefaultNettyPipelinedConnection.java:311) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AfterFinallyCompletable$AfterFinallyCompletableSubscriber.onComplete(AfterFinallyCompletable.java:60) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ResumeCompletable$ResumeSubscriber.onComplete(ResumeCompletable.java:83) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.BeforeFinallyCompletable$BeforeFinallyCompletableSubscriber.onComplete(BeforeFinallyCompletable.java:65) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.terminateSubscriber(WriteStreamSubscriber.java:394) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.sourceTerminated(WriteStreamSubscriber.java:284) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.onComplete(WriteStreamSubscriber.java:179) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:130) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CancellableThenSubscription.request(CancellableThenSubscription.java:62) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber$1.request(Flush.java:86) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122) [servicetalk-concurrent-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.requestMoreIfRequired(WriteStreamSubscriber.java:245) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.lambda$onSubscribe$0(WriteStreamSubscriber.java:126) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:405) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:338) [netty-transport-native-epoll-4.1.36.Final-linux-x86_64.jar:4.1.36.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: io.netty.channel.unix.Errors$NativeIoException: syscall:writev(..) failed: Broken pipe
at io.netty.channel.unix.FileDescriptor.writeAddresses(..)(Unknown Source) ~[netty-transport-native-unix-common-4.1.36.Final.jar:4.1.36.Final]
2019-06-28 03:04:22,445 servicetalk-global-io-executor-2-9 [WARN ] ChannelOutboundBuffer - Failed to mark a promise as failure because it has failed already: WriteStreamSubscriber$AllWritesPromise@4fe094b0(failure: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown), unnotified cause: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown
at io.netty.channel.AbstractChannel$AbstractUnsafe.shutdownOutput(AbstractChannel.java:646)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:954)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.flush0(AbstractEpollChannel.java:517)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:906)
at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1370)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:741)
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:727)
at io.netty.channel.DefaultChannelPipeline.flush(DefaultChannelPipeline.java:978)
at io.netty.channel.AbstractChannel.flush(AbstractChannel.java:253)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.lambda$new$0(Flush.java:68)
at io.servicetalk.transport.netty.internal.FlushOnEnd$1.writeTerminated(FlushOnEnd.java:31)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:125)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.api.CancellableThenSubscription.setSubscription(CancellableThenSubscription.java:108)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSubscribe(SingleFlatMapPublisher.java:75)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38)
at io.servicetalk.concurrent.api.FromArrayPublisher.doSubscribe(FromArrayPublisher.java:45)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36)
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68)
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435)
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSuccess(SingleFlatMapPublisher.java:95)
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onComplete(ReduceSingle.java:114)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.concurrent.api.FilterPublisher$1.onComplete(FilterPublisher.java:72)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.internal.ScalarValueSubscription.request(ScalarValueSubscription.java:71)
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122)
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onSubscribe(ReduceSingle.java:97)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41)
at io.servicetalk.concurrent.api.FilterPublisher$1.onSubscribe(FilterPublisher.java:49)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38)
at io.servicetalk.concurrent.api.FromSingleItemPublisher.doSubscribe(FromSingleItemPublisher.java:38)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36)
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48)
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414)
at io.servicetalk.concurrent.api.ReduceSingle.handleSubscribe(ReduceSingle.java:76)
at io.servicetalk.concurrent.api.Single.delegateSubscribe(Single.java:1498)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher.handleSubscribe(SingleFlatMapPublisher.java:44)
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48)
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68)
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435)
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206)
at io.servicetalk.concurrent.api.AbstractNoHandleSubscribePublisher.subscribe(AbstractNoHandleSubscribePublisher.java:52)
at io.servicetalk.transport.netty.internal.DefaultNettyConnection$2.handleSubscribe(DefaultNettyConnection.java:313)
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564)
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520)
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46)
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520)
at io.servicetalk.concurrent.api.ResumeCompletable.handleSubscribe(ResumeCompletable.java:44)
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520)
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46)
at io.servicetalk.concurrent.api.CompletableSubscribeShareContext.handleSubscribe(CompletableSubscribeShareContext.java:34)
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80)
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540)
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137)
at io.servicetalk.concurrent.api.CompletableDefer.handleSubscribe(CompletableDefer.java:47)
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564)
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80)
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540)
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137)
at io.servicetalk.concurrent.api.CompletableDefer.subscribe(CompletableDefer.java:52)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:235)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:222)
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.executeNextTask(SequentialTaskQueue.java:108)
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.postTaskTermination(SequentialTaskQueue.java:84)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.safePostTaskTermination(DefaultNettyPipelinedConnection.java:339)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.onComplete(DefaultNettyPipelinedConnection.java:311)
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41)
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49)
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41)
at io.servicetalk.concurrent.api.AfterFinallyCompletable$AfterFinallyCompletableSubscriber.onComplete(AfterFinallyCompletable.java:60)
at io.servicetalk.concurrent.api.ResumeCompletable$ResumeSubscriber.onComplete(ResumeCompletable.java:83)
at io.servicetalk.concurrent.api.BeforeFinallyCompletable$BeforeFinallyCompletableSubscriber.onComplete(BeforeFinallyCompletable.java:65)
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.terminateSubscriber(WriteStreamSubscriber.java:394)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.sourceTerminated(WriteStreamSubscriber.java:284)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.onComplete(WriteStreamSubscriber.java:179)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:130)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.api.CancellableThenSubscription.request(CancellableThenSubscription.java:62)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber$1.request(Flush.java:86)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.requestMoreIfRequired(WriteStreamSubscriber.java:245)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.lambda$onSubscribe$0(WriteStreamSubscriber.java:126)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:405)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:338)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.netty.channel.unix.Errors$NativeIoException: syscall:writev(..) failed: Broken pipe
at io.netty.channel.unix.FileDescriptor.writeAddresses(..)(Unknown Source)
io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown
at io.netty.channel.AbstractChannel$AbstractUnsafe.shutdownOutput(AbstractChannel.java:646) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:954) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.flush0(AbstractEpollChannel.java:517) [netty-transport-native-epoll-4.1.36.Final-linux-x86_64.jar:4.1.36.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:906) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1370) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:741) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:727) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.DefaultChannelPipeline.flush(DefaultChannelPipeline.java:978) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannel.flush(AbstractChannel.java:253) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.lambda$new$0(Flush.java:68) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.FlushOnEnd$1.writeTerminated(FlushOnEnd.java:31) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:125) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CancellableThenSubscription.setSubscription(CancellableThenSubscription.java:108) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSubscribe(SingleFlatMapPublisher.java:75) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher.doSubscribe(FromArrayPublisher.java:45) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSuccess(SingleFlatMapPublisher.java:95) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onComplete(ReduceSingle.java:114) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FilterPublisher$1.onComplete(FilterPublisher.java:72) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.internal.ScalarValueSubscription.request(ScalarValueSubscription.java:71) [servicetalk-concurrent-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122) [servicetalk-concurrent-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122) [servicetalk-concurrent-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onSubscribe(ReduceSingle.java:97) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FilterPublisher$1.onSubscribe(FilterPublisher.java:49) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromSingleItemPublisher.doSubscribe(FromSingleItemPublisher.java:38) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ReduceSingle.handleSubscribe(ReduceSingle.java:76) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Single.delegateSubscribe(Single.java:1498) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher.handleSubscribe(SingleFlatMapPublisher.java:44) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractNoHandleSubscribePublisher.subscribe(AbstractNoHandleSubscribePublisher.java:52) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyConnection$2.handleSubscribe(DefaultNettyConnection.java:313) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ResumeCompletable.handleSubscribe(ResumeCompletable.java:44) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CompletableSubscribeShareContext.handleSubscribe(CompletableSubscribeShareContext.java:34) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CompletableDefer.handleSubscribe(CompletableDefer.java:47) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CompletableDefer.subscribe(CompletableDefer.java:52) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:235) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:222) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.executeNextTask(SequentialTaskQueue.java:108) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.postTaskTermination(SequentialTaskQueue.java:84) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.safePostTaskTermination(DefaultNettyPipelinedConnection.java:339) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.onComplete(DefaultNettyPipelinedConnection.java:311) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AfterFinallyCompletable$AfterFinallyCompletableSubscriber.onComplete(AfterFinallyCompletable.java:60) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ResumeCompletable$ResumeSubscriber.onComplete(ResumeCompletable.java:83) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.BeforeFinallyCompletable$BeforeFinallyCompletableSubscriber.onComplete(BeforeFinallyCompletable.java:65) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.terminateSubscriber(WriteStreamSubscriber.java:394) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.sourceTerminated(WriteStreamSubscriber.java:284) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.onComplete(WriteStreamSubscriber.java:179) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:130) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CancellableThenSubscription.request(CancellableThenSubscription.java:62) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber$1.request(Flush.java:86) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122) [servicetalk-concurrent-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.requestMoreIfRequired(WriteStreamSubscriber.java:245) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.lambda$onSubscribe$0(WriteStreamSubscriber.java:126) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:405) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:338) [netty-transport-native-epoll-4.1.36.Final-linux-x86_64.jar:4.1.36.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: io.netty.channel.unix.Errors$NativeIoException: syscall:writev(..) failed: Broken pipe
at io.netty.channel.unix.FileDescriptor.writeAddresses(..)(Unknown Source) ~[netty-transport-native-unix-common-4.1.36.Final.jar:4.1.36.Final]
2019-06-28 03:04:22,486 Time-limited test [INFO ] ClientClosureRaceTest - Completed 12 requests
2019-06-28 03:04:24,356 Time-limited test [INFO ] ClientClosureRaceTest - Completed 1000 requests
```
|
1.0
|
ClientClosureRaceTest.testPipelinedPosts test failure - https://ci.servicetalk.io/job/servicetalk-java11-prb/379/testReport/junit/io.servicetalk.http.netty/ClientClosureRaceTest/testPipelinedPosts/
```
Error Message
java.util.concurrent.ExecutionException: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown
Stacktrace
java.util.concurrent.ExecutionException: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown
at io.servicetalk.concurrent.api.SourceToFuture.reportGet(SourceToFuture.java:121)
at io.servicetalk.concurrent.api.SourceToFuture.get(SourceToFuture.java:92)
at io.servicetalk.http.netty.ClientClosureRaceTest.runIterations(ClientClosureRaceTest.java:148)
at io.servicetalk.http.netty.ClientClosureRaceTest.testPipelinedPosts(ClientClosureRaceTest.java:137)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at io.servicetalk.concurrent.internal.ServiceTalkTestTimeout$TimeoutStatement$CallableStatement.call(ServiceTalkTestTimeout.java:171)
at io.servicetalk.concurrent.internal.ServiceTalkTestTimeout$TimeoutStatement$CallableStatement.call(ServiceTalkTestTimeout.java:163)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown
at io.netty.channel.AbstractChannel$AbstractUnsafe.shutdownOutput(AbstractChannel.java:646)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:954)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.flush0(AbstractEpollChannel.java:517)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:906)
at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1370)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:741)
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:727)
at io.netty.channel.DefaultChannelPipeline.flush(DefaultChannelPipeline.java:978)
at io.netty.channel.AbstractChannel.flush(AbstractChannel.java:253)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.lambda$new$0(Flush.java:68)
at io.servicetalk.transport.netty.internal.FlushOnEnd$1.writeTerminated(FlushOnEnd.java:31)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:125)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.api.CancellableThenSubscription.setSubscription(CancellableThenSubscription.java:108)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSubscribe(SingleFlatMapPublisher.java:75)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38)
at io.servicetalk.concurrent.api.FromArrayPublisher.doSubscribe(FromArrayPublisher.java:45)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36)
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68)
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435)
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSuccess(SingleFlatMapPublisher.java:95)
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onComplete(ReduceSingle.java:114)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.concurrent.api.FilterPublisher$1.onComplete(FilterPublisher.java:72)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.internal.ScalarValueSubscription.request(ScalarValueSubscription.java:71)
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122)
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onSubscribe(ReduceSingle.java:97)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41)
at io.servicetalk.concurrent.api.FilterPublisher$1.onSubscribe(FilterPublisher.java:49)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38)
at io.servicetalk.concurrent.api.FromSingleItemPublisher.doSubscribe(FromSingleItemPublisher.java:38)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36)
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48)
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414)
at io.servicetalk.concurrent.api.ReduceSingle.handleSubscribe(ReduceSingle.java:76)
at io.servicetalk.concurrent.api.Single.delegateSubscribe(Single.java:1498)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher.handleSubscribe(SingleFlatMapPublisher.java:44)
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48)
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68)
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435)
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206)
at io.servicetalk.concurrent.api.AbstractNoHandleSubscribePublisher.subscribe(AbstractNoHandleSubscribePublisher.java:52)
at io.servicetalk.transport.netty.internal.DefaultNettyConnection$2.handleSubscribe(DefaultNettyConnection.java:313)
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564)
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520)
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46)
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520)
at io.servicetalk.concurrent.api.ResumeCompletable.handleSubscribe(ResumeCompletable.java:44)
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520)
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46)
at io.servicetalk.concurrent.api.CompletableSubscribeShareContext.handleSubscribe(CompletableSubscribeShareContext.java:34)
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80)
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540)
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137)
at io.servicetalk.concurrent.api.CompletableDefer.handleSubscribe(CompletableDefer.java:47)
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564)
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80)
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540)
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137)
at io.servicetalk.concurrent.api.CompletableDefer.subscribe(CompletableDefer.java:52)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:235)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:222)
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.executeNextTask(SequentialTaskQueue.java:108)
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.postTaskTermination(SequentialTaskQueue.java:84)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.safePostTaskTermination(DefaultNettyPipelinedConnection.java:339)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.onComplete(DefaultNettyPipelinedConnection.java:311)
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41)
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49)
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41)
at io.servicetalk.concurrent.api.AfterFinallyCompletable$AfterFinallyCompletableSubscriber.onComplete(AfterFinallyCompletable.java:60)
at io.servicetalk.concurrent.api.ResumeCompletable$ResumeSubscriber.onComplete(ResumeCompletable.java:83)
at io.servicetalk.concurrent.api.BeforeFinallyCompletable$BeforeFinallyCompletableSubscriber.onComplete(BeforeFinallyCompletable.java:65)
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.terminateSubscriber(WriteStreamSubscriber.java:394)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.sourceTerminated(WriteStreamSubscriber.java:284)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.onComplete(WriteStreamSubscriber.java:179)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:130)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.api.CancellableThenSubscription.request(CancellableThenSubscription.java:62)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber$1.request(Flush.java:86)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.requestMoreIfRequired(WriteStreamSubscriber.java:245)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.lambda$onSubscribe$0(WriteStreamSubscriber.java:126)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:405)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:338)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
Caused by: io.netty.channel.unix.Errors$NativeIoException: syscall:writev(..) failed: Broken pipe
at io.netty.channel.unix.FileDescriptor.writeAddresses(..)(Unknown Source)
Standard Output
2019-06-28 03:04:22,252 Time-limited test [INFO ] ClientClosureRaceTest - Completed 141 requests
2019-06-28 03:04:22,429 Time-limited test [INFO ] ClientClosureRaceTest - Completed 121 requests
2019-06-28 03:04:22,436 servicetalk-global-io-executor-2-9 [WARN ] ChannelOutboundBuffer - Failed to mark a promise as failure because it has failed already: WriteStreamSubscriber$AllWritesPromise@4fe094b0(failure: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown), unnotified cause: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown
at io.netty.channel.AbstractChannel$AbstractUnsafe.shutdownOutput(AbstractChannel.java:646)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:954)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.flush0(AbstractEpollChannel.java:517)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:906)
at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1370)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:741)
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:727)
at io.netty.channel.DefaultChannelPipeline.flush(DefaultChannelPipeline.java:978)
at io.netty.channel.AbstractChannel.flush(AbstractChannel.java:253)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.lambda$new$0(Flush.java:68)
at io.servicetalk.transport.netty.internal.FlushOnEnd$1.writeTerminated(FlushOnEnd.java:31)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:125)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.api.CancellableThenSubscription.setSubscription(CancellableThenSubscription.java:108)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSubscribe(SingleFlatMapPublisher.java:75)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38)
at io.servicetalk.concurrent.api.FromArrayPublisher.doSubscribe(FromArrayPublisher.java:45)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36)
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68)
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435)
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSuccess(SingleFlatMapPublisher.java:95)
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onComplete(ReduceSingle.java:114)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.concurrent.api.FilterPublisher$1.onComplete(FilterPublisher.java:72)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.internal.ScalarValueSubscription.request(ScalarValueSubscription.java:71)
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122)
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onSubscribe(ReduceSingle.java:97)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41)
at io.servicetalk.concurrent.api.FilterPublisher$1.onSubscribe(FilterPublisher.java:49)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38)
at io.servicetalk.concurrent.api.FromSingleItemPublisher.doSubscribe(FromSingleItemPublisher.java:38)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36)
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48)
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414)
at io.servicetalk.concurrent.api.ReduceSingle.handleSubscribe(ReduceSingle.java:76)
at io.servicetalk.concurrent.api.Single.delegateSubscribe(Single.java:1498)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher.handleSubscribe(SingleFlatMapPublisher.java:44)
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48)
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68)
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435)
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206)
at io.servicetalk.concurrent.api.AbstractNoHandleSubscribePublisher.subscribe(AbstractNoHandleSubscribePublisher.java:52)
at io.servicetalk.transport.netty.internal.DefaultNettyConnection$2.handleSubscribe(DefaultNettyConnection.java:313)
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564)
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520)
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46)
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520)
at io.servicetalk.concurrent.api.ResumeCompletable.handleSubscribe(ResumeCompletable.java:44)
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520)
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46)
at io.servicetalk.concurrent.api.CompletableSubscribeShareContext.handleSubscribe(CompletableSubscribeShareContext.java:34)
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80)
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540)
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137)
at io.servicetalk.concurrent.api.CompletableDefer.handleSubscribe(CompletableDefer.java:47)
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564)
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80)
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540)
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137)
at io.servicetalk.concurrent.api.CompletableDefer.subscribe(CompletableDefer.java:52)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:235)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:222)
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.executeNextTask(SequentialTaskQueue.java:108)
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.postTaskTermination(SequentialTaskQueue.java:84)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.safePostTaskTermination(DefaultNettyPipelinedConnection.java:339)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.onComplete(DefaultNettyPipelinedConnection.java:311)
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41)
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49)
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41)
at io.servicetalk.concurrent.api.AfterFinallyCompletable$AfterFinallyCompletableSubscriber.onComplete(AfterFinallyCompletable.java:60)
at io.servicetalk.concurrent.api.ResumeCompletable$ResumeSubscriber.onComplete(ResumeCompletable.java:83)
at io.servicetalk.concurrent.api.BeforeFinallyCompletable$BeforeFinallyCompletableSubscriber.onComplete(BeforeFinallyCompletable.java:65)
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.terminateSubscriber(WriteStreamSubscriber.java:394)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.sourceTerminated(WriteStreamSubscriber.java:284)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.onComplete(WriteStreamSubscriber.java:179)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:130)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.api.CancellableThenSubscription.request(CancellableThenSubscription.java:62)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber$1.request(Flush.java:86)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.requestMoreIfRequired(WriteStreamSubscriber.java:245)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.lambda$onSubscribe$0(WriteStreamSubscriber.java:126)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:405)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:338)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.netty.channel.unix.Errors$NativeIoException: syscall:writev(..) failed: Broken pipe
at io.netty.channel.unix.FileDescriptor.writeAddresses(..)(Unknown Source)
io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown
at io.netty.channel.AbstractChannel$AbstractUnsafe.shutdownOutput(AbstractChannel.java:646) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:954) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.flush0(AbstractEpollChannel.java:517) [netty-transport-native-epoll-4.1.36.Final-linux-x86_64.jar:4.1.36.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:906) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1370) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:741) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:727) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.DefaultChannelPipeline.flush(DefaultChannelPipeline.java:978) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannel.flush(AbstractChannel.java:253) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.lambda$new$0(Flush.java:68) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.FlushOnEnd$1.writeTerminated(FlushOnEnd.java:31) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:125) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CancellableThenSubscription.setSubscription(CancellableThenSubscription.java:108) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSubscribe(SingleFlatMapPublisher.java:75) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher.doSubscribe(FromArrayPublisher.java:45) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSuccess(SingleFlatMapPublisher.java:95) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onComplete(ReduceSingle.java:114) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FilterPublisher$1.onComplete(FilterPublisher.java:72) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.internal.ScalarValueSubscription.request(ScalarValueSubscription.java:71) [servicetalk-concurrent-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122) [servicetalk-concurrent-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122) [servicetalk-concurrent-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onSubscribe(ReduceSingle.java:97) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FilterPublisher$1.onSubscribe(FilterPublisher.java:49) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromSingleItemPublisher.doSubscribe(FromSingleItemPublisher.java:38) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ReduceSingle.handleSubscribe(ReduceSingle.java:76) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Single.delegateSubscribe(Single.java:1498) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher.handleSubscribe(SingleFlatMapPublisher.java:44) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractNoHandleSubscribePublisher.subscribe(AbstractNoHandleSubscribePublisher.java:52) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyConnection$2.handleSubscribe(DefaultNettyConnection.java:313) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ResumeCompletable.handleSubscribe(ResumeCompletable.java:44) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CompletableSubscribeShareContext.handleSubscribe(CompletableSubscribeShareContext.java:34) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CompletableDefer.handleSubscribe(CompletableDefer.java:47) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CompletableDefer.subscribe(CompletableDefer.java:52) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:235) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:222) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.executeNextTask(SequentialTaskQueue.java:108) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.postTaskTermination(SequentialTaskQueue.java:84) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.safePostTaskTermination(DefaultNettyPipelinedConnection.java:339) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.onComplete(DefaultNettyPipelinedConnection.java:311) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AfterFinallyCompletable$AfterFinallyCompletableSubscriber.onComplete(AfterFinallyCompletable.java:60) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ResumeCompletable$ResumeSubscriber.onComplete(ResumeCompletable.java:83) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.BeforeFinallyCompletable$BeforeFinallyCompletableSubscriber.onComplete(BeforeFinallyCompletable.java:65) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.terminateSubscriber(WriteStreamSubscriber.java:394) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.sourceTerminated(WriteStreamSubscriber.java:284) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.onComplete(WriteStreamSubscriber.java:179) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:130) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CancellableThenSubscription.request(CancellableThenSubscription.java:62) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber$1.request(Flush.java:86) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122) [servicetalk-concurrent-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.requestMoreIfRequired(WriteStreamSubscriber.java:245) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.lambda$onSubscribe$0(WriteStreamSubscriber.java:126) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:405) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:338) [netty-transport-native-epoll-4.1.36.Final-linux-x86_64.jar:4.1.36.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: io.netty.channel.unix.Errors$NativeIoException: syscall:writev(..) failed: Broken pipe
at io.netty.channel.unix.FileDescriptor.writeAddresses(..)(Unknown Source) ~[netty-transport-native-unix-common-4.1.36.Final.jar:4.1.36.Final]
2019-06-28 03:04:22,445 servicetalk-global-io-executor-2-9 [WARN ] ChannelOutboundBuffer - Failed to mark a promise as failure because it has failed already: WriteStreamSubscriber$AllWritesPromise@4fe094b0(failure: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown), unnotified cause: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown
at io.netty.channel.AbstractChannel$AbstractUnsafe.shutdownOutput(AbstractChannel.java:646)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:954)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.flush0(AbstractEpollChannel.java:517)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:906)
at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1370)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749)
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:741)
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:727)
at io.netty.channel.DefaultChannelPipeline.flush(DefaultChannelPipeline.java:978)
at io.netty.channel.AbstractChannel.flush(AbstractChannel.java:253)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.lambda$new$0(Flush.java:68)
at io.servicetalk.transport.netty.internal.FlushOnEnd$1.writeTerminated(FlushOnEnd.java:31)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:125)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.api.CancellableThenSubscription.setSubscription(CancellableThenSubscription.java:108)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSubscribe(SingleFlatMapPublisher.java:75)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38)
at io.servicetalk.concurrent.api.FromArrayPublisher.doSubscribe(FromArrayPublisher.java:45)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36)
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68)
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435)
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSuccess(SingleFlatMapPublisher.java:95)
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onComplete(ReduceSingle.java:114)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.concurrent.api.FilterPublisher$1.onComplete(FilterPublisher.java:72)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.internal.ScalarValueSubscription.request(ScalarValueSubscription.java:71)
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122)
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onSubscribe(ReduceSingle.java:97)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41)
at io.servicetalk.concurrent.api.FilterPublisher$1.onSubscribe(FilterPublisher.java:49)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38)
at io.servicetalk.concurrent.api.FromSingleItemPublisher.doSubscribe(FromSingleItemPublisher.java:38)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36)
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48)
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414)
at io.servicetalk.concurrent.api.ReduceSingle.handleSubscribe(ReduceSingle.java:76)
at io.servicetalk.concurrent.api.Single.delegateSubscribe(Single.java:1498)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher.handleSubscribe(SingleFlatMapPublisher.java:44)
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414)
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48)
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68)
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435)
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206)
at io.servicetalk.concurrent.api.AbstractNoHandleSubscribePublisher.subscribe(AbstractNoHandleSubscribePublisher.java:52)
at io.servicetalk.transport.netty.internal.DefaultNettyConnection$2.handleSubscribe(DefaultNettyConnection.java:313)
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564)
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520)
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46)
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520)
at io.servicetalk.concurrent.api.ResumeCompletable.handleSubscribe(ResumeCompletable.java:44)
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520)
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46)
at io.servicetalk.concurrent.api.CompletableSubscribeShareContext.handleSubscribe(CompletableSubscribeShareContext.java:34)
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80)
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540)
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137)
at io.servicetalk.concurrent.api.CompletableDefer.handleSubscribe(CompletableDefer.java:47)
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564)
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541)
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37)
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80)
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540)
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137)
at io.servicetalk.concurrent.api.CompletableDefer.subscribe(CompletableDefer.java:52)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:235)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:222)
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.executeNextTask(SequentialTaskQueue.java:108)
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.postTaskTermination(SequentialTaskQueue.java:84)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.safePostTaskTermination(DefaultNettyPipelinedConnection.java:339)
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.onComplete(DefaultNettyPipelinedConnection.java:311)
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41)
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49)
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41)
at io.servicetalk.concurrent.api.AfterFinallyCompletable$AfterFinallyCompletableSubscriber.onComplete(AfterFinallyCompletable.java:60)
at io.servicetalk.concurrent.api.ResumeCompletable$ResumeSubscriber.onComplete(ResumeCompletable.java:83)
at io.servicetalk.concurrent.api.BeforeFinallyCompletable$BeforeFinallyCompletableSubscriber.onComplete(BeforeFinallyCompletable.java:65)
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.terminateSubscriber(WriteStreamSubscriber.java:394)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.sourceTerminated(WriteStreamSubscriber.java:284)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.onComplete(WriteStreamSubscriber.java:179)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:130)
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56)
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113)
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.api.CancellableThenSubscription.request(CancellableThenSubscription.java:62)
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber$1.request(Flush.java:86)
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43)
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.requestMoreIfRequired(WriteStreamSubscriber.java:245)
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.lambda$onSubscribe$0(WriteStreamSubscriber.java:126)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:405)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:338)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.netty.channel.unix.Errors$NativeIoException: syscall:writev(..) failed: Broken pipe
at io.netty.channel.unix.FileDescriptor.writeAddresses(..)(Unknown Source)
io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown
at io.netty.channel.AbstractChannel$AbstractUnsafe.shutdownOutput(AbstractChannel.java:646) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:954) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.flush0(AbstractEpollChannel.java:517) [netty-transport-native-epoll-4.1.36.Final-linux-x86_64.jar:4.1.36.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:906) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1370) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:749) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:741) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:727) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.DefaultChannelPipeline.flush(DefaultChannelPipeline.java:978) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.AbstractChannel.flush(AbstractChannel.java:253) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.lambda$new$0(Flush.java:68) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.FlushOnEnd$1.writeTerminated(FlushOnEnd.java:31) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:125) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CancellableThenSubscription.setSubscription(CancellableThenSubscription.java:108) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSubscribe(SingleFlatMapPublisher.java:75) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher.doSubscribe(FromArrayPublisher.java:45) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onSuccess(SingleFlatMapPublisher.java:95) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onComplete(ReduceSingle.java:114) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FilterPublisher$1.onComplete(FilterPublisher.java:72) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.internal.ScalarValueSubscription.request(ScalarValueSubscription.java:71) [servicetalk-concurrent-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122) [servicetalk-concurrent-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122) [servicetalk-concurrent-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ReduceSingle$ReduceSubscriber.onSubscribe(ReduceSingle.java:97) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onSubscribe(ContextPreservingSubscriptionSubscriber.java:41) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FilterPublisher$1.onSubscribe(FilterPublisher.java:49) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onSubscribe(ContextPreservingSubscriber.java:38) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromSingleItemPublisher.doSubscribe(FromSingleItemPublisher.java:38) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousPublisher.handleSubscribe(AbstractSynchronousPublisher.java:36) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ReduceSingle.handleSubscribe(ReduceSingle.java:76) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Single.delegateSubscribe(Single.java:1498) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher.handleSubscribe(SingleFlatMapPublisher.java:44) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.delegateSubscribe(Publisher.java:2414) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousPublisherOperator.handleSubscribe(AbstractSynchronousPublisherOperator.java:48) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.lambda$subscribeWithContext$10(Publisher.java:2436) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:68) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.subscribeWithContext(Publisher.java:2435) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Publisher.subscribeInternal(Publisher.java:2206) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractNoHandleSubscribePublisher.subscribe(AbstractNoHandleSubscribePublisher.java:52) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyConnection$2.handleSubscribe(DefaultNettyConnection.java:313) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ResumeCompletable.handleSubscribe(ResumeCompletable.java:44) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.delegateSubscribe(Completable.java:1520) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AbstractSynchronousCompletableOperator.handleSubscribe(AbstractSynchronousCompletableOperator.java:46) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CompletableSubscribeShareContext.handleSubscribe(CompletableSubscribeShareContext.java:34) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CompletableDefer.handleSubscribe(CompletableDefer.java:47) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.handleSubscribe(Completable.java:1564) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.lambda$subscribeWithContext$0(Completable.java:1541) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingConsumer.accept(ContextPreservingConsumer.java:37) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.NoopOffloader.offloadSubscribe(NoopOffloader.java:80) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.subscribeWithContext(Completable.java:1540) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.Completable.subscribeInternal(Completable.java:1137) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CompletableDefer.subscribe(CompletableDefer.java:52) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:235) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteQueue.execute(DefaultNettyPipelinedConnection.java:222) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.executeNextTask(SequentialTaskQueue.java:108) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.SequentialTaskQueue.postTaskTermination(SequentialTaskQueue.java:84) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.safePostTaskTermination(DefaultNettyPipelinedConnection.java:339) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.DefaultNettyPipelinedConnection$WriteSourceSubscriber.onComplete(DefaultNettyPipelinedConnection.java:311) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingCancellableCompletableSubscriber.onComplete(ContextPreservingCancellableCompletableSubscriber.java:41) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.AfterFinallyCompletable$AfterFinallyCompletableSubscriber.onComplete(AfterFinallyCompletable.java:60) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ResumeCompletable$ResumeSubscriber.onComplete(ResumeCompletable.java:83) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.BeforeFinallyCompletable$BeforeFinallyCompletableSubscriber.onComplete(BeforeFinallyCompletable.java:65) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingCompletableSubscriber.onComplete(ContextPreservingCompletableSubscriber.java:49) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.terminateSubscriber(WriteStreamSubscriber.java:394) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber$AllWritesPromise.sourceTerminated(WriteStreamSubscriber.java:284) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.onComplete(WriteStreamSubscriber.java:179) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber.onComplete(Flush.java:130) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.SingleFlatMapPublisher$SubscriberImpl.onComplete(SingleFlatMapPublisher.java:111) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriptionSubscriber.onComplete(ContextPreservingSubscriptionSubscriber.java:56) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscriber.onComplete(ContextPreservingSubscriber.java:71) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.sendComplete(FromArrayPublisher.java:113) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.FromArrayPublisher$FromArraySubscription.request(FromArrayPublisher.java:89) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.CancellableThenSubscription.request(CancellableThenSubscription.java:62) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.Flush$FlushSubscriber$1.request(Flush.java:86) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.api.ContextPreservingSubscription.request(ContextPreservingSubscription.java:43) [servicetalk-concurrent-api-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.concurrent.internal.ConcurrentSubscription.request(ConcurrentSubscription.java:122) [servicetalk-concurrent-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.requestMoreIfRequired(WriteStreamSubscriber.java:245) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.servicetalk.transport.netty.internal.WriteStreamSubscriber.lambda$onSubscribe$0(WriteStreamSubscriber.java:126) [servicetalk-transport-netty-internal-0.16.0-SNAPSHOT.jar:0.16.0-SNAPSHOT]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:405) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:338) [netty-transport-native-epoll-4.1.36.Final-linux-x86_64.jar:4.1.36.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.36.Final.jar:4.1.36.Final]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: io.netty.channel.unix.Errors$NativeIoException: syscall:writev(..) failed: Broken pipe
at io.netty.channel.unix.FileDescriptor.writeAddresses(..)(Unknown Source) ~[netty-transport-native-unix-common-4.1.36.Final.jar:4.1.36.Final]
2019-06-28 03:04:22,486 Time-limited test [INFO ] ClientClosureRaceTest - Completed 12 requests
2019-06-28 03:04:24,356 Time-limited test [INFO ] ClientClosureRaceTest - Completed 1000 requests
```
servicetalk transport netty internal writestreamsubscriber requestmoreifrequired writestreamsubscriber java at io servicetalk transport netty internal writestreamsubscriber lambda onsubscribe writestreamsubscriber java at io netty util concurrent abstracteventexecutor safeexecute abstracteventexecutor java at io netty util concurrent singlethreadeventexecutor runalltasks singlethreadeventexecutor java at io netty channel epoll epolleventloop run epolleventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at io netty util internal threadexecutormap run threadexecutormap java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java at java base java lang thread run thread java caused by io netty channel unix errors nativeioexception syscall writev failed broken pipe at io netty channel unix filedescriptor writeaddresses unknown source io netty channel socket channeloutputshutdownexception channel output shutdown at io netty channel abstractchannel abstractunsafe shutdownoutput abstractchannel java at io netty channel abstractchannel abstractunsafe abstractchannel java at io netty channel epoll abstractepollchannel abstractepollunsafe abstractepollchannel java at io netty channel abstractchannel abstractunsafe flush abstractchannel java at io netty channel defaultchannelpipeline headcontext flush defaultchannelpipeline java at io netty channel abstractchannelhandlercontext abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokeflush abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext flush abstractchannelhandlercontext java at io netty channel defaultchannelpipeline flush defaultchannelpipeline java at io netty channel abstractchannel flush abstractchannel java at io servicetalk transport netty internal flush flushsubscriber lambda new flush java at io servicetalk transport netty internal flushonend writeterminated 
flushonend java at io servicetalk transport netty internal flush flushsubscriber oncomplete flush java at io servicetalk concurrent api singleflatmappublisher subscriberimpl oncomplete singleflatmappublisher java at io servicetalk concurrent api contextpreservingsubscriber oncomplete contextpreservingsubscriber java at io servicetalk concurrent api contextpreservingsubscriptionsubscriber oncomplete contextpreservingsubscriptionsubscriber java at io servicetalk concurrent api contextpreservingsubscriber oncomplete contextpreservingsubscriber java at io servicetalk concurrent api fromarraypublisher fromarraysubscription sendcomplete fromarraypublisher java at io servicetalk concurrent api fromarraypublisher fromarraysubscription request fromarraypublisher java at io servicetalk concurrent api contextpreservingsubscription request contextpreservingsubscription java at io servicetalk concurrent api cancellablethensubscription setsubscription cancellablethensubscription java at io servicetalk concurrent api singleflatmappublisher subscriberimpl onsubscribe singleflatmappublisher java at io servicetalk concurrent api contextpreservingsubscriber onsubscribe contextpreservingsubscriber java at io servicetalk concurrent api contextpreservingsubscriptionsubscriber onsubscribe contextpreservingsubscriptionsubscriber java at io servicetalk concurrent api contextpreservingsubscriber onsubscribe contextpreservingsubscriber java at io servicetalk concurrent api fromarraypublisher dosubscribe fromarraypublisher java at io servicetalk concurrent api abstractsynchronouspublisher handlesubscribe abstractsynchronouspublisher java at io servicetalk concurrent api publisher lambda subscribewithcontext publisher java at io servicetalk concurrent api contextpreservingconsumer accept contextpreservingconsumer java at io servicetalk concurrent api noopoffloader offloadsubscribe noopoffloader java at io servicetalk concurrent api publisher subscribewithcontext publisher java at io 
servicetalk concurrent api publisher subscribeinternal publisher java at io servicetalk concurrent api singleflatmappublisher subscriberimpl onsuccess singleflatmappublisher java at io servicetalk concurrent api reducesingle reducesubscriber oncomplete reducesingle java at io servicetalk concurrent api contextpreservingsubscriptionsubscriber oncomplete contextpreservingsubscriptionsubscriber java at io servicetalk concurrent api filterpublisher oncomplete filterpublisher java at io servicetalk concurrent api contextpreservingsubscriber oncomplete contextpreservingsubscriber java at io servicetalk concurrent internal scalarvaluesubscription request scalarvaluesubscription java at io servicetalk concurrent internal concurrentsubscription request concurrentsubscription java at io servicetalk concurrent api contextpreservingsubscription request contextpreservingsubscription java at io servicetalk concurrent internal concurrentsubscription request concurrentsubscription java at io servicetalk concurrent api reducesingle reducesubscriber onsubscribe reducesingle java at io servicetalk concurrent api contextpreservingsubscriptionsubscriber onsubscribe contextpreservingsubscriptionsubscriber java at io servicetalk concurrent api filterpublisher onsubscribe filterpublisher java at io servicetalk concurrent api contextpreservingsubscriber onsubscribe contextpreservingsubscriber java at io servicetalk concurrent api fromsingleitempublisher dosubscribe fromsingleitempublisher java at io servicetalk concurrent api abstractsynchronouspublisher handlesubscribe abstractsynchronouspublisher java at io servicetalk concurrent api publisher delegatesubscribe publisher java at io servicetalk concurrent api abstractsynchronouspublisheroperator handlesubscribe abstractsynchronouspublisheroperator java at io servicetalk concurrent api publisher delegatesubscribe publisher java at io servicetalk concurrent api reducesingle handlesubscribe reducesingle java at io servicetalk concurrent api 
single delegatesubscribe single java at io servicetalk concurrent api singleflatmappublisher handlesubscribe singleflatmappublisher java at io servicetalk concurrent api publisher delegatesubscribe publisher java at io servicetalk concurrent api abstractsynchronouspublisheroperator handlesubscribe abstractsynchronouspublisheroperator java at io servicetalk concurrent api publisher lambda subscribewithcontext publisher java at io servicetalk concurrent api contextpreservingconsumer accept contextpreservingconsumer java at io servicetalk concurrent api noopoffloader offloadsubscribe noopoffloader java at io servicetalk concurrent api publisher subscribewithcontext publisher java at io servicetalk concurrent api publisher subscribeinternal publisher java at io servicetalk concurrent api abstractnohandlesubscribepublisher subscribe abstractnohandlesubscribepublisher java at io servicetalk transport netty internal defaultnettyconnection handlesubscribe defaultnettyconnection java at io servicetalk concurrent api completable handlesubscribe completable java at io servicetalk concurrent api completable delegatesubscribe completable java at io servicetalk concurrent api abstractsynchronouscompletableoperator handlesubscribe abstractsynchronouscompletableoperator java at io servicetalk concurrent api completable delegatesubscribe completable java at io servicetalk concurrent api resumecompletable handlesubscribe resumecompletable java at io servicetalk concurrent api completable delegatesubscribe completable java at io servicetalk concurrent api abstractsynchronouscompletableoperator handlesubscribe abstractsynchronouscompletableoperator java at io servicetalk concurrent api completablesubscribesharecontext handlesubscribe completablesubscribesharecontext java at io servicetalk concurrent api completable lambda subscribewithcontext completable java at io servicetalk concurrent api contextpreservingconsumer accept contextpreservingconsumer java at io servicetalk concurrent 
api noopoffloader offloadsubscribe noopoffloader java at io servicetalk concurrent api completable subscribewithcontext completable java at io servicetalk concurrent api completable subscribeinternal completable java at io servicetalk concurrent api completabledefer handlesubscribe completabledefer java at io servicetalk concurrent api completable handlesubscribe completable java at io servicetalk concurrent api completable lambda subscribewithcontext completable java at io servicetalk concurrent api contextpreservingconsumer accept contextpreservingconsumer java at io servicetalk concurrent api noopoffloader offloadsubscribe noopoffloader java at io servicetalk concurrent api completable subscribewithcontext completable java at io servicetalk concurrent api completable subscribeinternal completable java at io servicetalk concurrent api completabledefer subscribe completabledefer java at io servicetalk transport netty internal defaultnettypipelinedconnection writequeue execute defaultnettypipelinedconnection java at io servicetalk transport netty internal defaultnettypipelinedconnection writequeue execute defaultnettypipelinedconnection java at io servicetalk transport netty internal sequentialtaskqueue executenexttask sequentialtaskqueue java at io servicetalk transport netty internal sequentialtaskqueue posttasktermination sequentialtaskqueue java at io servicetalk transport netty internal defaultnettypipelinedconnection writesourcesubscriber safeposttasktermination defaultnettypipelinedconnection java at io servicetalk transport netty internal defaultnettypipelinedconnection writesourcesubscriber oncomplete defaultnettypipelinedconnection java at io servicetalk concurrent api contextpreservingcancellablecompletablesubscriber oncomplete contextpreservingcancellablecompletablesubscriber java at io servicetalk concurrent api contextpreservingcompletablesubscriber oncomplete contextpreservingcompletablesubscriber java at io servicetalk concurrent api 
contextpreservingcancellablecompletablesubscriber oncomplete contextpreservingcancellablecompletablesubscriber java at io servicetalk concurrent api afterfinallycompletable afterfinallycompletablesubscriber oncomplete afterfinallycompletable java at io servicetalk concurrent api resumecompletable resumesubscriber oncomplete resumecompletable java at io servicetalk concurrent api beforefinallycompletable beforefinallycompletablesubscriber oncomplete beforefinallycompletable java at io servicetalk concurrent api contextpreservingcompletablesubscriber oncomplete contextpreservingcompletablesubscriber java at io servicetalk transport netty internal writestreamsubscriber allwritespromise terminatesubscriber writestreamsubscriber java at io servicetalk transport netty internal writestreamsubscriber allwritespromise sourceterminated writestreamsubscriber java at io servicetalk transport netty internal writestreamsubscriber oncomplete writestreamsubscriber java at io servicetalk concurrent api contextpreservingsubscriptionsubscriber oncomplete contextpreservingsubscriptionsubscriber java at io servicetalk transport netty internal flush flushsubscriber oncomplete flush java at io servicetalk concurrent api singleflatmappublisher subscriberimpl oncomplete singleflatmappublisher java at io servicetalk concurrent api contextpreservingsubscriber oncomplete contextpreservingsubscriber java at io servicetalk concurrent api contextpreservingsubscriptionsubscriber oncomplete contextpreservingsubscriptionsubscriber java at io servicetalk concurrent api contextpreservingsubscriber oncomplete contextpreservingsubscriber java at io servicetalk concurrent api fromarraypublisher fromarraysubscription sendcomplete fromarraypublisher java at io servicetalk concurrent api fromarraypublisher fromarraysubscription request fromarraypublisher java at io servicetalk concurrent api contextpreservingsubscription request contextpreservingsubscription java at io servicetalk concurrent api 
cancellablethensubscription request cancellablethensubscription java at io servicetalk transport netty internal flush flushsubscriber request flush java at io servicetalk concurrent api contextpreservingsubscription request contextpreservingsubscription java at io servicetalk concurrent internal concurrentsubscription request concurrentsubscription java at io servicetalk transport netty internal writestreamsubscriber requestmoreifrequired writestreamsubscriber java at io servicetalk transport netty internal writestreamsubscriber lambda onsubscribe writestreamsubscriber java at io netty util concurrent abstracteventexecutor safeexecute abstracteventexecutor java at io netty util concurrent singlethreadeventexecutor runalltasks singlethreadeventexecutor java at io netty channel epoll epolleventloop run epolleventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at io netty util internal threadexecutormap run threadexecutormap java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java at java lang thread run thread java caused by io netty channel unix errors nativeioexception syscall writev failed broken pipe at io netty channel unix filedescriptor writeaddresses unknown source servicetalk global io executor channeloutboundbuffer failed to mark a promise as failure because it has failed already writestreamsubscriber allwritespromise failure io netty channel socket channeloutputshutdownexception channel output shutdown unnotified cause io netty channel socket channeloutputshutdownexception channel output shutdown at io netty channel abstractchannel abstractunsafe shutdownoutput abstractchannel java at io netty channel abstractchannel abstractunsafe abstractchannel java at io netty channel epoll abstractepollchannel abstractepollunsafe abstractepollchannel java at io netty channel abstractchannel abstractunsafe flush abstractchannel java at io netty channel defaultchannelpipeline headcontext 
contextpreservingsubscriptionsubscriber java at io servicetalk transport netty internal flush flushsubscriber oncomplete flush java at io servicetalk concurrent api singleflatmappublisher subscriberimpl oncomplete singleflatmappublisher java at io servicetalk concurrent api contextpreservingsubscriber oncomplete contextpreservingsubscriber java at io servicetalk concurrent api contextpreservingsubscriptionsubscriber oncomplete contextpreservingsubscriptionsubscriber java at io servicetalk concurrent api contextpreservingsubscriber oncomplete contextpreservingsubscriber java at io servicetalk concurrent api fromarraypublisher fromarraysubscription sendcomplete fromarraypublisher java at io servicetalk concurrent api fromarraypublisher fromarraysubscription request fromarraypublisher java at io servicetalk concurrent api contextpreservingsubscription request contextpreservingsubscription java at io servicetalk concurrent api cancellablethensubscription request cancellablethensubscription java at io servicetalk transport netty internal flush flushsubscriber request flush java at io servicetalk concurrent api contextpreservingsubscription request contextpreservingsubscription java at io servicetalk concurrent internal concurrentsubscription request concurrentsubscription java at io servicetalk transport netty internal writestreamsubscriber requestmoreifrequired writestreamsubscriber java at io servicetalk transport netty internal writestreamsubscriber lambda onsubscribe writestreamsubscriber java at io netty util concurrent abstracteventexecutor safeexecute abstracteventexecutor java at io netty util concurrent singlethreadeventexecutor runalltasks singlethreadeventexecutor java at io netty channel epoll epolleventloop run epolleventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at io netty util internal threadexecutormap run threadexecutormap java at io netty util concurrent fastthreadlocalrunnable run 
fastthreadlocalrunnable java at java lang thread run thread java caused by io netty channel unix errors nativeioexception syscall writev failed broken pipe at io netty channel unix filedescriptor writeaddresses unknown source time limited test clientclosureracetest completed requests time limited test clientclosureracetest completed requests
| 1
|
75,037
| 7,458,796,623
|
IssuesEvent
|
2018-03-30 12:22:54
|
ballerina-lang/testerina
|
https://api.github.com/repos/ballerina-lang/testerina
|
closed
|
[Blocker] Cannot access the values in a catch block of a try-catch
|
component/testerina
|
1. I have a function like below which I test using testarina
```
public function scheduledTaskAppointment (string cron) (string msg) {
string appTid;
function () returns (error) onTriggerFunction;
function (error) onErrorFunction;
onTriggerFunction = cleanupOTP;
onErrorFunction = cleanupError;
try{
appTid, _ = task:scheduleAppointment(onTriggerFunction, onErrorFunction, cron);
msg = "Success";
}catch(error e){
log:printErrorCause("Scheduled Task Appointment failure", e);
msg = "Fail";
println(e.msg);
}
return;
}
```
2. My test function is as follows. This tests the error message of an invalid cron expression
```
function testscheduledTaskAppointment_invalidCron () {
string returnValue = utils:scheduledTaskAppointment("");
test:assertStringEquals(returnValue,"CronExpression '' is invalid.", "Assert Failure");
println("testscheduledTaskAppointment_invalidCron completed");
}
```
**Issue**
When the scheduledTaskAppointment("") is executed (ballerina run), it gives the `CronExpression '' is invalid.` error message.
But, when the tests are run (ballerina test), this gives an error and the test fails.
|
1.0
|
[Blocker] Cannot access the values in a catch block of a try-catch - 1. I have a function like below which I test using testarina
```
public function scheduledTaskAppointment (string cron) (string msg) {
string appTid;
function () returns (error) onTriggerFunction;
function (error) onErrorFunction;
onTriggerFunction = cleanupOTP;
onErrorFunction = cleanupError;
try{
appTid, _ = task:scheduleAppointment(onTriggerFunction, onErrorFunction, cron);
msg = "Success";
}catch(error e){
log:printErrorCause("Scheduled Task Appointment failure", e);
msg = "Fail";
println(e.msg);
}
return;
}
```
2. My test function is as follows. This tests the error message of an invalid cron expression
```
function testscheduledTaskAppointment_invalidCron () {
string returnValue = utils:scheduledTaskAppointment("");
test:assertStringEquals(returnValue,"CronExpression '' is invalid.", "Assert Failure");
println("testscheduledTaskAppointment_invalidCron completed");
}
```
**Issue**
When the scheduledTaskAppointment("") is executed (ballerina run), it gives the `CronExpression '' is invalid.` error message.
But, when the tests are run (ballerina test), this gives an error and the test fails.
|
test
|
cannot access the values in a catch block of a try catch i have a function like below which i test using testarina public function scheduledtaskappointment string cron string msg string apptid function returns error ontriggerfunction function error onerrorfunction ontriggerfunction cleanupotp onerrorfunction cleanuperror try apptid task scheduleappointment ontriggerfunction onerrorfunction cron msg success catch error e log printerrorcause scheduled task appointment failure e msg fail println e msg return my test function is as follows this tests the error message of an invalid cron expression function testscheduledtaskappointment invalidcron string returnvalue utils scheduledtaskappointment test assertstringequals returnvalue cronexpression is invalid assert failure println testscheduledtaskappointment invalidcron completed issue when the scheduledtaskappointment is executed ballerina run it gives the cronexpression is invalid error message but when the tests are run ballerina test this gives an error and the test fails
| 1
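The Ballerina row above describes catching an error inside a function and returning a message built from the caught error's fields. As a rough Python analogue (all names invented for illustration — this is not the Testerina code path itself), the pattern the issue exercises looks like:

```python
# Hypothetical analogue of the Ballerina try/catch pattern above:
# attempt an operation and, on failure, read the message off the
# caught exception object — the access the issue says fails when
# the function runs under `ballerina test`.

def schedule_appointment(cron: str) -> str:
    """Return "Success" for a valid cron expression, "Fail" otherwise."""
    try:
        if not cron.strip():
            # stand-in for task:scheduleAppointment rejecting the expression
            raise ValueError(f"CronExpression '{cron}' is invalid.")
        return "Success"
    except ValueError as e:
        # the caught error's values must be accessible here
        print(e)
        return "Fail"

assert schedule_appointment("") == "Fail"
assert schedule_appointment("0 * * * *") == "Success"
```

The assertion mirrors the reporter's test: the same call that prints `CronExpression '' is invalid.` under `run` should also reach the catch block (and return "Fail") under `test`.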
|
193,334
| 6,884,071,103
|
IssuesEvent
|
2017-11-21 11:38:53
|
Sharavanth/ho-app
|
https://api.github.com/repos/Sharavanth/ho-app
|
opened
|
Format common colors and common variables based on individual components in Core-lib.
|
Low Priority React UI: core-lib
|
Formats colors.js and variables.js files based on the individual components in core-lib.
|
1.0
|
Format common colors and common variables based on individual components in Core-lib. - Formats colors.js and variables.js files based on the individual components in core-lib.
|
non_test
|
format common colors and common variables based on individual components in core lib formats colors js and variables js files based on the individual components in core lib
| 0
|
406,379
| 27,561,559,852
|
IssuesEvent
|
2023-03-07 22:32:39
|
llvm/llvm-project
|
https://api.github.com/repos/llvm/llvm-project
|
closed
|
libc apparently superfluous option for SCUDO
|
documentation libc
|
In the website [documentation](https://libc.llvm.org/full_host_build.html) for building libc, this command is given to configure it properly.
```shell
cmake ../llvm \
-G Ninja \ # Generator
-DLLVM_ENABLE_PROJECTS="clang;libc;lld;compiler-rt" \ # Enabled projects
-DCMAKE_BUILD_TYPE=<Debug|Release> \ # Select build type
-DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ \
-DLLVM_LIBC_FULL_BUILD=ON \ # We want the full libc
-DLLVM_LIBC_INCLUDE_SCUDO=ON \ # Include Scudo in the libc
-DCOMPILER_RT_BUILD_SCUDO_STANDALONE_WITH_LLVM_LIBC=ON \
-DCOMPILER_RT_BUILD_GWP_ASAN=OFF \
-DCOMPILER_RT_SCUDO_STANDALONE_BUILD_SHARED=OFF \
-DCMAKE_INSTALL_PREFIX=<SYSROOT> # Specify a sysroot directory
```
However, upon doing this, apparently one of the options is not used, is this deprecated or new and not fleshed out? Or just not supposed to be there?
```console
CMake Warning:
Manually-specified variables were not used by the project:
LLVM_LIBC_INCLUDE_SCUDO
```
|
1.0
|
libc apparently superfluous option for SCUDO - In the website [documentation](https://libc.llvm.org/full_host_build.html) for building libc, this command is given to configure it properly.
```shell
cmake ../llvm \
-G Ninja \ # Generator
-DLLVM_ENABLE_PROJECTS="clang;libc;lld;compiler-rt" \ # Enabled projects
-DCMAKE_BUILD_TYPE=<Debug|Release> \ # Select build type
-DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ \
-DLLVM_LIBC_FULL_BUILD=ON \ # We want the full libc
-DLLVM_LIBC_INCLUDE_SCUDO=ON \ # Include Scudo in the libc
-DCOMPILER_RT_BUILD_SCUDO_STANDALONE_WITH_LLVM_LIBC=ON \
-DCOMPILER_RT_BUILD_GWP_ASAN=OFF \
-DCOMPILER_RT_SCUDO_STANDALONE_BUILD_SHARED=OFF \
-DCMAKE_INSTALL_PREFIX=<SYSROOT> # Specify a sysroot directory
```
However, upon doing this, apparently one of the options is not used, is this deprecated or new and not fleshed out? Or just not supposed to be there?
```console
CMake Warning:
Manually-specified variables were not used by the project:
LLVM_LIBC_INCLUDE_SCUDO
```
|
non_test
|
libc apparently superfluous option for scudo in the website for building libc this command is given to configure it properly shell cmake llvm g ninja generator dllvm enable projects clang libc lld compiler rt enabled projects dcmake build type select build type dcmake c compiler clang dcmake cxx compiler clang dllvm libc full build on we want the full libc dllvm libc include scudo on include scudo in the libc dcompiler rt build scudo standalone with llvm libc on dcompiler rt build gwp asan off dcompiler rt scudo standalone build shared off dcmake install prefix specify a sysroot directory however upon doing this apparently one of the options is not used is this deprecated or new and not fleshed out or just not supposed to be there console cmake warning manually specified variables were not used by the project llvm libc include scudo
| 0
|
46,651
| 13,055,955,125
|
IssuesEvent
|
2020-07-30 03:13:35
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
opened
|
[lilliput] destructor must be noexecpt in c++11 (Trac #1666)
|
Incomplete Migration Migrated from Trac combo reconstruction defect
|
Migrated from https://code.icecube.wisc.edu/ticket/1666
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:10",
"description": "Don't call log_fatal in a destructor.\n\nWarning:\n{{{\nfrom\n /home/dschultz/Documents/combo/trunk/src/lilliput/private/test/PSSTestModul\n e.h:4,\nfrom /home/dschultz/Documents/combo/trunk/src/lilliput/private/test/PSSTestModule.cxx:1:\n /home/dschultz/Documents/combo/trunk/src/lilliput/private/test/PSSTestModule.cx\n x: In destructor \u2018virtual PSSTestModule::~PSSTestModule()\u2019:\n/home/dschultz/Documents/combo/trunk/src/icetray/public/icetray/I3Logging.h:181:57:\n warning: throw will always call terminate() [-Wterminate]\n##__VA_ARGS__) + \" (in \" + __PRETTY_FUNCTION__ + \")\")\n^\n/home/dschultz/Documents/combo/trunk/src/lilliput/private/test/PSSTestModule.cxx:17:9:\n note: in expansion of macro \u2018log_fatal\u2019\nlog_fatal(\"saw ZERO NADA physics frames\");\n^\n/home/dschultz/Documents/combo/trunk/src/icetray/public/icetray/I3Logging.h:181:57:\n note: in C++11 destructors default to noexcept\n##__VA_ARGS__) + \" (in \" + __PRETTY_FUNCTION__ + \")\")\n^\n/home/dschultz/Documents/combo/trunk/src/lilliput/private/test/PSSTestModule.cxx:17:9:\n note: in expansion of macro \u2018log_fatal\u2019\nlog_fatal(\"saw ZERO NADA physics frames\");\n^\n}}}",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"_ts": "1550067190995086",
"component": "combo reconstruction",
"summary": "[lilliput] destructor must be noexecpt in c++11",
"priority": "normal",
"keywords": "",
"time": "2016-04-26T21:12:35",
"milestone": "",
"owner": "kkrings",
"type": "defect"
}
```
|
1.0
|
[lilliput] destructor must be noexecpt in c++11 (Trac #1666) - Migrated from https://code.icecube.wisc.edu/ticket/1666
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:10",
"description": "Don't call log_fatal in a destructor.\n\nWarning:\n{{{\nfrom\n /home/dschultz/Documents/combo/trunk/src/lilliput/private/test/PSSTestModul\n e.h:4,\nfrom /home/dschultz/Documents/combo/trunk/src/lilliput/private/test/PSSTestModule.cxx:1:\n /home/dschultz/Documents/combo/trunk/src/lilliput/private/test/PSSTestModule.cx\n x: In destructor \u2018virtual PSSTestModule::~PSSTestModule()\u2019:\n/home/dschultz/Documents/combo/trunk/src/icetray/public/icetray/I3Logging.h:181:57:\n warning: throw will always call terminate() [-Wterminate]\n##__VA_ARGS__) + \" (in \" + __PRETTY_FUNCTION__ + \")\")\n^\n/home/dschultz/Documents/combo/trunk/src/lilliput/private/test/PSSTestModule.cxx:17:9:\n note: in expansion of macro \u2018log_fatal\u2019\nlog_fatal(\"saw ZERO NADA physics frames\");\n^\n/home/dschultz/Documents/combo/trunk/src/icetray/public/icetray/I3Logging.h:181:57:\n note: in C++11 destructors default to noexcept\n##__VA_ARGS__) + \" (in \" + __PRETTY_FUNCTION__ + \")\")\n^\n/home/dschultz/Documents/combo/trunk/src/lilliput/private/test/PSSTestModule.cxx:17:9:\n note: in expansion of macro \u2018log_fatal\u2019\nlog_fatal(\"saw ZERO NADA physics frames\");\n^\n}}}",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"_ts": "1550067190995086",
"component": "combo reconstruction",
"summary": "[lilliput] destructor must be noexecpt in c++11",
"priority": "normal",
"keywords": "",
"time": "2016-04-26T21:12:35",
"milestone": "",
"owner": "kkrings",
"type": "defect"
}
```
|
non_test
|
destructor must be noexecpt in c trac migrated from json status closed changetime description don t call log fatal in a destructor n nwarning n nfrom n home dschultz documents combo trunk src lilliput private test psstestmodul n e h nfrom home dschultz documents combo trunk src lilliput private test psstestmodule cxx n home dschultz documents combo trunk src lilliput private test psstestmodule cx n x in destructor psstestmodule psstestmodule n home dschultz documents combo trunk src icetray public icetray h n warning throw will always call terminate n va args in pretty function n n home dschultz documents combo trunk src lilliput private test psstestmodule cxx n note in expansion of macro fatal nlog fatal saw zero nada physics frames n n home dschultz documents combo trunk src icetray public icetray h n note in c destructors default to noexcept n va args in pretty function n n home dschultz documents combo trunk src lilliput private test psstestmodule cxx n note in expansion of macro fatal nlog fatal saw zero nada physics frames n n reporter david schultz cc resolution fixed ts component combo reconstruction summary destructor must be noexecpt in c priority normal keywords time milestone owner kkrings type defect
| 0
|
807
| 3,285,422,387
|
IssuesEvent
|
2015-10-28 20:32:42
|
GsDevKit/zinc
|
https://api.github.com/repos/GsDevKit/zinc
|
reopened
|
Timeout passed to a ZnClient is ignored while making the connection via the underlying socket
|
inprocess
|
### Suppose the following situation:
- Create a ZnClient
- Pass in an explicit timeout (e.g., 3 seconds)
- Pass in a host/port combination that does not exist
### Expected behavior: an error should be thrown after 3 seconds.
What happens: the user has to wait for the (default) timeout of GsSocket; the timeout passed in to the client is ignored.
### Reason: the SocketStream does not use the timeout while connecting to the socket created via SocketStreamSocket.
In the code this is reflected in the following extension method
SocketStream>>openConnectionToHost: host port: portNumber timeout: timeout
| socket |
socket :=SocketStreamSocket newTCPSocket.
socket connectTo: host port: portNumber.
^(self on: socket)
timeout: timeout;
yourself
I compared this with the Pharo implementation, and here the timeout is passed.
|
1.0
|
Timeout passed to a ZnClient is ignored while making the connection via the underlying socket - ### Suppose the following situation:
- Create a ZnClient
- Pass in an explicit timeout (e.g., 3 seconds)
- Pass in a host/port combination that does not exist
### Expected behavior: an error should be thrown after 3 seconds.
What happens: the user has to wait for the (default) timeout of GsSocket; the timeout passed in to the client is ignored.
### Reason: the SocketStream does not use the timeout while connecting to the socket created via SocketStreamSocket.
In the code this is reflected in the following extension method
SocketStream>>openConnectionToHost: host port: portNumber timeout: timeout
| socket |
socket :=SocketStreamSocket newTCPSocket.
socket connectTo: host port: portNumber.
^(self on: socket)
timeout: timeout;
yourself
I compared this with the Pharo implementation, and here the timeout is passed.
|
non_test
|
timeout passed to a znclient is ignored while making the connection via the underlying socket suppose the following situation create a znclient pass in an explicit timeout e g seconds pass in a host port combination that does not exist expected behavior an error should be thrown after seconds what happens the user has to wait for the default timeout of gssocket the timeout passed in to the client is ignored reason the socketstream does not use the timeout while connecting to the socket created via socketstreamsocket in the code this is reflected in the following extension method socketstream openconnectiontohost host port portnumber timeout timeout socket socket socketstreamsocket newtcpsocket socket connectto host port portnumber self on socket timeout timeout yourself i compared this with the pharo implementation and here the timeout is passed
| 0
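The Zinc row above pins the bug to the timeout being applied only to the stream wrapper, after the socket has already connected. A minimal Python sketch (not Smalltalk, and not the GsDevKit code) of the corrected ordering — set the timeout on the socket *before* `connect()` so the connect attempt itself is bounded:

```python
import socket

# Sketch of the fix described above: apply the caller's timeout to
# the socket before connecting, instead of only on the stream that
# wraps an already-connected socket (where it cannot bound connect).

def open_connection(host: str, port: int, timeout: float) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)  # takes effect for connect() as well
    # sock.connect((host, port))  # would now fail after `timeout` seconds
    return sock

# 192.0.2.1 is a TEST-NET address used purely for illustration; the
# connect call is left commented out so the sketch runs offline.
s = open_connection("192.0.2.1", 80, 3.0)
assert s.gettimeout() == 3.0
s.close()
```

With the original ordering, an unreachable host/port pair blocks for the platform default instead of the 3 seconds the caller asked for — exactly the symptom in the report.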
|
452,043
| 32,048,890,520
|
IssuesEvent
|
2023-09-23 10:09:06
|
hedyorg/hedy
|
https://api.github.com/repos/hedyorg/hedy
|
closed
|
[DOCUMENTATION] Pygame markdown
|
documentation
|
**For team Pygame:**
1. Create `pygame.md` for documentation.
This should make it possible for other contributors to understand what we've implemented during our project.
2. It's important to note what we've **done** and what we've **NOT done** so far.
3. Mention the technical implementation.
|
1.0
|
[DOCUMENTATION] Pygame markdown - **For team Pygame:**
1. Create `pygame.md` for documentation.
This should make it possible for other contributors to understand what we've implemented during our project.
2. It's important to note what we've **done** and what we've **NOT done** so far.
3. Mention the technical implementation.
|
non_test
|
pygame markdown for team pygame create pygame md for documentation this should make it possible for other contributors to understand what we ve implemented during our project it s important to note what we ve done and what we ve not done so far mention the technical implementation
| 0
|
17,008
| 9,962,799,374
|
IssuesEvent
|
2019-07-07 17:36:27
|
brewpi-remix/brewpi-script-rmx
|
https://api.github.com/repos/brewpi-remix/brewpi-script-rmx
|
closed
|
BEERSOCKET is 777
|
security
|
BEERSOCKET is 777, as we expand capabilities, this opens a pretty big security issue.
|
True
|
BEERSOCKET is 777 - BEERSOCKET is 777, as we expand capabilities, this opens a pretty big security issue.
|
non_test
|
beersocket is beersocket is as we expand capabilities this opens a pretty big security issue
| 0
|
342,441
| 10,317,112,006
|
IssuesEvent
|
2019-08-30 11:51:25
|
garden-io/garden
|
https://api.github.com/repos/garden-io/garden
|
closed
|
Command names not displaying in garden --help in PowerShell
|
bug good first issue priority:low
|
No idea why, but it seems the color we use for the command names in our help message don't display in PowerShell:
https://www.dropbox.com/s/nxcr92o7zfnwcrf/Screenshot%202018-09-16%2016.45.56.png?dl=0
Side-note: I'm not sure the single letter aliases make much sense for top-level commands, and some of them are clearly wrong, `r` is an alias in more than one place. We should check and fix those.
|
1.0
|
Command names not displaying in garden --help in PowerShell - No idea why, but it seems the color we use for the command names in our help message don't display in PowerShell:
https://www.dropbox.com/s/nxcr92o7zfnwcrf/Screenshot%202018-09-16%2016.45.56.png?dl=0
Side-note: I'm not sure the single letter aliases make much sense for top-level commands, and some of them are clearly wrong, `r` is an alias in more than one place. We should check and fix those.
|
non_test
|
command names not displaying in garden help in powershell no idea why but it seems the color we use for the command names in our help message don t display in powershell side note i m not sure the single letter aliases make much sense for top level commands and some of them are clearly wrong r is an alias in more than one place we should check and fix those
| 0
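The Garden row above reports command names vanishing in PowerShell because of the color codes used in the help text. An illustrative sketch (assumed approach, not Garden's actual TypeScript implementation) of the usual fix — emit ANSI escape sequences only when the output stream is a terminal that can render them:

```python
import io
import os
import sys

# Gate ANSI color output on the stream being a TTY (and the terminal
# not being declared "dumb"), so piped output and unsupporting
# terminals receive plain, readable text instead of escape garbage.

ANSI_CYAN = "\x1b[36m"
ANSI_RESET = "\x1b[0m"

def colorize(text: str, stream=sys.stdout) -> str:
    supports_color = (
        hasattr(stream, "isatty")
        and stream.isatty()
        and os.environ.get("TERM") != "dumb"
    )
    if supports_color:
        return f"{ANSI_CYAN}{text}{ANSI_RESET}"
    return text

# Captured output (not a TTY) gets the plain command name:
assert colorize("deploy", stream=io.StringIO()) == "deploy"
```

A per-terminal capability check like this is coarser than what real CLI color libraries do, but it captures the shape of the fix for "help text unreadable in terminal X" bugs.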
|
307,921
| 26,570,274,479
|
IssuesEvent
|
2023-01-21 03:31:07
|
QubesOS/updates-status
|
https://api.github.com/repos/QubesOS/updates-status
|
closed
|
manager v4.1.28-1 (r4.2)
|
r4.2-host-cur-test r4.2-vm-bullseye-cur-test r4.2-vm-bookworm-cur-test r4.2-vm-fc37-cur-test r4.2-vm-fc36-cur-test r4.2-vm-centos-stream8-cur-test
|
Update of manager to v4.1.28-1 for Qubes r4.2, see comments below for details and build status.
From commit: https://github.com/QubesOS/qubes-manager/commit/c63e3257997fdde9e8192cddf4d4d588b8fa6ad9
[Changes since previous version](https://github.com/QubesOS/qubes-manager/compare/v4.1.27-1...v4.1.28-1):
QubesOS/qubes-manager@c63e325 version 4.1.28-1
QubesOS/qubes-manager@3d2e662 Merge remote-tracking branch 'origin/pr/333'
QubesOS/qubes-manager@c652a72 Fix tests for kernel settings
Referenced issues:
If you're release manager, you can issue GPG-inline signed command:
* `Upload-component r4.2 manager c63e3257997fdde9e8192cddf4d4d588b8fa6ad9 current all` (available 5 days from now)
* `Upload-component r4.2 manager c63e3257997fdde9e8192cddf4d4d588b8fa6ad9 security-testing`
You can choose subset of distributions like:
* `Upload-component r4.2 manager c63e3257997fdde9e8192cddf4d4d588b8fa6ad9 current vm-bookworm,vm-fc37` (available 5 days from now)
Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
For more information on how to test this update, please take a look at https://www.qubes-os.org/doc/testing/#updates.
|
6.0
|
manager v4.1.28-1 (r4.2) - Update of manager to v4.1.28-1 for Qubes r4.2, see comments below for details and build status.
From commit: https://github.com/QubesOS/qubes-manager/commit/c63e3257997fdde9e8192cddf4d4d588b8fa6ad9
[Changes since previous version](https://github.com/QubesOS/qubes-manager/compare/v4.1.27-1...v4.1.28-1):
QubesOS/qubes-manager@c63e325 version 4.1.28-1
QubesOS/qubes-manager@3d2e662 Merge remote-tracking branch 'origin/pr/333'
QubesOS/qubes-manager@c652a72 Fix tests for kernel settings
Referenced issues:
If you're release manager, you can issue GPG-inline signed command:
* `Upload-component r4.2 manager c63e3257997fdde9e8192cddf4d4d588b8fa6ad9 current all` (available 5 days from now)
* `Upload-component r4.2 manager c63e3257997fdde9e8192cddf4d4d588b8fa6ad9 security-testing`
You can choose subset of distributions like:
* `Upload-component r4.2 manager c63e3257997fdde9e8192cddf4d4d588b8fa6ad9 current vm-bookworm,vm-fc37` (available 5 days from now)
Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
For more information on how to test this update, please take a look at https://www.qubes-os.org/doc/testing/#updates.
|
test
|
manager update of manager to for qubes see comments below for details and build status from commit qubesos qubes manager version qubesos qubes manager merge remote tracking branch origin pr qubesos qubes manager fix tests for kernel settings referenced issues if you re release manager you can issue gpg inline signed command upload component manager current all available days from now upload component manager security testing you can choose subset of distributions like upload component manager current vm bookworm vm available days from now above commands will work only if packages in current testing repository were built from given commit i e no new version superseded it for more information on how to test this update please take a look at
| 1
|
169,934
| 13,166,760,914
|
IssuesEvent
|
2020-08-11 09:06:12
|
WoWManiaUK/Blackwing-Lair
|
https://api.github.com/repos/WoWManiaUK/Blackwing-Lair
|
closed
|
[Quest] The Dark Tower - Redridge Mountains
|
Confirmed By Tester Fixed Confirmed Fixed in Dev
|
**Links:**
quest https://www.wowhead.com/quest=26693/the-dark-tower
**What is happening:**
- No key dropping
player killed 4 times and no key
**What should happen:**
_Recover the Key of Ilgalar._
**Other Information:**
Reported by GM from ticket
|
1.0
|
[Quest] The Dark Tower - Redridge Mountains - **Links:**
quest https://www.wowhead.com/quest=26693/the-dark-tower
**What is happening:**
- No key dropping
player killed 4 times and no key
**What should happen:**
_Recover the Key of Ilgalar._
**Other Information:**
Reported by GM from ticket
|
test
|
the dark tower redridge mountains links quest what is happening no key dropping player killed times and no key what should happen recover the key of ilgalar other information reported by gm from ticket
| 1
|
88,576
| 17,611,367,949
|
IssuesEvent
|
2021-08-18 02:04:41
|
PyTorchLightning/pytorch-lightning
|
https://api.github.com/repos/PyTorchLightning/pytorch-lightning
|
closed
|
[RFC] Ensure error handling is supported across all Trainer entry points
|
enhancement help wanted refactors / code health
|
## 🚀 Feature
### Motivation
We are auditing the Lightning components and APIs to assess opportunities for improvements:
- https://github.com/PyTorchLightning/pytorch-lightning/issues/7740
- https://docs.google.com/document/d/1xHU7-iQSpp9KJTjI3As2EM0mfNHHr37WZYpDpwLkivA/edit#
One item that came up was error handling in the Trainer.
Currently, Lightning has error handling for when `trainer.fit()` is called. This allows for component cleanup before re-raising the exception to the parent program. https://github.com/PyTorchLightning/pytorch-lightning/blob/49d03f87fed0458cbb146d38243be56be4cb9689/pytorch_lightning/trainer/trainer.py#L1057-L1079
However, this error handling currently applies only during trainer.fit(). Instead, we should ensure this try/catch applies to all top-level trainer functions, such as trainer.validate(), trainer.test(), and trainer.predict(). This can be very useful to power features such as error collection datasets.
### Pitch
The `_run` function houses most of the execution logic: all the top-level trainer entry points are funneled through here for processing: https://github.com/PyTorchLightning/pytorch-lightning/blob/963c26764682fa4cf64c93c5a7572ae0040e9c32/pytorch_lightning/trainer/trainer.py#L854
We could wrap `_run` with the try/catch and rename the current `_run` to `_run_impl`
```
def _run(self, model: "pl.LightningModule") -> ...:
try:
return self._run_impl(model)
except KeyboardInterrupt:
...
except BaseException:
....
raise
```
lifting the logic from _run_train here for the shutdown: https://github.com/PyTorchLightning/pytorch-lightning/blob/49d03f87fed0458cbb146d38243be56be4cb9689/pytorch_lightning/trainer/trainer.py#L1061-L1079
### Alternatives
The proposal above misses some of the misconfiguration errors which take place inside of `fit`/`validate`/`test`/`predict` before `_run` is called. To ensure no gaps, we could have corresponding `_fit_impl`, `_validate_impl`, `_test_impl`, and `_predict_impl` functions in the trainer , such that fit becomes:
```
def fit():
try:
return _fit_impl()
# exception handling
````
Both of these proposals would remove the need for error handling inside of `_run_train` specifically
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
______________________________________________________________________
#### If you enjoy Lightning, check out our other projects! ⚡
<sub>
- [**Metrics**](https://github.com/PyTorchLightning/metrics): Machine learning metrics for distributed, scalable PyTorch applications.
- [**Flash**](https://github.com/PyTorchLightning/lightning-flash): The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, finetuning and solving problems with deep learning
- [**Bolts**](https://github.com/PyTorchLightning/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks and more for research and production with PyTorch Lightning and PyTorch
- [**Lightning Transformers**](https://github.com/PyTorchLightning/lightning-transformers): Flexible interface for high performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.
</sub>
|
1.0
|
[RFC] Ensure error handling is supported across all Trainer entry points - ## 🚀 Feature
### Motivation
We are auditing the Lightning components and APIs to assess opportunities for improvements:
- https://github.com/PyTorchLightning/pytorch-lightning/issues/7740
- https://docs.google.com/document/d/1xHU7-iQSpp9KJTjI3As2EM0mfNHHr37WZYpDpwLkivA/edit#
One item that came up was error handling in the Trainer.
Currently, Lightning has error handling for when `trainer.fit()` is called. This allows for component cleanup before re-raising the exception to the parent program. https://github.com/PyTorchLightning/pytorch-lightning/blob/49d03f87fed0458cbb146d38243be56be4cb9689/pytorch_lightning/trainer/trainer.py#L1057-L1079
However, this error handling currently applies only during trainer.fit(). Instead, we should ensure this try/catch applies to all top-level trainer functions, such as trainer.validate(), trainer.test(), and trainer.predict(). This can be very useful to power features such as error collection datasets.
### Pitch
The `_run` function houses most of the execution logic: all the top-level trainer entry points are funneled through here for processing: https://github.com/PyTorchLightning/pytorch-lightning/blob/963c26764682fa4cf64c93c5a7572ae0040e9c32/pytorch_lightning/trainer/trainer.py#L854
We could wrap `_run` with the try/catch and rename the current `_run` to `_run_impl`
```
def _run(self, model: "pl.LightningModule") -> ...:
    try:
        return self._run_impl(model)
    except KeyboardInterrupt:
        ...
    except BaseException:
        ...
        raise
```
lifting the logic from _run_train here for the shutdown: https://github.com/PyTorchLightning/pytorch-lightning/blob/49d03f87fed0458cbb146d38243be56be4cb9689/pytorch_lightning/trainer/trainer.py#L1061-L1079
### Alternatives
The proposal above misses some of the misconfiguration errors which take place inside of `fit`/`validate`/`test`/`predict` before `_run` is called. To ensure no gaps, we could have corresponding `_fit_impl`, `_validate_impl`, `_test_impl`, and `_predict_impl` functions in the trainer, such that fit becomes:
```
def fit():
    try:
        return _fit_impl()
    # exception handling
```
Both of these proposals would remove the need for error handling inside of `_run_train` specifically
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
______________________________________________________________________
#### If you enjoy Lightning, check out our other projects! ⚡
<sub>
- [**Metrics**](https://github.com/PyTorchLightning/metrics): Machine learning metrics for distributed, scalable PyTorch applications.
- [**Flash**](https://github.com/PyTorchLightning/lightning-flash): The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, finetuning and solving problems with deep learning
- [**Bolts**](https://github.com/PyTorchLightning/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks and more for research and production with PyTorch Lightning and PyTorch
- [**Lightning Transformers**](https://github.com/PyTorchLightning/lightning-transformers): Flexible interface for high performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.
</sub>
|
non_test
|
ensure error handling is supported across all trainer entry points 🚀 feature motivation we are auditing the lightning components and apis to assess opportunities for improvements one item that came up was error handling in the trainer currently lightning has error handling for when trainer fit is called this allows for component cleanup before re raising the exception to the parent program however this error handling currently applies only during trainer fit instead we should ensure this try catch applies to all top level trainer functions such as trainer validate trainer test and trainer predict this can be very useful to power features such as error collection datasets pitch the run function houses most of the execution logic all the top level trainer entry points are funneled through here for processing we could wrap run with the try catch and rename the current run to run impl def run self model pl lightningmodule try return self run impl model except keyboardinterrupt except baseexception raise lifting the logic from run train here for the shutdown alternatives the proposal above misses some of the misconfiguration errors which take place inside of fit validate test predict before run is called to ensure no gaps we could have corresponding fit impl validate impl test impl and predict impl functions in the trainer such that fit becomes def fit try return fit impl exception handling both of these proposals would remove the need for error handling inside of run train specifically additional context if you enjoy lightning check out our other projects ⚡ machine learning metrics for distributed scalable pytorch applications the fastest way to get a lightning baseline a collection of tasks for fast prototyping baselining finetuning and solving problems with deep learning pretrained sota deep learning models callbacks and more for research and production with pytorch lightning and pytorch flexible interface for high performance research using sota transformers 
leveraging pytorch lightning transformers and hydra
| 0
|
49,588
| 6,033,742,736
|
IssuesEvent
|
2017-06-09 09:10:15
|
piwik/piwik
|
https://api.github.com/repos/piwik/piwik
|
closed
|
One test fails locally but seems to work on Travis CI
|
c: Tests & QA
|
The OneVisitorTwoVisitsTest fails on the 3.x-dev branch when I run `./console tests:run ./tests/PHPUnit/System/OneVisitorTwoVisitsTest.php` (`There were 5 failures:`)
Why does this test fail locally but work on Travis CI?
|
1.0
|
One test fails locally but seems to work on Travis CI - The OneVisitorTwoVisitsTest fails on the 3.x-dev branch when I run `./console tests:run ./tests/PHPUnit/System/OneVisitorTwoVisitsTest.php` (`There were 5 failures:`)
Why does this test fail locally but work on Travis CI?
|
test
|
one test fails locally but seems to work on travis ci the onevisitortwovisitstest fails on the x dev branch when i run console tests run tests phpunit system onevisitortwovisitstest php there were failures why does this test fails locally but works on travis ci
| 1
|
366,076
| 25,567,906,598
|
IssuesEvent
|
2022-11-30 15:32:15
|
chronic-care/mcc-project
|
https://api.github.com/repos/chronic-care/mcc-project
|
closed
|
Document design for common data services library
|
documentation duplicate
|
In word, what does the API look like? What are the inputs/outputs?
|
1.0
|
Document design for common data services library - In word, what does the API look like? What are the inputs/outputs?
|
non_test
|
document design for common data services library in word what does the api look like what are the inputs outputs
| 0
|
67,422
| 7,047,963,224
|
IssuesEvent
|
2018-01-02 15:46:58
|
9214/daruma
|
https://api.github.com/repos/9214/daruma
|
closed
|
Test with HTTP POST/GET to darkroom.bgemyth.net.
|
io red test
|
[Bgemyth](http://darkroom.bgemyth.net/) is a French fansite that hosts a decoder similar to the one we have here. If you could figure out how to use `write/info` with POST and GET methods and send/fetch Internet code / locker code, then we can plug it into our test suite just to be fancy!
|
1.0
|
Test with HTTP POST/GET to darkroom.bgemyth.net. - [Bgemyth](http://darkroom.bgemyth.net/) is a French fansite that hosts a decoder similar to the one we have here. If you could figure out how to use `write/info` with POST and GET methods and send/fetch Internet code / locker code, then we can plug it into our test suite just to be fancy!
|
test
|
test with http post get to darkroom bgemyth net is a french fansite that hosts decoder similar to the one we have here if you could figure out how to use write info with post and get methods and send fetch internet code locker code then we can plug it into our test suit just to be fancy
| 1
|
242,229
| 20,206,370,968
|
IssuesEvent
|
2022-02-11 20:54:55
|
eclipse-openj9/openj9
|
https://api.github.com/repos/eclipse-openj9/openj9
|
opened
|
OpenJDK incubator/vector/Short64VectorTests Unexpected exit from test [exit code: 137]
|
comp:jit test failure
|
Internal build /job/Test_openjdk18_j9_extended.openjdk_s390x_linux/5 - ub18s390xrt-1-9
jdk_vector_0 `-XX:+UseCompressedOops`
jdk/incubator/vector/Short64VectorTests.java
artifacts:
/ui/native/sys-rt-generic-local/hyc-runtimes-jenkins.swg-devops.com/Test_openjdk18_j9_extended.openjdk_s390x_linux/5/openjdk_test_output.tar.gz
```
04:19:16 rerun:
04:19:16 cd /home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/TKG/output_16445604291340/jdk_vector_0/work/scratch && \
04:19:16 DISPLAY=:0 \
04:19:16 HOME=/home/jenkins \
04:19:16 LANG=C \
04:19:16 PATH=/bin:/usr/bin:/usr/sbin \
04:19:16 CLASSPATH=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/TKG/output_16445604291340/jdk_vector_0/work/classes/jdk/incubator/vector/Short64VectorTests.d:/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/openjdk/openjdk-jdk/test/jdk/jdk/incubator/vector:/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/jvmtest/openjdk/jtreg/lib/testng.jar:/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/jvmtest/openjdk/jtreg/lib/jcommander.jar:/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/jvmtest/openjdk/jtreg/lib/guice.jar:/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/jvmtest/openjdk/jtreg/lib/javatest.jar:/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/jvmtest/openjdk/jtreg/lib/jtreg.jar \
04:19:16 /home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/openjdkbinary/j2sdk-image/bin/java \
04:19:16 -Dtest.vm.opts='-ea -esa -Xmx512m -XX:+UseCompressedOops' \
04:19:16 -Dtest.tool.vm.opts='-J-ea -J-esa -J-Xmx512m -J-XX:+UseCompressedOops' \
04:19:16 -Dtest.compiler.opts= \
04:19:16 -Dtest.java.opts= \
04:19:16 -Dtest.jdk=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/openjdkbinary/j2sdk-image \
04:19:16 -Dcompile.jdk=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/openjdkbinary/j2sdk-image \
04:19:16 -Dtest.timeout.factor=8.0 \
04:19:16 -Dtest.nativepath=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/openjdkbinary/openjdk-test-image/jdk/jtreg/native \
04:19:16 -Dtest.root=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/openjdk/openjdk-jdk/test/jdk \
04:19:16 -Dtest.name=jdk/incubator/vector/Short64VectorTests.java \
04:19:16 -Dtest.file=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/openjdk/openjdk-jdk/test/jdk/jdk/incubator/vector/Short64VectorTests.java \
04:19:16 -Dtest.src=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/openjdk/openjdk-jdk/test/jdk/jdk/incubator/vector \
04:19:16 -Dtest.src.path=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/openjdk/openjdk-jdk/test/jdk/jdk/incubator/vector \
04:19:16 -Dtest.classes=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/TKG/output_16445604291340/jdk_vector_0/work/classes/jdk/incubator/vector/Short64VectorTests.d \
04:19:16 -Dtest.class.path=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/TKG/output_16445604291340/jdk_vector_0/work/classes/jdk/incubator/vector/Short64VectorTests.d \
04:19:16 -Dtest.class.path.prefix=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/TKG/output_16445604291340/jdk_vector_0/work/classes/jdk/incubator/vector/Short64VectorTests.d:/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/openjdk/openjdk-jdk/test/jdk/jdk/incubator/vector \
04:19:16 -Dtest.modules=jdk.incubator.vector \
04:19:16 --add-modules jdk.incubator.vector \
04:19:16 -ea \
04:19:16 -esa \
04:19:16 -Xmx512m \
04:19:16 -XX:+UseCompressedOops \
04:19:16 -Djava.library.path=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/openjdkbinary/openjdk-test-image/jdk/jtreg/native \
04:19:16 -ea \
04:19:16 -esa \
04:19:16 -Xbatch \
04:19:16 -XX:-TieredCompilation \
04:19:16 com.sun.javatest.regtest.agent.MainWrapper /home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/TKG/output_16445604291340/jdk_vector_0/work/jdk/incubator/vector/Short64VectorTests.d/testng.0.jta jdk/incubator/vector/Short64VectorTests.java false Short64VectorTests
04:19:16
04:19:16 TEST RESULT: Failed. Unexpected exit from test [exit code: 137]
```
@gita-omr
|
1.0
|
OpenJDK incubator/vector/Short64VectorTests Unexpected exit from test [exit code: 137] - Internal build /job/Test_openjdk18_j9_extended.openjdk_s390x_linux/5 - ub18s390xrt-1-9
jdk_vector_0 `-XX:+UseCompressedOops`
jdk/incubator/vector/Short64VectorTests.java
artifacts:
/ui/native/sys-rt-generic-local/hyc-runtimes-jenkins.swg-devops.com/Test_openjdk18_j9_extended.openjdk_s390x_linux/5/openjdk_test_output.tar.gz
```
04:19:16 rerun:
04:19:16 cd /home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/TKG/output_16445604291340/jdk_vector_0/work/scratch && \
04:19:16 DISPLAY=:0 \
04:19:16 HOME=/home/jenkins \
04:19:16 LANG=C \
04:19:16 PATH=/bin:/usr/bin:/usr/sbin \
04:19:16 CLASSPATH=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/TKG/output_16445604291340/jdk_vector_0/work/classes/jdk/incubator/vector/Short64VectorTests.d:/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/openjdk/openjdk-jdk/test/jdk/jdk/incubator/vector:/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/jvmtest/openjdk/jtreg/lib/testng.jar:/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/jvmtest/openjdk/jtreg/lib/jcommander.jar:/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/jvmtest/openjdk/jtreg/lib/guice.jar:/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/jvmtest/openjdk/jtreg/lib/javatest.jar:/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/jvmtest/openjdk/jtreg/lib/jtreg.jar \
04:19:16 /home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/openjdkbinary/j2sdk-image/bin/java \
04:19:16 -Dtest.vm.opts='-ea -esa -Xmx512m -XX:+UseCompressedOops' \
04:19:16 -Dtest.tool.vm.opts='-J-ea -J-esa -J-Xmx512m -J-XX:+UseCompressedOops' \
04:19:16 -Dtest.compiler.opts= \
04:19:16 -Dtest.java.opts= \
04:19:16 -Dtest.jdk=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/openjdkbinary/j2sdk-image \
04:19:16 -Dcompile.jdk=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/openjdkbinary/j2sdk-image \
04:19:16 -Dtest.timeout.factor=8.0 \
04:19:16 -Dtest.nativepath=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/openjdkbinary/openjdk-test-image/jdk/jtreg/native \
04:19:16 -Dtest.root=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/openjdk/openjdk-jdk/test/jdk \
04:19:16 -Dtest.name=jdk/incubator/vector/Short64VectorTests.java \
04:19:16 -Dtest.file=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/openjdk/openjdk-jdk/test/jdk/jdk/incubator/vector/Short64VectorTests.java \
04:19:16 -Dtest.src=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/openjdk/openjdk-jdk/test/jdk/jdk/incubator/vector \
04:19:16 -Dtest.src.path=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/openjdk/openjdk-jdk/test/jdk/jdk/incubator/vector \
04:19:16 -Dtest.classes=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/TKG/output_16445604291340/jdk_vector_0/work/classes/jdk/incubator/vector/Short64VectorTests.d \
04:19:16 -Dtest.class.path=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/TKG/output_16445604291340/jdk_vector_0/work/classes/jdk/incubator/vector/Short64VectorTests.d \
04:19:16 -Dtest.class.path.prefix=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/TKG/output_16445604291340/jdk_vector_0/work/classes/jdk/incubator/vector/Short64VectorTests.d:/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/openjdk/openjdk-jdk/test/jdk/jdk/incubator/vector \
04:19:16 -Dtest.modules=jdk.incubator.vector \
04:19:16 --add-modules jdk.incubator.vector \
04:19:16 -ea \
04:19:16 -esa \
04:19:16 -Xmx512m \
04:19:16 -XX:+UseCompressedOops \
04:19:16 -Djava.library.path=/home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/openjdkbinary/openjdk-test-image/jdk/jtreg/native \
04:19:16 -ea \
04:19:16 -esa \
04:19:16 -Xbatch \
04:19:16 -XX:-TieredCompilation \
04:19:16 com.sun.javatest.regtest.agent.MainWrapper /home/jenkins/workspace/Test_openjdk18_j9_extended.openjdk_s390x_linux/aqa-tests/TKG/output_16445604291340/jdk_vector_0/work/jdk/incubator/vector/Short64VectorTests.d/testng.0.jta jdk/incubator/vector/Short64VectorTests.java false Short64VectorTests
04:19:16
04:19:16 TEST RESULT: Failed. Unexpected exit from test [exit code: 137]
```
@gita-omr
|
test
|
openjdk incubator vector unexpected exit from test internal build job test extended openjdk linux jdk vector xx usecompressedoops jdk incubator vector java artifacts ui native sys rt generic local hyc runtimes jenkins swg devops com test extended openjdk linux openjdk test output tar gz rerun cd home jenkins workspace test extended openjdk linux aqa tests tkg output jdk vector work scratch display home home jenkins lang c path bin usr bin usr sbin classpath home jenkins workspace test extended openjdk linux aqa tests tkg output jdk vector work classes jdk incubator vector d home jenkins workspace test extended openjdk linux aqa tests openjdk openjdk jdk test jdk jdk incubator vector home jenkins workspace test extended openjdk linux jvmtest openjdk jtreg lib testng jar home jenkins workspace test extended openjdk linux jvmtest openjdk jtreg lib jcommander jar home jenkins workspace test extended openjdk linux jvmtest openjdk jtreg lib guice jar home jenkins workspace test extended openjdk linux jvmtest openjdk jtreg lib javatest jar home jenkins workspace test extended openjdk linux jvmtest openjdk jtreg lib jtreg jar home jenkins workspace test extended openjdk linux openjdkbinary image bin java dtest vm opts ea esa xx usecompressedoops dtest tool vm opts j ea j esa j j xx usecompressedoops dtest compiler opts dtest java opts dtest jdk home jenkins workspace test extended openjdk linux openjdkbinary image dcompile jdk home jenkins workspace test extended openjdk linux openjdkbinary image dtest timeout factor dtest nativepath home jenkins workspace test extended openjdk linux openjdkbinary openjdk test image jdk jtreg native dtest root home jenkins workspace test extended openjdk linux aqa tests openjdk openjdk jdk test jdk dtest name jdk incubator vector java dtest file home jenkins workspace test extended openjdk linux aqa tests openjdk openjdk jdk test jdk jdk incubator vector java dtest src home jenkins workspace test extended openjdk linux aqa tests openjdk 
openjdk jdk test jdk jdk incubator vector dtest src path home jenkins workspace test extended openjdk linux aqa tests openjdk openjdk jdk test jdk jdk incubator vector dtest classes home jenkins workspace test extended openjdk linux aqa tests tkg output jdk vector work classes jdk incubator vector d dtest class path home jenkins workspace test extended openjdk linux aqa tests tkg output jdk vector work classes jdk incubator vector d dtest class path prefix home jenkins workspace test extended openjdk linux aqa tests tkg output jdk vector work classes jdk incubator vector d home jenkins workspace test extended openjdk linux aqa tests openjdk openjdk jdk test jdk jdk incubator vector dtest modules jdk incubator vector add modules jdk incubator vector ea esa xx usecompressedoops djava library path home jenkins workspace test extended openjdk linux openjdkbinary openjdk test image jdk jtreg native ea esa xbatch xx tieredcompilation com sun javatest regtest agent mainwrapper home jenkins workspace test extended openjdk linux aqa tests tkg output jdk vector work jdk incubator vector d testng jta jdk incubator vector java false test result failed unexpected exit from test gita omr
| 1
|
168,300
| 13,069,193,283
|
IssuesEvent
|
2020-07-31 05:54:35
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
tests/kernel/timer/timer_api test fails on twr_ke18f
|
area: Tests bug platform: NXP priority: low
|
**Describe the bug**
tests/kernel/timer/timer_api test fails on twr_ke18f
platform:
twrke18f
Location:
tests/kernel/timer/timer_api
**To Reproduce**
Steps to reproduce the behavior:
1. mkdir build; cd build
2. cmake -DBOARD=twr_ke18f ..
3. make flash
4. See error
**Expected behavior**
test pass
**Impact**
timer api
**Logs and console output**
```
*** Booting Zephyr OS version 2.3.99 ***
Running test suite timer_api
===================================================================
START - test_time_conversions
PASS - test_time_conversions
===================================================================
START - test_timer_duration_period
PASS - test_timer_duration_period
===================================================================
START - test_timer_period_0
PASS - test_timer_period_0
===================================================================
START - test_timer_expirefn_null
PASS - test_timer_expirefn_null
===================================================================
START - test_timer_periodicity
PASS - test_timer_periodicity
===================================================================
START - test_timer_status_get
PASS - test_timer_status_get
===================================================================
START - test_timer_status_get_anytime
PASS - test_timer_status_get_anytime
===================================================================
START - test_timer_status_sync
PASS - test_timer_status_sync
===================================================================
START - test_timer_k_define
Assertion failed at WEST_TOPDIR/zephyr/tests/kernel/timer/timer_api/src/main.c:94: duration_expire: (interval >= 100) || (((10000 % 1000U) != 0) && (interval == 100 - 1)) is false
FAIL - test_timer_k_define
===================================================================
START - test_timer_user_data
PASS - test_timer_user_data
===================================================================
START - test_timer_remaining
PASS - test_timer_remaining
===================================================================
START - test_timeout_abs
PASS - test_timeout_abs
===================================================================
Test suite timer_api failed.
===================================================================
PROJECT EXECUTION FAILED
```
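The failing check at `main.c:94` encodes a tick-rounding tolerance: the measured interval must reach the 100 ms duration, or be allowed to fall exactly 1 ms short when the tick rate does not divide evenly into milliseconds. A minimal Python restatement of that condition (the constants are read off the logged assertion; treating 10000 as the ticks-per-second value is an assumption):

```python
# Restatement of the logged assertion:
# (interval >= 100) || (((10000 % 1000U) != 0) && (interval == 100 - 1))
def duration_expire_ok(interval_ms, duration_ms=100,
                       ticks_per_sec=10000, ms_per_sec=1000):
    # The full duration must have elapsed...
    exact = interval_ms >= duration_ms
    # ...unless the tick rate does not divide milliseconds evenly,
    # in which case a 1 ms rounding shortfall is tolerated.
    rounded = (ticks_per_sec % ms_per_sec) != 0 and interval_ms == duration_ms - 1
    return exact or rounded
```

With the logged values, `10000 % 1000 == 0`, so the rounding escape hatch never applies and any 99 ms interval fails the check, matching the reported failure.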
**Environment (please complete the following information):**
- OS: (e.g. Linux)
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA: ab3e778f47d
|
1.0
|
tests/kernel/timer/timer_api test fails on twr_ke18f - **Describe the bug**
tests/kernel/timer/timer_api test fails on twr_ke18f
platform:
twrke18f
Location:
tests/kernel/timer/timer_api
**To Reproduce**
Steps to reproduce the behavior:
1. mkdir build; cd build
2. cmake -DBOARD=twr_ke18f ..
3. make flash
4. See error
**Expected behavior**
test pass
**Impact**
timer api
**Logs and console output**
```
*** Booting Zephyr OS version 2.3.99 ***
Running test suite timer_api
===================================================================
START - test_time_conversions
PASS - test_time_conversions
===================================================================
START - test_timer_duration_period
PASS - test_timer_duration_period
===================================================================
START - test_timer_period_0
PASS - test_timer_period_0
===================================================================
START - test_timer_expirefn_null
PASS - test_timer_expirefn_null
===================================================================
START - test_timer_periodicity
PASS - test_timer_periodicity
===================================================================
START - test_timer_status_get
PASS - test_timer_status_get
===================================================================
START - test_timer_status_get_anytime
PASS - test_timer_status_get_anytime
===================================================================
START - test_timer_status_sync
PASS - test_timer_status_sync
===================================================================
START - test_timer_k_define
Assertion failed at WEST_TOPDIR/zephyr/tests/kernel/timer/timer_api/src/main.c:94: duration_expire: (interval >= 100) || (((10000 % 1000U) != 0) && (interval == 100 - 1)) is false
FAIL - test_timer_k_define
===================================================================
START - test_timer_user_data
PASS - test_timer_user_data
===================================================================
START - test_timer_remaining
PASS - test_timer_remaining
===================================================================
START - test_timeout_abs
PASS - test_timeout_abs
===================================================================
Test suite timer_api failed.
===================================================================
PROJECT EXECUTION FAILED
```
**Environment (please complete the following information):**
- OS: (e.g. Linux)
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA: ab3e778f47d
|
test
|
tests kernel timer timer api test fails on twr describe the bug tests kernel timer timer api test fails on twr platform location tests kernel timer timer api to reproduce steps to reproduce the behavior mkdir build cd build cmake dboard twr make flash see error expected behavior test pass impact timer api logs and console output booting zephyr os version running test suite timer api start test time conversions pass test time conversions start test timer duration period pass test timer duration period start test timer period pass test timer period start test timer expirefn null pass test timer expirefn null start test timer periodicity pass test timer periodicity start test timer status get pass test timer status get start test timer status get anytime pass test timer status get anytime start test timer status sync pass test timer status sync start test timer k define assertion failed at west topdir zephyr tests kernel timer timer api src main c duration expire interval interval is false fail test timer k define start test timer user data pass test timer user data start test timer remaining pass test timer remaining start test timeout abs pass test timeout abs test suite timer api failed project execution failed environment please complete the following information os e g linux toolchain e g zephyr sdk commit sha
| 1
|
408,581
| 27,697,037,171
|
IssuesEvent
|
2023-03-14 03:39:24
|
HypsyNZ/Binance-Trader
|
https://api.github.com/repos/HypsyNZ/Binance-Trader
|
reopened
|
Order placing failed: 200004017
|
documentation
|
>[03/14/23 16:23:48:081][Error] Order placing failed: 200004017|Don't miss out! We need your consent to send you occasional updates on exciting developments. Just click the button below to stay up to date. Yes! I want to be the first to know about special offers and exclusive Binance events. You can change your marketing preferences at any time by visiting Settings, Preferences, Marketing Emails and then selecting whether you want to receive marketing emails from us.
You will experience this error when `Binance` requires you to log in to the website and accept a confirmation prompt.
You will need to restart `Binance Trader` after confirming the prompt.
This will be fixed in an upcoming update
|
1.0
|
Order placing failed: 200004017 - >[03/14/23 16:23:48:081][Error] Order placing failed: 200004017|Don't miss out! We need your consent to send you occasional updates on exciting developments. Just click the button below to stay up to date. Yes! I want to be the first to know about special offers and exclusive Binance events. You can change your marketing preferences at any time by visiting Settings, Preferences, Marketing Emails and then selecting whether you want to receive marketing emails from us.
You will experience this error when `Binance` requires you to log in to the website and accept a confirmation prompt.
You will need to restart `Binance Trader` after confirming the prompt.
This will be fixed in an upcoming update
|
non_test
|
order placing failed order placing failed don t miss out we need your consent to send you occasional updates on exciting developments just click the button below to stay up to date yes i want to be the first to know about special offers and exclusive binance events you can change your marketing preferences at any time by visiting settings preferences marketing emails and then selecting whether you want to receive marketing emails from us you will experience this error when binance requires you to log in to the website and accept a confirmation prompt you will need to restart binance trader after confirming the prompt this will be fixed in an upcoming update
| 0
|
198,582
| 14,987,679,534
|
IssuesEvent
|
2021-01-28 23:26:01
|
funcx-faas/funcX
|
https://api.github.com/repos/funcx-faas/funcX
|
opened
|
[TestSuite] Testing SDK function registration
|
Testing good first issue help wanted
|
Test the following scenarios in the function registration step:
- [ ] Register with non-existent endpoint
- [ ] Register non-serializable function (eg, a generator)
- [ ] Generic errors with reaching the web-service
|
1.0
|
[TestSuite] Testing SDK function registration - Test the following scenarios in the function registration step:
- [ ] Register with non-existent endpoint
- [ ] Register non-serializable function (eg, a generator)
- [ ] Generic errors with reaching the web-service
|
test
|
testing sdk function registration test the following scenarios in the function registration step register with non existent endpoint register non serializable function eg a generator generic errors with reaching the web service
| 1
|
235,632
| 19,405,998,957
|
IssuesEvent
|
2021-12-20 00:46:10
|
MohistMC/Mohist
|
https://api.github.com/repos/MohistMC/Mohist
|
closed
|
Bug Report LuckPerms 5.3.53
|
Bug 1.16.5 Plugin Needs Testing
|
<!-- Thank you for reporting ! Please note that issues can take a lot of time to be fixed and there is no eta.-->
<!-- If you don't know where to upload your logs and crash reports, you can use these websites : -->
<!-- https://paste.ubuntu.com/ (recommended) -->
<!-- https://mclo.gs -->
<!-- https://haste.mohistmc.com -->
<!-- https://pastebin.com -->
<!-- TO FILL THIS TEMPLATE, YOU NEED TO REPLACE THE {} BY WHAT YOU WANT -->
**Minecraft Version :** {1.16.5}
**Mohist Version :** {724}
**Operating System :** {Linux Ubuntu (server) ; Windows 10 (me)}
**Logs :** {[latest (3).log](https://github.com/MohistMC/Mohist/files/6864957/latest.3.log) (I have no links with your links)}
**Mod list :** {No mods}
**Plugin list :** {DiscordSRV, EssentialsX (Antibuild, Chat, Protect, Spawn, LuckPerms, Vault}
**Description of issue :** {When I try to run my server it keeps "loading configuration" of Luck perms and after hours nothing happen, it seems that the server froze anyways the server don't start at all.}
|
1.0
|
Bug Report LuckPerms 5.3.53 - <!-- Thank you for reporting ! Please note that issues can take a lot of time to be fixed and there is no eta.-->
<!-- If you don't know where to upload your logs and crash reports, you can use these websites : -->
<!-- https://paste.ubuntu.com/ (recommended) -->
<!-- https://mclo.gs -->
<!-- https://haste.mohistmc.com -->
<!-- https://pastebin.com -->
<!-- TO FILL THIS TEMPLATE, YOU NEED TO REPLACE THE {} BY WHAT YOU WANT -->
**Minecraft Version :** {1.16.5}
**Mohist Version :** {724}
**Operating System :** {Linux Ubuntu (server) ; Windows 10 (me)}
**Logs :** {[latest (3).log](https://github.com/MohistMC/Mohist/files/6864957/latest.3.log) (I have no links with your links)}
**Mod list :** {No mods}
**Plugin list :** {DiscordSRV, EssentialsX (Antibuild, Chat, Protect, Spawn, LuckPerms, Vault}
**Description of issue :** {When I try to run my server it keeps "loading configuration" of Luck perms and after hours nothing happen, it seems that the server froze anyways the server don't start at all.}
|
test
|
bug report luckperms minecraft version mohist version operating system linux ubuntu server windows me logs i have no links with your links mod list no mods plugin list discordsrv essentialsx antibuild chat protect spawn luckperms vault description of issue when i try to run my server it keeps loading configuration of luck perms and after hours nothing happen it seems that the server froze anyways the server don t start at all
| 1
|
202,666
| 15,294,697,862
|
IssuesEvent
|
2021-02-24 03:04:00
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
sql/tests: TestDescriptorRepairOrphanedDescriptors failed
|
C-test-failure O-robot branch-master skipped-test
|
[(sql/tests).TestDescriptorRepairOrphanedDescriptors failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2695461&tab=buildLog) on [master@cb0d14a39c32772494a42362fbbcbc308dbf4b35](https://github.com/cockroachdb/cockroach/commits/cb0d14a39c32772494a42362fbbcbc308dbf4b35):
```
=== RUN TestDescriptorRepairOrphanedDescriptors
test_log_scope.go:73: test logs captured to: /go/src/github.com/cockroachdb/cockroach/artifacts/logTestDescriptorRepairOrphanedDescriptors689702253
test_log_scope.go:74: use -show-logs to present logs inline
=== CONT TestDescriptorRepairOrphanedDescriptors
repair_test.go:246: -- test log scope end --
test logs left over in: /go/src/github.com/cockroachdb/cockroach/artifacts/logTestDescriptorRepairOrphanedDescriptors689702253
--- FAIL: TestDescriptorRepairOrphanedDescriptors (11.07s)
=== RUN TestDescriptorRepairOrphanedDescriptors/orphaned_table_with_data_-_51782
repair_test.go:146:
Error Trace: repair_test.go:146
Error: Received unexpected error:
pq: setting updated but timed out waiting to read new value
Test: TestDescriptorRepairOrphanedDescriptors/orphaned_table_with_data_-_51782
--- FAIL: TestDescriptorRepairOrphanedDescriptors/orphaned_table_with_data_-_51782 (10.51s)
```
<details><summary>More</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=TestDescriptorRepairOrphanedDescriptors PKG=./pkg/sql/tests TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestDescriptorRepairOrphanedDescriptors.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
sql/tests: TestDescriptorRepairOrphanedDescriptors failed - [(sql/tests).TestDescriptorRepairOrphanedDescriptors failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2695461&tab=buildLog) on [master@cb0d14a39c32772494a42362fbbcbc308dbf4b35](https://github.com/cockroachdb/cockroach/commits/cb0d14a39c32772494a42362fbbcbc308dbf4b35):
```
=== RUN TestDescriptorRepairOrphanedDescriptors
test_log_scope.go:73: test logs captured to: /go/src/github.com/cockroachdb/cockroach/artifacts/logTestDescriptorRepairOrphanedDescriptors689702253
test_log_scope.go:74: use -show-logs to present logs inline
=== CONT TestDescriptorRepairOrphanedDescriptors
repair_test.go:246: -- test log scope end --
test logs left over in: /go/src/github.com/cockroachdb/cockroach/artifacts/logTestDescriptorRepairOrphanedDescriptors689702253
--- FAIL: TestDescriptorRepairOrphanedDescriptors (11.07s)
=== RUN TestDescriptorRepairOrphanedDescriptors/orphaned_table_with_data_-_51782
repair_test.go:146:
Error Trace: repair_test.go:146
Error: Received unexpected error:
pq: setting updated but timed out waiting to read new value
Test: TestDescriptorRepairOrphanedDescriptors/orphaned_table_with_data_-_51782
--- FAIL: TestDescriptorRepairOrphanedDescriptors/orphaned_table_with_data_-_51782 (10.51s)
```
<details><summary>More</summary><p>
Parameters:
- GOFLAGS=-json
```
make stressrace TESTS=TestDescriptorRepairOrphanedDescriptors PKG=./pkg/sql/tests TESTTIMEOUT=5m STRESSFLAGS='-timeout 5m' 2>&1
```
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2ATestDescriptorRepairOrphanedDescriptors.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
test
|
sql tests testdescriptorrepairorphaneddescriptors failed on run testdescriptorrepairorphaneddescriptors test log scope go test logs captured to go src github com cockroachdb cockroach artifacts test log scope go use show logs to present logs inline cont testdescriptorrepairorphaneddescriptors repair test go test log scope end test logs left over in go src github com cockroachdb cockroach artifacts fail testdescriptorrepairorphaneddescriptors run testdescriptorrepairorphaneddescriptors orphaned table with data repair test go error trace repair test go error received unexpected error pq setting updated but timed out waiting to read new value test testdescriptorrepairorphaneddescriptors orphaned table with data fail testdescriptorrepairorphaneddescriptors orphaned table with data more parameters goflags json make stressrace tests testdescriptorrepairorphaneddescriptors pkg pkg sql tests testtimeout stressflags timeout powered by
| 1
|
252,714
| 21,626,811,958
|
IssuesEvent
|
2022-05-05 04:04:50
|
commaai/openpilot
|
https://api.github.com/repos/commaai/openpilot
|
opened
|
Reduce onroad initializing time + add CI test
|
enhancement testing
|
`controlsd` waits until all services are up and valid for up to 3.5s, but it's not uncommon for that timeout to be hit. Everything should fit comfortably in that 3.5s.
https://github.com/commaai/openpilot/blob/1bc6f2fa7db49ebd3d29063e2a93234488b88db0/selfdrive/controls/controlsd.py#L401
|
1.0
|
Reduce onroad initializing time + add CI test - `controlsd` waits until all services are up and valid for up to 3.5s, but it's not uncommon for that timeout to be hit. Everything should fit comfortably in that 3.5s.
https://github.com/commaai/openpilot/blob/1bc6f2fa7db49ebd3d29063e2a93234488b88db0/selfdrive/controls/controlsd.py#L401
|
test
|
reduce onroad initializing time add ci test controlsd waits until all services are up and valid for up to but it s not uncommon for that timeout to be hit everything should fit comfortably in that
| 1
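The wait-until-valid-or-timeout behaviour described in the record above can be sketched as a generic polling loop. Names here are illustrative only — this is not openpilot's actual `controlsd` code.

```python
import time

def wait_for_services(ready, timeout=3.5, poll=0.05):
    """Poll ready() until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if ready():
            return True   # all services came up within the budget
        time.sleep(poll)
    return False          # timed out: the case the issue wants to make rare
```

A CI check for initialization time would call this with the real readiness predicate and fail the build whenever it returns `False`.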
|
477,684
| 13,766,373,003
|
IssuesEvent
|
2020-10-07 14:29:47
|
ansible/awx
|
https://api.github.com/repos/ansible/awx
|
opened
|
Host details variables showing JSON format without indentation on load
|
component:ui_next priority:low state:needs_devel type:bug
|
<!-- Issues are for **concrete, actionable bugs and feature requests** only - if you're just asking for debugging help or technical support, please use:
- http://webchat.freenode.net/?channels=ansible-awx
- https://groups.google.com/forum/#!forum/awx-project
We have to limit this because of limited volunteer time to respond to issues! -->
##### ISSUE TYPE
- Bug Report
##### SUMMARY
Host details variables showing JSON format without indentation on load
##### ENVIRONMENT
* AWX version: 15.0.0
* AWX install method:docker for mac
* Ansible version: 2.9.13
* Operating System: Catalina
* Web Browser: Chrome
##### STEPS TO REPRODUCE
1. Navigate to Hosts
2. Select a host which has variables
##### EXPECTED RESULTS
The CodeMirror should show the JSON indented
##### ACTUAL RESULTS
The JSON is displayed without indentation
##### ADDITIONAL INFORMATION

|
1.0
|
Host details variables showing JSON format without indentation on load - <!-- Issues are for **concrete, actionable bugs and feature requests** only - if you're just asking for debugging help or technical support, please use:
- http://webchat.freenode.net/?channels=ansible-awx
- https://groups.google.com/forum/#!forum/awx-project
We have to limit this because of limited volunteer time to respond to issues! -->
##### ISSUE TYPE
- Bug Report
##### SUMMARY
Host details variables showing JSON format without indentation on load
##### ENVIRONMENT
* AWX version: 15.0.0
* AWX install method:docker for mac
* Ansible version: 2.9.13
* Operating System: Catalina
* Web Browser: Chrome
##### STEPS TO REPRODUCE
1. Navigate to Hosts
2. Select a host which has variables
##### EXPECTED RESULTS
The CodeMirror should show the JSON indented
##### ACTUAL RESULTS
The JSON is displayed without indentation
##### ADDITIONAL INFORMATION

|
non_test
|
host details variables showing json format without indentation on load issues are for concrete actionable bugs and feature requests only if you re just asking for debugging help or technical support please use we have to limit this because of limited volunteer time to respond to issues issue type bug report summary host details variables showing json format without indentation on load environment awx version awx install method docker for mac ansible version operating system catalina web browser chrome steps to reproduce navigate to hosts select a host which has variables expected results the codemirror should show the json indented actual results the json is displayed without indentation additional information
| 0
|
79,209
| 15,586,114,074
|
IssuesEvent
|
2021-03-18 01:12:16
|
mibo32/fitbit-api-example-java
|
https://api.github.com/repos/mibo32/fitbit-api-example-java
|
opened
|
WS-2017-0117 (Medium) detected in angularjs-1.4.3.jar
|
security vulnerability
|
## WS-2017-0117 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>angularjs-1.4.3.jar</b></p></summary>
<p>WebJar for AngularJS</p>
<p>Library home page: <a href="http://webjars.org">http://webjars.org</a></p>
<p>Path to dependency file: fitbit-api-example-java/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/org/webjars/angularjs/1.4.3/angularjs-1.4.3.jar</p>
<p>
Dependency Hierarchy:
- :x: **angularjs-1.4.3.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Affected versions of the package are vulnerable to Cross-site Scripting (XSS) attacks.
<p>Publish Date: 2015-11-30
<p>URL: <a href=https://github.com/angular/angular.js/commit/5a674f3bb9d1118d11b333e3b966c01a571c09e6>WS-2017-0117</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Change files</p>
<p>Origin: <a href="https://github.com/angular/angular.js/commit/5a674f3bb9d1118d11b333e3b966c01a571c09e6">https://github.com/angular/angular.js/commit/5a674f3bb9d1118d11b333e3b966c01a571c09e6</a></p>
<p>Release Date: 2015-12-06</p>
<p>Fix Resolution: Replace or update the following files: parseSpec.js, parse.js</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2017-0117 (Medium) detected in angularjs-1.4.3.jar - ## WS-2017-0117 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>angularjs-1.4.3.jar</b></p></summary>
<p>WebJar for AngularJS</p>
<p>Library home page: <a href="http://webjars.org">http://webjars.org</a></p>
<p>Path to dependency file: fitbit-api-example-java/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/org/webjars/angularjs/1.4.3/angularjs-1.4.3.jar</p>
<p>
Dependency Hierarchy:
- :x: **angularjs-1.4.3.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Affected versions of the package are vulnerable to Cross-site Scripting (XSS) attacks.
<p>Publish Date: 2015-11-30
<p>URL: <a href=https://github.com/angular/angular.js/commit/5a674f3bb9d1118d11b333e3b966c01a571c09e6>WS-2017-0117</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Change files</p>
<p>Origin: <a href="https://github.com/angular/angular.js/commit/5a674f3bb9d1118d11b333e3b966c01a571c09e6">https://github.com/angular/angular.js/commit/5a674f3bb9d1118d11b333e3b966c01a571c09e6</a></p>
<p>Release Date: 2015-12-06</p>
<p>Fix Resolution: Replace or update the following files: parseSpec.js, parse.js</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
ws medium detected in angularjs jar ws medium severity vulnerability vulnerable library angularjs jar webjar for angularjs library home page a href path to dependency file fitbit api example java pom xml path to vulnerable library canner repository org webjars angularjs angularjs jar dependency hierarchy x angularjs jar vulnerable library vulnerability details affected versions of the package are vulnerable to cross site scripting xss attacks publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope changed impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type change files origin a href release date fix resolution replace or update the following files parsespec js parse js step up your open source security game with whitesource
| 0
|
251,684
| 21,517,413,670
|
IssuesEvent
|
2022-04-28 11:14:16
|
mozilla-mobile/focus-android
|
https://api.github.com/repos/mozilla-mobile/focus-android
|
reopened
|
Intermittent UI test failure - < SafeBrowsingTest. unblockSafeBrowsingTest >
|
crash 🔥 eng:ui-test eng:intermittent-test
|
### Firebase Test Run: [Firebase link](https://console.firebase.google.com/u/0/project/moz-focus-android/testlab/histories/bh.2189b040bbce6d5a/matrices/5047216479076315206/executions/bs.678a4f6d9cc9a5ef/testcases/2/test-cases)
### Build: 3/8 Main
### Notes
❗ Not sure what is causing this crash
The last performed action was verifyPageContent("It’s an Attack!") so the UI test ran to completion
Similar with #6439 #6437 #6436 #6523 #6538 #6600 and #6601
[Logcat](https://github.com/mozilla-mobile/focus-android/files/8206570/unblockSafeBrowsingTest.txt)
|
2.0
|
Intermittent UI test failure - < SafeBrowsingTest. unblockSafeBrowsingTest > - ### Firebase Test Run: [Firebase link](https://console.firebase.google.com/u/0/project/moz-focus-android/testlab/histories/bh.2189b040bbce6d5a/matrices/5047216479076315206/executions/bs.678a4f6d9cc9a5ef/testcases/2/test-cases)
### Build: 3/8 Main
### Notes
❗ Not sure what is causing this crash
The last performed action was verifyPageContent("It’s an Attack!") so the UI test ran to completion
Similar with #6439 #6437 #6436 #6523 #6538 #6600 and #6601
[Logcat](https://github.com/mozilla-mobile/focus-android/files/8206570/unblockSafeBrowsingTest.txt)
|
test
|
intermittent ui test failure firebase test run build main notes ❗ not sure what is causing this crash the last performed action was verifypagecontent it’s an attack so the ui test ran to completion similar with and
| 1
|
76,246
| 9,924,310,602
|
IssuesEvent
|
2019-07-01 09:24:07
|
kubermatic/kubeone
|
https://api.github.com/repos/kubermatic/kubeone
|
opened
|
Update the contributing guide to explain how to sign existing commits
|
kind/documentation
|
As Prow and the DCO plugin require all commits to be signed, the contributing guide should mention that and explain how to sign the existing commits.
|
1.0
|
Update the contributing guide to explain how to sign existing commits - As Prow and the DCO plugin require all commits to be signed, the contributing guide should mention that and explain how to sign the existing commits.
|
non_test
|
update the contributing guide to explain how to sign existing commits as prow and the dco plugin require all commits to be signed the contributing guide should mention that and explain how to sign the existing commits
| 0
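For context on the record above: existing commits can be retroactively signed off with `git rebase --signoff`. The snippet below only demonstrates the effect in a throwaway repository — it assumes a reasonably recent `git` on PATH, and the repo, file, and identity are all made up for illustration.

```python
import os
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command in `cwd` and return its stdout."""
    return subprocess.run(("git",) + args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)
git("config", "user.name", "Dev", cwd=repo)
git("config", "user.email", "dev@example.com", cwd=repo)
with open(os.path.join(repo, "f.txt"), "w") as f:
    f.write("hi\n")
git("add", "f.txt", cwd=repo)
git("commit", "-q", "-m", "unsigned change", cwd=repo)  # no Signed-off-by yet
git("rebase", "--signoff", "--root", cwd=repo)          # rewrite history, appending the trailer
print("Signed-off-by" in git("log", "-1", "--format=%B", cwd=repo))
```

A contributing guide would typically show the equivalent one-liners directly: `git commit --amend --signoff` for the last commit, or `git rebase --signoff <base>` for a range.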
|
59,940
| 7,303,265,353
|
IssuesEvent
|
2018-02-27 12:33:20
|
nextcloud/server
|
https://api.github.com/repos/nextcloud/server
|
closed
|
Add public page template for apps
|
1. to develop design enhancement
|
Many apps now offer public pages that are shown to users that are not logged in to the instance.
There should be a common template for those pages, that apps can use.
Right now every app includes its own template:
- gallery: https://github.com/nextcloud/gallery/blob/master/templates/public.php
- calendar: https://github.com/nextcloud/calendar/blob/master/templates/public.php
- files_sharing: https://github.com/nextcloud/server/blob/master/apps/files_sharing/templates/public.php
- spreed: https://github.com/nextcloud/spreed/blob/master/templates/index-public.php
- polls needs that as well: https://github.com/nextcloud/polls/issues/96#issuecomment-330154123
@nextcloud/designers
|
1.0
|
Add public page template for apps - Many apps now offer public pages that are shown to users that are not logged in to the instance.
There should be a common template for those pages, that apps can use.
Right now every app includes its own template:
- gallery: https://github.com/nextcloud/gallery/blob/master/templates/public.php
- calendar: https://github.com/nextcloud/calendar/blob/master/templates/public.php
- files_sharing: https://github.com/nextcloud/server/blob/master/apps/files_sharing/templates/public.php
- spreed: https://github.com/nextcloud/spreed/blob/master/templates/index-public.php
- polls needs that as well: https://github.com/nextcloud/polls/issues/96#issuecomment-330154123
@nextcloud/designers
|
non_test
|
add public page template for apps many apps now offer public pages that are shown to users that are not logged in to the instance there should be a common template for those pages that apps can use right now every app includes its own template gallery calendar files sharing spreed polls needs that as well nextcloud designers
| 0
|
172,214
| 13,282,198,417
|
IssuesEvent
|
2020-08-23 21:27:26
|
Azure/aks-engine
|
https://api.github.com/repos/Azure/aks-engine
|
closed
|
metrics-server unable to resolve hosts, resolved upon deleting pod
|
bug e2e test signal stale
|
metrics not being reported:
```
$ kubectl top nodes
- error: metrics not available yet
```
relevant metrics-server logs:
```
E0617 17:21:45.513235 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:k8s-agentpool-29602602-vmss000000: unable to fetch metrics from Kubelet k8s-agentpool-29602602-vmss000000 (k8s-agentpool-29602602-vmss000000): Get https://k8s-agentpool-29602602-vmss000000:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup k8s-agentpool-29602602-vmss000000 on 10.0.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:k8s-master-29602602-0: unable to fetch metrics from Kubelet k8s-master-29602602-0 (k8s-master-29602602-0): Get https://k8s-master-29602602-0:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup k8s-master-29602602-0 on 10.0.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:2960k8s01000001: unable to fetch metrics from Kubelet 2960k8s01000001 (2960k8s01000001): Get https://2960k8s01000001:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup 2960k8s01000001 on 10.0.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:k8s-agentpool-29602602-vmss000001: unable to fetch metrics from Kubelet k8s-agentpool-29602602-vmss000001 (k8s-agentpool-29602602-vmss000001): Get https://k8s-agentpool-29602602-vmss000001:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup k8s-agentpool-29602602-vmss000001 on 10.0.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:2960k8s01000000: unable to fetch metrics from Kubelet 2960k8s01000000 (2960k8s01000000): Get https://2960k8s01000000:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup 2960k8s01000000 on 10.0.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:k8s-agentpool-29602602-vmss000002: unable to fetch metrics from Kubelet k8s-agentpool-29602602-vmss000002 (k8s-agentpool-29602602-vmss000002): Get 
https://k8s-agentpool-29602602-vmss000002:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup k8s-agentpool-29602602-vmss000002 on 10.0.0.10:53: no such host]
```
After deleting the metrics-server pod, things are better :/
```
azureuser@k8s-master-29602602-0:~$ kubectl delete pod metrics-server-5b5b7d447d-bq7zr -n kube-system
pod "metrics-server-5b5b7d447d-bq7zr" deleted
azureuser@k8s-master-29602602-0:~$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
2960k8s01000000 86m 4% 757Mi 12%
k8s-agentpool-29602602-vmss000000 80m 4% 1000Mi 16%
k8s-agentpool-29602602-vmss000001 82m 4% 1004Mi 16%
k8s-agentpool-29602602-vmss000002 80m 4% 1026Mi 16%
k8s-master-29602602-0 156m 7% 1779Mi 28%
2960k8s01000001 <unknown> <unknown> <unknown> <unknown>
```
|
1.0
|
metrics-server unable to resolve hosts, resolved upon deleting pod - metrics not being reported:
```
$ kubectl top nodes
- error: metrics not available yet
```
relevant metrics-server logs:
```
E0617 17:21:45.513235 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:k8s-agentpool-29602602-vmss000000: unable to fetch metrics from Kubelet k8s-agentpool-29602602-vmss000000 (k8s-agentpool-29602602-vmss000000): Get https://k8s-agentpool-29602602-vmss000000:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup k8s-agentpool-29602602-vmss000000 on 10.0.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:k8s-master-29602602-0: unable to fetch metrics from Kubelet k8s-master-29602602-0 (k8s-master-29602602-0): Get https://k8s-master-29602602-0:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup k8s-master-29602602-0 on 10.0.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:2960k8s01000001: unable to fetch metrics from Kubelet 2960k8s01000001 (2960k8s01000001): Get https://2960k8s01000001:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup 2960k8s01000001 on 10.0.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:k8s-agentpool-29602602-vmss000001: unable to fetch metrics from Kubelet k8s-agentpool-29602602-vmss000001 (k8s-agentpool-29602602-vmss000001): Get https://k8s-agentpool-29602602-vmss000001:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup k8s-agentpool-29602602-vmss000001 on 10.0.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:2960k8s01000000: unable to fetch metrics from Kubelet 2960k8s01000000 (2960k8s01000000): Get https://2960k8s01000000:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup 2960k8s01000000 on 10.0.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:k8s-agentpool-29602602-vmss000002: unable to fetch metrics from Kubelet k8s-agentpool-29602602-vmss000002 (k8s-agentpool-29602602-vmss000002): Get 
https://k8s-agentpool-29602602-vmss000002:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup k8s-agentpool-29602602-vmss000002 on 10.0.0.10:53: no such host]
```
After deleting the metrics-server pod, things are better :/
```
azureuser@k8s-master-29602602-0:~$ kubectl delete pod metrics-server-5b5b7d447d-bq7zr -n kube-system
pod "metrics-server-5b5b7d447d-bq7zr" deleted
azureuser@k8s-master-29602602-0:~$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
2960k8s01000000 86m 4% 757Mi 12%
k8s-agentpool-29602602-vmss000000 80m 4% 1000Mi 16%
k8s-agentpool-29602602-vmss000001 82m 4% 1004Mi 16%
k8s-agentpool-29602602-vmss000002 80m 4% 1026Mi 16%
k8s-master-29602602-0 156m 7% 1779Mi 28%
2960k8s01000001 <unknown> <unknown> <unknown> <unknown>
```
|
test
|
metrics server unable to resolve hosts resolved upon deleting pod metrics not being reported kubectl top nodes error metrics not available yet relevant metrics server logs manager go unable to fully collect metrics after deleting the metrics server pod things are better azureuser master kubectl delete pod metrics server n kube system pod metrics server deleted azureuser master kubectl top nodes name cpu cores cpu memory bytes memory agentpool agentpool agentpool master
| 1
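The `no such host` errors in the log above are plain DNS-resolution failures. The check below reproduces that failure mode generically — a sketch only; metrics-server itself resolves node names through the cluster DNS service (10.0.0.10 in the log), not the local resolver.

```python
import socket

def resolves(host):
    """Return True if `host` resolves, mirroring the lookup that failed in the log."""
    try:
        socket.getaddrinfo(host, None)
        return True
    except socket.gaierror:  # "no such host" surfaces as gaierror
        return False
```

As a usage example, `resolves("localhost")` succeeds while `resolves("k8s-agentpool.invalid")` fails, since the `.invalid` TLD is reserved and never resolves (RFC 2606).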
|
40,453
| 5,292,440,576
|
IssuesEvent
|
2017-02-09 02:10:32
|
uccser/kordac
|
https://api.github.com/repos/uccser/kordac
|
closed
|
Load html templates from file
|
restructuring testing
|
Currently each tag has a html string defined in it's processor file, these should be retrieved from a template file instead.
|
1.0
|
Load html templates from file - Currently each tag has a html string defined in it's processor file, these should be retrieved from a template file instead.
|
test
|
load html templates from file currently each tag has a html string defined in it s processor file these should be retrieved from a template file instead
| 1
|
84,440
| 24,309,650,376
|
IssuesEvent
|
2022-09-29 20:50:47
|
orbeon/orbeon-forms
|
https://api.github.com/repos/orbeon/orbeon-forms
|
closed
|
Permissions UI: checkboxes no longer show as readonly
|
Module: Form Builder Priority: Regression
|
Check "Update" and notice other checkboxes are not readonly.
|
1.0
|
Permissions UI: checkboxes no longer show as readonly - Check "Update" and notice other checkboxes are not readonly.
|
non_test
|
permissions ui checkboxes no longer show as readonly check update and notice other checkboxes are not readonly
| 0
|
347,365
| 31,160,396,242
|
IssuesEvent
|
2023-08-16 15:36:21
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
Failing test: "before all" hook for "should have READ access to Endpoint list page" - Roles for Security Essential PLI with Endpoint Essentials addon for role: t1_analyst "before all" hook for "should have READ access to Endpoint list page"
|
failed-test Team:Defend Workflows
|
A test failed on a tracked branch
```
CypressError: `cy.task('indexEndpointHosts')` timed out after waiting `240000ms`.
https://on.cypress.io/api/task
Because this error occurred during a `before all` hook we are skipping the remaining tests in the current suite: `Roles for Security Essentia...`
at <unknown> (http://localhost:5678/__cypress/runner/cypress_runner.js:150950:78)
at tryCatcher (http://localhost:5678/__cypress/runner/cypress_runner.js:18744:23)
at <unknown> (http://localhost:5678/__cypress/runner/cypress_runner.js:13866:41)
at tryCatcher (http://localhost:5678/__cypress/runner/cypress_runner.js:18744:23)
at Promise._settlePromiseFromHandler (http://localhost:5678/__cypress/runner/cypress_runner.js:16679:31)
at Promise._settlePromise (http://localhost:5678/__cypress/runner/cypress_runner.js:16736:18)
at Promise._settlePromise0 (http://localhost:5678/__cypress/runner/cypress_runner.js:16781:10)
at Promise._settlePromises (http://localhost:5678/__cypress/runner/cypress_runner.js:16857:18)
at _drainQueueStep (http://localhost:5678/__cypress/runner/cypress_runner.js:13451:12)
at _drainQueue (http://localhost:5678/__cypress/runner/cypress_runner.js:13444:9)
at ../../node_modules/bluebird/js/release/async.js.Async._drainQueues (http://localhost:5678/__cypress/runner/cypress_runner.js:13460:5)
at Async.drainQueues (http://localhost:5678/__cypress/runner/cypress_runner.js:13330:14)
From Your Spec Code:
at Context.eval (webpack:///./e2e/endpoint_management/roles/essentials_with_endpoint.roles.cy.ts:47:9)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-serverless/builds/1874#0189f547-bde0-49e0-9e5e-7d5c2fe8bb5f)
<!-- kibanaCiData = {"failed-test":{"test.class":"\"before all\" hook for \"should have READ access to Endpoint list page\"","test.name":"Roles for Security Essential PLI with Endpoint Essentials addon for role: t1_analyst \"before all\" hook for \"should have READ access to Endpoint list page\"","test.failCount":6}} -->
|
1.0
|
Failing test: "before all" hook for "should have READ access to Endpoint list page" - Roles for Security Essential PLI with Endpoint Essentials addon for role: t1_analyst "before all" hook for "should have READ access to Endpoint list page" - A test failed on a tracked branch
```
CypressError: `cy.task('indexEndpointHosts')` timed out after waiting `240000ms`.
https://on.cypress.io/api/task
Because this error occurred during a `before all` hook we are skipping the remaining tests in the current suite: `Roles for Security Essentia...`
at <unknown> (http://localhost:5678/__cypress/runner/cypress_runner.js:150950:78)
at tryCatcher (http://localhost:5678/__cypress/runner/cypress_runner.js:18744:23)
at <unknown> (http://localhost:5678/__cypress/runner/cypress_runner.js:13866:41)
at tryCatcher (http://localhost:5678/__cypress/runner/cypress_runner.js:18744:23)
at Promise._settlePromiseFromHandler (http://localhost:5678/__cypress/runner/cypress_runner.js:16679:31)
at Promise._settlePromise (http://localhost:5678/__cypress/runner/cypress_runner.js:16736:18)
at Promise._settlePromise0 (http://localhost:5678/__cypress/runner/cypress_runner.js:16781:10)
at Promise._settlePromises (http://localhost:5678/__cypress/runner/cypress_runner.js:16857:18)
at _drainQueueStep (http://localhost:5678/__cypress/runner/cypress_runner.js:13451:12)
at _drainQueue (http://localhost:5678/__cypress/runner/cypress_runner.js:13444:9)
at ../../node_modules/bluebird/js/release/async.js.Async._drainQueues (http://localhost:5678/__cypress/runner/cypress_runner.js:13460:5)
at Async.drainQueues (http://localhost:5678/__cypress/runner/cypress_runner.js:13330:14)
From Your Spec Code:
at Context.eval (webpack:///./e2e/endpoint_management/roles/essentials_with_endpoint.roles.cy.ts:47:9)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-serverless/builds/1874#0189f547-bde0-49e0-9e5e-7d5c2fe8bb5f)
<!-- kibanaCiData = {"failed-test":{"test.class":"\"before all\" hook for \"should have READ access to Endpoint list page\"","test.name":"Roles for Security Essential PLI with Endpoint Essentials addon for role: t1_analyst \"before all\" hook for \"should have READ access to Endpoint list page\"","test.failCount":6}} -->
|
test
|
failing test before all hook for should have read access to endpoint list page roles for security essential pli with endpoint essentials addon for role analyst before all hook for should have read access to endpoint list page a test failed on a tracked branch cypresserror cy task indexendpointhosts timed out after waiting because this error occurred during a before all hook we are skipping the remaining tests in the current suite roles for security essentia at at trycatcher at at trycatcher at promise settlepromisefromhandler at promise settlepromise at promise at promise settlepromises at drainqueuestep at drainqueue at node modules bluebird js release async js async drainqueues at async drainqueues from your spec code at context eval webpack endpoint management roles essentials with endpoint roles cy ts first failure
| 1
|
260,670
| 22,638,712,957
|
IssuesEvent
|
2022-06-30 22:05:31
|
danbudris/vulnerabilityProcessor
|
https://api.github.com/repos/danbudris/vulnerabilityProcessor
|
opened
|
LOW vulnerability CVE-2022-27776 - libcurl, curl in 2 packages affecting 1 resources
|
hey there test severity/LOW
|
Issue auto cut by Vulnerability Processor
Processor Version: `v0.0.0-dev`
Message Source: `EventBridge`
Finding Source: `inspectorV2`
LOW vulnerability CVE-2022-27776 detected in 1 resources
- i-066bd473e31e27cc3
Associated Pull Requests:
- https://github.com/danbudris/vulnerabilityProcessor/pull/434
|
1.0
|
LOW vulnerability CVE-2022-27776 - libcurl, curl in 2 packages affecting 1 resources - Issue auto cut by Vulnerability Processor
Processor Version: `v0.0.0-dev`
Message Source: `EventBridge`
Finding Source: `inspectorV2`
LOW vulnerability CVE-2022-27776 detected in 1 resources
- i-066bd473e31e27cc3
Associated Pull Requests:
- https://github.com/danbudris/vulnerabilityProcessor/pull/434
|
test
|
low vulnerability cve libcurl curl in packages affecting resources issue auto cut by vulnerability processor processor version dev message source eventbridge finding source low vulnerability cve detected in resources i associated pull requests
| 1
|
52,835
| 6,282,761,349
|
IssuesEvent
|
2017-07-19 00:26:33
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
System.Data.Common.Tests InvokeCodeThatShouldFirEvents_EnsureEventsFired test disabled.
|
area-System.Data test-run-uwp-ilc
|
-method System.Data.Tests.DataCommonEventSourceTest.InvokeCodeThatShouldFirEvents_EnsureEventsFired
````
System.Data.Tests.DataCommonEventSourceTest.InvokeCodeThatShouldFirEvents_EnsureEventsFired [FAIL]
Assert.InRange() Failure
Range: (1 - 2147483647)
Actual: 0
Stack Trace:
c:\dd\CoreFxN\src\System.Data.Common\tests\System\Data\DataCommonEventSourceTest.cs(35,0): at System.Data.Tests.DataCommonEventSourceTest.InvokeCodeThatShouldFirEvents_EnsureEventsFired()
at _$ILCT$.$ILT$ReflectionDynamicInvoke$.InvokeRetV(Object thisPtr, IntPtr methodToCall, ArgSetupState argSetupState, Boolean targetIsThisCall)
at System.InvokeUtils.CalliIntrinsics.Call(IntPtr dynamicInvokeHelperMethod, Object thisPtrForDynamicInvokeHelperMethod, Object thisPtr, IntPtr methodToCall, ArgSetupState argSetupState)
c:\dd\ProjectN\src\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\InvokeUtils.cs(400,0): at System.InvokeUtils.CallDynamicInvokeMethod(Object thisPtr, IntPtr methodToCall, Object thisPtrDynamicInvokeMethod, IntPtr dynamicInvokeHelperMethod, IntPtr dynamicInvokeHelperGenericDictionary, Object targetMethodOrDelegate, Object[] parameters, BinderBundle binderBundle, Boolean invokeMethodHelperIsThisCall, Boolean methodToCallIsThisCall)
```
|
1.0
|
System.Data.Common.Tests InvokeCodeThatShouldFirEvents_EnsureEventsFired test disabled. - -method System.Data.Tests.DataCommonEventSourceTest.InvokeCodeThatShouldFirEvents_EnsureEventsFired
````
System.Data.Tests.DataCommonEventSourceTest.InvokeCodeThatShouldFirEvents_EnsureEventsFired [FAIL]
Assert.InRange() Failure
Range: (1 - 2147483647)
Actual: 0
Stack Trace:
c:\dd\CoreFxN\src\System.Data.Common\tests\System\Data\DataCommonEventSourceTest.cs(35,0): at System.Data.Tests.DataCommonEventSourceTest.InvokeCodeThatShouldFirEvents_EnsureEventsFired()
at _$ILCT$.$ILT$ReflectionDynamicInvoke$.InvokeRetV(Object thisPtr, IntPtr methodToCall, ArgSetupState argSetupState, Boolean targetIsThisCall)
at System.InvokeUtils.CalliIntrinsics.Call(IntPtr dynamicInvokeHelperMethod, Object thisPtrForDynamicInvokeHelperMethod, Object thisPtr, IntPtr methodToCall, ArgSetupState argSetupState)
c:\dd\ProjectN\src\ndp\fxcore\CoreRT\src\System.Private.CoreLib\src\System\InvokeUtils.cs(400,0): at System.InvokeUtils.CallDynamicInvokeMethod(Object thisPtr, IntPtr methodToCall, Object thisPtrDynamicInvokeMethod, IntPtr dynamicInvokeHelperMethod, IntPtr dynamicInvokeHelperGenericDictionary, Object targetMethodOrDelegate, Object[] parameters, BinderBundle binderBundle, Boolean invokeMethodHelperIsThisCall, Boolean methodToCallIsThisCall)
```
|
test
|
system data common tests invokecodethatshouldfirevents ensureeventsfired test disabled method system data tests datacommoneventsourcetest invokecodethatshouldfirevents ensureeventsfired system data tests datacommoneventsourcetest invokecodethatshouldfirevents ensureeventsfired assert inrange failure range actual stack trace c dd corefxn src system data common tests system data datacommoneventsourcetest cs at system data tests datacommoneventsourcetest invokecodethatshouldfirevents ensureeventsfired at ilct ilt reflectiondynamicinvoke invokeretv object thisptr intptr methodtocall argsetupstate argsetupstate boolean targetisthiscall at system invokeutils calliintrinsics call intptr dynamicinvokehelpermethod object thisptrfordynamicinvokehelpermethod object thisptr intptr methodtocall argsetupstate argsetupstate c dd projectn src ndp fxcore corert src system private corelib src system invokeutils cs at system invokeutils calldynamicinvokemethod object thisptr intptr methodtocall object thisptrdynamicinvokemethod intptr dynamicinvokehelpermethod intptr dynamicinvokehelpergenericdictionary object targetmethodordelegate object parameters binderbundle binderbundle boolean invokemethodhelperisthiscall boolean methodtocallisthiscall
| 1
|