Unnamed: 0
int64
0
832k
id
float64
2.49B
32.1B
type
stringclasses
1 value
created_at
stringlengths
19
19
repo
stringlengths
7
112
repo_url
stringlengths
36
141
action
stringclasses
3 values
title
stringlengths
1
744
labels
stringlengths
4
574
body
stringlengths
9
211k
index
stringclasses
10 values
text_combine
stringlengths
96
211k
label
stringclasses
2 values
text
stringlengths
96
188k
binary_label
int64
0
1
3,073
6,077,576,794
IssuesEvent
2017-06-16 04:44:51
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
reopened
Test: System.Diagnostics.Tests.ProcessStartInfoTests/Verbs_GetWithExeExtension_ReturnsExpected failed with "Xunit.Sdk.ContainsException"
area-System.Diagnostics.Process os-windows-uwp test-run-uwp-coreclr
Opened on behalf of @Jiayili1 The test `System.Diagnostics.Tests.ProcessStartInfoTests/Verbs_GetWithExeExtension_ReturnsExpected` has failed. Assert.Contains() Failure\r Not found: open\r In value: String[] [] Stack Trace: at System.Diagnostics.Tests.ProcessStartInfoTests.Verbs_GetWithExeExtension_ReturnsExpected() Build : Master - 20170510.01 (UWP F5 Tests) Failing configurations: - Windows.10.Amd64-x64 - Debug - Release Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fuwp~2F/build/20170510.01/workItem/System.Diagnostics.Process.Tests/analysis/xunit/System.Diagnostics.Tests.ProcessStartInfoTests~2FVerbs_GetWithExeExtension_ReturnsExpected
1.0
Test: System.Diagnostics.Tests.ProcessStartInfoTests/Verbs_GetWithExeExtension_ReturnsExpected failed with "Xunit.Sdk.ContainsException" - Opened on behalf of @Jiayili1 The test `System.Diagnostics.Tests.ProcessStartInfoTests/Verbs_GetWithExeExtension_ReturnsExpected` has failed. Assert.Contains() Failure\r Not found: open\r In value: String[] [] Stack Trace: at System.Diagnostics.Tests.ProcessStartInfoTests.Verbs_GetWithExeExtension_ReturnsExpected() Build : Master - 20170510.01 (UWP F5 Tests) Failing configurations: - Windows.10.Amd64-x64 - Debug - Release Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fuwp~2F/build/20170510.01/workItem/System.Diagnostics.Process.Tests/analysis/xunit/System.Diagnostics.Tests.ProcessStartInfoTests~2FVerbs_GetWithExeExtension_ReturnsExpected
process
test system diagnostics tests processstartinfotests verbs getwithexeextension returnsexpected failed with xunit sdk containsexception opened on behalf of the test system diagnostics tests processstartinfotests verbs getwithexeextension returnsexpected has failed assert contains failure r not found open r in value string stack trace at system diagnostics tests processstartinfotests verbs getwithexeextension returnsexpected build master uwp tests failing configurations windows debug release detail
1
7,450
10,558,824,853
IssuesEvent
2019-10-04 09:59:43
Altinn/altinn-studio
https://api.github.com/repos/Altinn/altinn-studio
closed
Update the handling of workflow in runtime to support to submit/archive directly from fill in form
app-backend process ready-for-specification team-tamagotchi user-story
**Functional architect/designer:** @-mention **Technical architect:** @-mention **Description** As a end user I would like to submit/archive directly from fill in form Relates to #1029 **Sketch (if relevant)** (Screenshot and link to Figma, make sure your sketch is public!) **Navigation from/to (if relevant)** From: Fill in form To: Archive/receipt **Technical considerations** Input (beyond tasks) on how the user story should be solved can be put here. **Acceptance criterea** - What is allowed/not allowed - Validations - Error messages and warnings - ... **Tasks** - [ ] Update the handling of workflow in runtime to support to submit/archive directly from fill in form.
1.0
Update the handling of workflow in runtime to support to submit/archive directly from fill in form - **Functional architect/designer:** @-mention **Technical architect:** @-mention **Description** As a end user I would like to submit/archive directly from fill in form Relates to #1029 **Sketch (if relevant)** (Screenshot and link to Figma, make sure your sketch is public!) **Navigation from/to (if relevant)** From: Fill in form To: Archive/receipt **Technical considerations** Input (beyond tasks) on how the user story should be solved can be put here. **Acceptance criterea** - What is allowed/not allowed - Validations - Error messages and warnings - ... **Tasks** - [ ] Update the handling of workflow in runtime to support to submit/archive directly from fill in form.
process
update the handling of workflow in runtime to support to submit archive directly from fill in form functional architect designer mention technical architect mention description as a end user i would like to submit archive directly from fill in form relates to sketch if relevant screenshot and link to figma make sure your sketch is public navigation from to if relevant from fill in form to archive receipt technical considerations input beyond tasks on how the user story should be solved can be put here acceptance criterea what is allowed not allowed validations error messages and warnings tasks update the handling of workflow in runtime to support to submit archive directly from fill in form
1
23,768
7,374,159,132
IssuesEvent
2018-03-13 19:25:56
ngageoint/hootenanny
https://api.github.com/repos/ngageoint/hootenanny
opened
Omit UI buildInfo.json from archive
Category: Build Category: Services Category: UI Priority: High
In #2241 I removed the `services-build` target when creating the packaging archive. This created a regression when running `make archive`, leading to failures like this: ``` # Copy the buildInfo.json file for hoot services if [ "services" == "services" ]; then mkdir -p hootenanny-0.2.39_20_g672adde/hoot-ui/data; cp hoot-ui/data/buildInfo.json hootenanny-0.2.39_20_g672adde/hoot-ui/data; fi make[1]: Leaving directory `/rpmbuild/hootenanny' ```
1.0
Omit UI buildInfo.json from archive - In #2241 I removed the `services-build` target when creating the packaging archive. This created a regression when running `make archive`, leading to failures like this: ``` # Copy the buildInfo.json file for hoot services if [ "services" == "services" ]; then mkdir -p hootenanny-0.2.39_20_g672adde/hoot-ui/data; cp hoot-ui/data/buildInfo.json hootenanny-0.2.39_20_g672adde/hoot-ui/data; fi make[1]: Leaving directory `/rpmbuild/hootenanny' ```
non_process
omit ui buildinfo json from archive in i removed the services build target when creating the packaging archive this created a regression when running make archive leading to failures like this copy the buildinfo json file for hoot services if then mkdir p hootenanny hoot ui data cp hoot ui data buildinfo json hootenanny hoot ui data fi make leaving directory rpmbuild hootenanny
0
11,857
14,664,841,606
IssuesEvent
2020-12-29 12:57:14
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] [Dev] Studies > All studies are not displayed and loader icon is not shown at end even though there are >10 studies
Bug P0 Participant manager Process: Dev Process: Fixed
Steps: 1. Navigate to Studies tab 2. Scroll till end 3. Observe A/R: Only 9 sets of studies are displayed E/R: 10 studies should be displayed per set and loader icon should be displayed
2.0
[PM] [Dev] Studies > All studies are not displayed and loader icon is not shown at end even though there are >10 studies - Steps: 1. Navigate to Studies tab 2. Scroll till end 3. Observe A/R: Only 9 sets of studies are displayed E/R: 10 studies should be displayed per set and loader icon should be displayed
process
studies all studies are not displayed and loader icon is not shown at end even though there are studies steps navigate to studies tab scroll till end observe a r only sets of studies are displayed e r studies should be displayed per set and loader icon should be displayed
1
15,608
19,730,027,146
IssuesEvent
2022-01-14 00:48:17
GoogleCloudPlatform/python-docs-samples
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
closed
Security Vulnerability for Pillow V8 references
type: process priority: p1 samples
References: https://github.com/advisories/GHSA-xrcv-f9gm-v42c https://github.com/advisories/GHSA-8vj2-vxx3-667w https://github.com/advisories/GHSA-pw3c-h7wp-cvhx There are 3 references to it in this repo.
1.0
Security Vulnerability for Pillow V8 references - References: https://github.com/advisories/GHSA-xrcv-f9gm-v42c https://github.com/advisories/GHSA-8vj2-vxx3-667w https://github.com/advisories/GHSA-pw3c-h7wp-cvhx There are 3 references to it in this repo.
process
security vulnerability for pillow references references there are references to it in this repo
1
19,156
25,240,360,611
IssuesEvent
2022-11-15 06:46:33
googleapis/sphinx-docfx-yaml
https://api.github.com/repos/googleapis/sphinx-docfx-yaml
opened
Update docfx minimum Python version to 3.9 on client libraries
type: process priority: p1
Followup to #266: will need to update docfx jobs running with Python 3.9 throughout the client libraries.
1.0
Update docfx minimum Python version to 3.9 on client libraries - Followup to #266: will need to update docfx jobs running with Python 3.9 throughout the client libraries.
process
update docfx minimum python version to on client libraries followup to will need to update docfx jobs running with python throughout the client libraries
1
189,275
14,497,110,981
IssuesEvent
2020-12-11 13:47:37
kalexmills/github-vet-tests-dec2020
https://api.github.com/repos/kalexmills/github-vet-tests-dec2020
closed
anchore/syft: syft/cataloger/java/archive_parser_test.go; 5 LoC
fresh test tiny
Found a possible issue in [anchore/syft](https://www.github.com/anchore/syft) at [syft/cataloger/java/archive_parser_test.go](https://github.com/anchore/syft/blob/52bac6e2fd1adb3d8852f0fab6536a81ec037b89/syft/cataloger/java/archive_parser_test.go#L245-L249) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > reference to a is reassigned at line 247 [Click here to see the code in its original context.](https://github.com/anchore/syft/blob/52bac6e2fd1adb3d8852f0fab6536a81ec037b89/syft/cataloger/java/archive_parser_test.go#L245-L249) <details> <summary>Click here to show the 5 line(s) of Go which triggered the analyzer.</summary> ```go for _, a := range actual { if strings.Contains(a.Name, "example-") { parent = &a } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 52bac6e2fd1adb3d8852f0fab6536a81ec037b89
1.0
anchore/syft: syft/cataloger/java/archive_parser_test.go; 5 LoC - Found a possible issue in [anchore/syft](https://www.github.com/anchore/syft) at [syft/cataloger/java/archive_parser_test.go](https://github.com/anchore/syft/blob/52bac6e2fd1adb3d8852f0fab6536a81ec037b89/syft/cataloger/java/archive_parser_test.go#L245-L249) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > reference to a is reassigned at line 247 [Click here to see the code in its original context.](https://github.com/anchore/syft/blob/52bac6e2fd1adb3d8852f0fab6536a81ec037b89/syft/cataloger/java/archive_parser_test.go#L245-L249) <details> <summary>Click here to show the 5 line(s) of Go which triggered the analyzer.</summary> ```go for _, a := range actual { if strings.Contains(a.Name, "example-") { parent = &a } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 52bac6e2fd1adb3d8852f0fab6536a81ec037b89
non_process
anchore syft syft cataloger java archive parser test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message reference to a is reassigned at line click here to show the line s of go which triggered the analyzer go for a range actual if strings contains a name example parent a leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
0
18,139
24,183,927,087
IssuesEvent
2022-09-23 11:34:48
cloudfoundry/korifi
https://api.github.com/repos/cloudfoundry/korifi
closed
[Feature]: Developer can push apps using the top-level `instances` field in the manifest
Top-level process config
### Background **As a** developer **I want** top-level process configuration in manifests to be supported **So that** I can use shortcut `cf push` flags like `-c`, `-i`, `-m` etc. ### Acceptance Criteria * **GIVEN** I have the sources of an application (e.g. `tests/smoke/assets/test-node-app`) **AND** `manifest.yml` looks like this: ```yaml --- applications: - name: my-app instances: 3 ``` **WHEN I** `cf push` **THEN I** see the push succeeds with an output similar to this: ``` name: test requested state: started routes: test.vcap.me last uploaded: Mon 29 Aug 16:28:36 UTC 2022 stack: cflinuxfs3 buildpacks: name version detect output buildpack name nodejs_buildpack 1.7.61 nodejs nodejs type: web sidecars: instances: 3/3 memory usage: 256M start command: npm start state since cpu memory disk details #0 running 2022-08-29T16:28:54Z 1.6% 42.3M of 256M 115.7M of 1G #1 running 2022-08-29T16:28:54Z 1.6% 40.5M of 256M 115.7M of 1G #2 running 2022-08-29T16:28:54Z 1.5% 40.6M of 256M 115.7M of 1G ``` * **GIVEN** I have the same app with the following manifest: ```yaml --- applications: - name: my-app instances: 2 processes: type: web instances: 3 ``` **WHEN I** `cf push` **THEN I** see the push succeeds with the same output as above
1.0
[Feature]: Developer can push apps using the top-level `instances` field in the manifest - ### Background **As a** developer **I want** top-level process configuration in manifests to be supported **So that** I can use shortcut `cf push` flags like `-c`, `-i`, `-m` etc. ### Acceptance Criteria * **GIVEN** I have the sources of an application (e.g. `tests/smoke/assets/test-node-app`) **AND** `manifest.yml` looks like this: ```yaml --- applications: - name: my-app instances: 3 ``` **WHEN I** `cf push` **THEN I** see the push succeeds with an output similar to this: ``` name: test requested state: started routes: test.vcap.me last uploaded: Mon 29 Aug 16:28:36 UTC 2022 stack: cflinuxfs3 buildpacks: name version detect output buildpack name nodejs_buildpack 1.7.61 nodejs nodejs type: web sidecars: instances: 3/3 memory usage: 256M start command: npm start state since cpu memory disk details #0 running 2022-08-29T16:28:54Z 1.6% 42.3M of 256M 115.7M of 1G #1 running 2022-08-29T16:28:54Z 1.6% 40.5M of 256M 115.7M of 1G #2 running 2022-08-29T16:28:54Z 1.5% 40.6M of 256M 115.7M of 1G ``` * **GIVEN** I have the same app with the following manifest: ```yaml --- applications: - name: my-app instances: 2 processes: type: web instances: 3 ``` **WHEN I** `cf push` **THEN I** see the push succeeds with the same output as above
process
developer can push apps using the top level instances field in the manifest background as a developer i want top level process configuration in manifests to be supported so that i can use shortcut cf push flags like c i m etc acceptance criteria given i have the sources of an application e g tests smoke assets test node app and manifest yml looks like this yaml applications name my app instances when i cf push then i see the push succeeds with an output similar to this name test requested state started routes test vcap me last uploaded mon aug utc stack buildpacks name version detect output buildpack name nodejs buildpack nodejs nodejs type web sidecars instances memory usage start command npm start state since cpu memory disk details running of of running of of running of of given i have the same app with the following manifest yaml applications name my app instances processes type web instances when i cf push then i see the push succeeds with the same output as above
1
42,644
17,225,875,525
IssuesEvent
2021-07-20 01:31:53
dorksquad/artwork
https://api.github.com/repos/dorksquad/artwork
opened
artwork service - streamline apis
artwork service good first issue
1. condense all apis to just `/artworks` with 2 optional query parameters (instead of path variables). - query parameters will be `name` (name of artwork) and `album` (name of the music album). 2. add all CRUD operations to the `/artworks` api path. 3. clean up service layer get methods to be just one getArtworks() method with parameters for the above path variables. - it should handle each case of parameters being null, not null, etc 4. update tests
1.0
artwork service - streamline apis - 1. condense all apis to just `/artworks` with 2 optional query parameters (instead of path variables). - query parameters will be `name` (name of artwork) and `album` (name of the music album). 2. add all CRUD operations to the `/artworks` api path. 3. clean up service layer get methods to be just one getArtworks() method with parameters for the above path variables. - it should handle each case of parameters being null, not null, etc 4. update tests
non_process
artwork service streamline apis condense all apis to just artworks with optional query parameters instead of path variables query parameters will be name name of artwork and album name of the music album add all crud operations to the artworks api path clean up service layer get methods to be just one getartworks method with parameters for the above path variables it should handle each case of parameters being null not null etc update tests
0
794
2,545,209,135
IssuesEvent
2015-01-29 15:51:09
slick/slick
https://api.github.com/repos/slick/slick
closed
relative links for API doc references
1 - Ready improvement topic:documentation
*[Migrated from Assembla ticket [306](https://www.assembla.com/spaces/typesafe-slick/tickets/306) - reported by @cvogt on 2013-08-21 10:43:35]* Could we use relative links for API docs and generate them locally into the right folder? They work locally when you test the docs. We put the scaladoc under ./api anyway, so if we could link to it there, we could copy or move the generated scaladocs as part of the build process. We should also link to the API docs from the ToC so we don't need two separate entry points for manual and API docs for each Slick version
1.0
relative links for API doc references - *[Migrated from Assembla ticket [306](https://www.assembla.com/spaces/typesafe-slick/tickets/306) - reported by @cvogt on 2013-08-21 10:43:35]* Could we use relative links for API docs and generate them locally into the right folder? They work locally when you test the docs. We put the scaladoc under ./api anyway, so if we could link to it there, we could copy or move the generated scaladocs as part of the build process. We should also link to the API docs from the ToC so we don't need two separate entry points for manual and API docs for each Slick version
non_process
relative links for api doc references reported by cvogt on could we use relative links for api docs and generate them locally into the right folder they work locally when you test the docs we put the scaladoc under api anyway so if we could link to it there we could copy or move the generated scaladocs as part of the build process we should also link to the api docs from the toc so we don t need two separate entry points for manual and api docs for each slick version
0
95,971
12,067,562,051
IssuesEvent
2020-04-16 13:34:41
flutter/flutter
https://api.github.com/repos/flutter/flutter
closed
[Help]I want to create buttonAppNavigation layout like attached image.But, I don't know this way.
f: material design framework
I want to create button app navigation layout like attached image. But, I don't know way to create that this layout. I tried to implement using standard button app navigation and floating action button, centerDocked of floating action button location, but it has totally different layout. So if you know how to implement this layout, can you help me? <img src="https://user-images.githubusercontent.com/7133772/57903589-adab4680-78a9-11e9-8528-c548966355c9.PNG" width="50%"> <img src="https://user-images.githubusercontent.com/7133772/58142963-0bfa6f80-7c84-11e9-8868-37e65ccfa8d8.jpg" width="50%">
1.0
[Help]I want to create buttonAppNavigation layout like attached image.But, I don't know this way. - I want to create button app navigation layout like attached image. But, I don't know way to create that this layout. I tried to implement using standard button app navigation and floating action button, centerDocked of floating action button location, but it has totally different layout. So if you know how to implement this layout, can you help me? <img src="https://user-images.githubusercontent.com/7133772/57903589-adab4680-78a9-11e9-8528-c548966355c9.PNG" width="50%"> <img src="https://user-images.githubusercontent.com/7133772/58142963-0bfa6f80-7c84-11e9-8868-37e65ccfa8d8.jpg" width="50%">
non_process
i want to create buttonappnavigation layout like attached image but i don t know this way i want to create button app navigation layout like attached image but i don t know way to create that this layout i tried to implement using standard button app navigation and floating action button centerdocked of floating action button location but it has totally different layout so if you know how to implement this layout can you help me
0
8,437
11,598,899,526
IssuesEvent
2020-02-25 00:28:23
quinngroup/CiliaRepresentation
https://api.github.com/repos/quinngroup/CiliaRepresentation
opened
Obtaining segmentation masks for localized representation learning
datasets processing
Incorporating segmentation masks into representation learning will allow learning localized ciliary patches. Right now, we have segmentation masks on ~20% of entire dataset and appearance module is not targeting cilia regions. For time efficacy, unknown segmentation masks can be computed through 1) Rudimentary thresholding using pixel values, optical flow, and/or derivative quantities or 2) learned via supervised NN/ML algorithm trained on existing segmentation masks, optical flow quantities, and/or derivative quantities. Ideally, we would be able to obtain segmentation masks without supervision through a "refinement" stage that occurs in the larger appearance pipeline. However, I'm simply a novice and do not know how to do that yet; having a full set of segmentation masks as an initial sanity check for the appearance pipeline has worthwhile short term benefits. Next steps: - Create set of segmentation masks via thresholding, optimizing for minimal false positives - If ^ are inadequate, train NN to learn segmentation masks Eventually: - modify appearance module to iteratively refine rough thresholded/unsupervised segmentation masks as a byproduct of spatial reconstruction
1.0
Obtaining segmentation masks for localized representation learning - Incorporating segmentation masks into representation learning will allow learning localized ciliary patches. Right now, we have segmentation masks on ~20% of entire dataset and appearance module is not targeting cilia regions. For time efficacy, unknown segmentation masks can be computed through 1) Rudimentary thresholding using pixel values, optical flow, and/or derivative quantities or 2) learned via supervised NN/ML algorithm trained on existing segmentation masks, optical flow quantities, and/or derivative quantities. Ideally, we would be able to obtain segmentation masks without supervision through a "refinement" stage that occurs in the larger appearance pipeline. However, I'm simply a novice and do not know how to do that yet; having a full set of segmentation masks as an initial sanity check for the appearance pipeline has worthwhile short term benefits. Next steps: - Create set of segmentation masks via thresholding, optimizing for minimal false positives - If ^ are inadequate, train NN to learn segmentation masks Eventually: - modify appearance module to iteratively refine rough thresholded/unsupervised segmentation masks as a byproduct of spatial reconstruction
process
obtaining segmentation masks for localized representation learning incorporating segmentation masks into representation learning will allow learning localized ciliary patches right now we have segmentation masks on of entire dataset and appearance module is not targeting cilia regions for time efficacy unknown segmentation masks can be computed through rudimentary thresholding using pixel values optical flow and or derivative quantities or learned via supervised nn ml algorithm trained on existing segmentation masks optical flow quantities and or derivative quantities ideally we would be able to obtain segmentation masks without supervision through a refinement stage that occurs in the larger appearance pipeline however i m simply a novice and do not know how to do that yet having a full set of segmentation masks as an initial sanity check for the appearance pipeline has worthwhile short term benefits next steps create set of segmentation masks via thresholding optimizing for minimal false positives if are inadequate train nn to learn segmentation masks eventually modify appearance module to iteratively refine rough thresholded unsupervised segmentation masks as a byproduct of spatial reconstruction
1
15,291
19,296,149,005
IssuesEvent
2021-12-12 16:16:59
glennl-msft/WAF_PnP_Demo3
https://api.github.com/repos/glennl-msft/WAF_PnP_Demo3
opened
Put a solution in place that ensures all VMs are patched in a timely manner and that ensures strong local administrative password management
Security Operational Procedures Patch & Update Process (PNU)
<a href="https://docs.microsoft.com/azure/automation/update-management/overview">Put a solution in place that ensures all VMs are patched in a timely manner and that ensures strong local administrative password management</a>
1.0
Put a solution in place that ensures all VMs are patched in a timely manner and that ensures strong local administrative password management - <a href="https://docs.microsoft.com/azure/automation/update-management/overview">Put a solution in place that ensures all VMs are patched in a timely manner and that ensures strong local administrative password management</a>
process
put a solution in place that ensures all vms are patched in a timely manner and that ensures strong local administrative password management
1
17,068
22,534,002,474
IssuesEvent
2022-06-25 00:57:22
neudesic/documentation-solution-centers
https://api.github.com/repos/neudesic/documentation-solution-centers
closed
Add Links to Homepage
feature Process
Add links on the homepage Readme for - Onboarding Checklist - Internship Program - Best Practices (Development) - Processes and Guidelines - Solution Center Narrative
1.0
Add Links to Homepage - Add links on the homepage Readme for - Onboarding Checklist - Internship Program - Best Practices (Development) - Processes and Guidelines - Solution Center Narrative
process
add links to homepage add links on the homepage readme for onboarding checklist internship program best practices development processes and guidelines solution center narrative
1
57,852
8,211,307,221
IssuesEvent
2018-09-04 13:29:32
daostack/access_control
https://api.github.com/repos/daostack/access_control
closed
Making an EIP
documentation
- Figure out what the structure of an EIP is and write one in an `.md` file under the `docs` dir.
1.0
Making an EIP - - Figure out what the structure of an EIP is and write one in an `.md` file under the `docs` dir.
non_process
making an eip figure out what the structure of an eip is and write one in an md file under the docs dir
0
237,210
26,084,070,710
IssuesEvent
2022-12-25 21:18:14
billmcchesney1/goalert
https://api.github.com/repos/billmcchesney1/goalert
opened
CVE-2022-46175 (High) detected in json5-1.0.1.tgz, json5-2.1.3.tgz
security vulnerability
## CVE-2022-46175 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>json5-1.0.1.tgz</b>, <b>json5-2.1.3.tgz</b></p></summary> <p> <details><summary><b>json5-1.0.1.tgz</b></p></summary> <p>JSON for humans.</p> <p>Library home page: <a href="https://registry.npmjs.org/json5/-/json5-1.0.1.tgz">https://registry.npmjs.org/json5/-/json5-1.0.1.tgz</a></p> <p>Path to dependency file: /web/src/package.json</p> <p>Path to vulnerable library: /web/src/node_modules/loader-utils/node_modules/json5/package.json,/web/src/node_modules/tsconfig-paths/node_modules/json5/package.json</p> <p> Dependency Hierarchy: - babel-loader-8.2.2.tgz (Root Library) - loader-utils-1.4.0.tgz - :x: **json5-1.0.1.tgz** (Vulnerable Library) </details> <details><summary><b>json5-2.1.3.tgz</b></p></summary> <p>JSON for humans.</p> <p>Library home page: <a href="https://registry.npmjs.org/json5/-/json5-2.1.3.tgz">https://registry.npmjs.org/json5/-/json5-2.1.3.tgz</a></p> <p>Path to dependency file: /web/src/package.json</p> <p>Path to vulnerable library: /web/src/node_modules/json5/package.json</p> <p> Dependency Hierarchy: - core-7.12.10.tgz (Root Library) - :x: **json5-2.1.3.tgz** (Vulnerable Library) </details> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> JSON5 is an extension to the popular JSON file format that aims to be easier to write and maintain by hand (e.g. for config files). The `parse` method of the JSON5 library before and including version `2.2.1` does not restrict parsing of keys named `__proto__`, allowing specially crafted strings to pollute the prototype of the resulting object. This vulnerability pollutes the prototype of the object returned by `JSON5.parse` and not the global Object prototype, which is the commonly understood definition of Prototype Pollution. However, polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations. This vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from `JSON5.parse`. The actual impact will depend on how applications utilize the returned object and how they filter unwanted keys, but could include denial of service, cross-site scripting, elevation of privilege, and in extreme cases, remote code execution. `JSON5.parse` should restrict parsing of `__proto__` keys when parsing JSON strings to objects. As a point of reference, the `JSON.parse` method included in JavaScript ignores `__proto__` keys. Simply changing `JSON5.parse` to `JSON.parse` in the examples above mitigates this vulnerability. This vulnerability is patched in json5 version 2.2.2 and later. <p>Publish Date: 2022-12-24 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-46175>CVE-2022-46175</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: Low - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-46175">https://www.cve.org/CVERecord?id=CVE-2022-46175</a></p> <p>Release Date: 2022-12-24</p> <p>Fix Resolution: json5 - 2.2.2</p> </p> </details> <p></p>
True
CVE-2022-46175 (High) detected in json5-1.0.1.tgz, json5-2.1.3.tgz - ## CVE-2022-46175 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>json5-1.0.1.tgz</b>, <b>json5-2.1.3.tgz</b></p></summary> <p> <details><summary><b>json5-1.0.1.tgz</b></p></summary> <p>JSON for humans.</p> <p>Library home page: <a href="https://registry.npmjs.org/json5/-/json5-1.0.1.tgz">https://registry.npmjs.org/json5/-/json5-1.0.1.tgz</a></p> <p>Path to dependency file: /web/src/package.json</p> <p>Path to vulnerable library: /web/src/node_modules/loader-utils/node_modules/json5/package.json,/web/src/node_modules/tsconfig-paths/node_modules/json5/package.json</p> <p> Dependency Hierarchy: - babel-loader-8.2.2.tgz (Root Library) - loader-utils-1.4.0.tgz - :x: **json5-1.0.1.tgz** (Vulnerable Library) </details> <details><summary><b>json5-2.1.3.tgz</b></p></summary> <p>JSON for humans.</p> <p>Library home page: <a href="https://registry.npmjs.org/json5/-/json5-2.1.3.tgz">https://registry.npmjs.org/json5/-/json5-2.1.3.tgz</a></p> <p>Path to dependency file: /web/src/package.json</p> <p>Path to vulnerable library: /web/src/node_modules/json5/package.json</p> <p> Dependency Hierarchy: - core-7.12.10.tgz (Root Library) - :x: **json5-2.1.3.tgz** (Vulnerable Library) </details> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> JSON5 is an extension to the popular JSON file format that aims to be easier to write and maintain by hand (e.g. for config files). The `parse` method of the JSON5 library before and including version `2.2.1` does not restrict parsing of keys named `__proto__`, allowing specially crafted strings to pollute the prototype of the resulting object. This vulnerability pollutes the prototype of the object returned by `JSON5.parse` and not the global Object prototype, which is the commonly understood definition of Prototype Pollution. However, polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations. This vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from `JSON5.parse`. The actual impact will depend on how applications utilize the returned object and how they filter unwanted keys, but could include denial of service, cross-site scripting, elevation of privilege, and in extreme cases, remote code execution. `JSON5.parse` should restrict parsing of `__proto__` keys when parsing JSON strings to objects. As a point of reference, the `JSON.parse` method included in JavaScript ignores `__proto__` keys. Simply changing `JSON5.parse` to `JSON.parse` in the examples above mitigates this vulnerability. This vulnerability is patched in json5 version 2.2.2 and later. <p>Publish Date: 2022-12-24 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-46175>CVE-2022-46175</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: Low - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-46175">https://www.cve.org/CVERecord?id=CVE-2022-46175</a></p> <p>Release Date: 2022-12-24</p> <p>Fix Resolution: json5 - 2.2.2</p> </p> </details> <p></p>
non_process
cve high detected in tgz tgz cve high severity vulnerability vulnerable libraries tgz tgz tgz json for humans library home page a href path to dependency file web src package json path to vulnerable library web src node modules loader utils node modules package json web src node modules tsconfig paths node modules package json dependency hierarchy babel loader tgz root library loader utils tgz x tgz vulnerable library tgz json for humans library home page a href path to dependency file web src package json path to vulnerable library web src node modules package json dependency hierarchy core tgz root library x tgz vulnerable library found in base branch master vulnerability details is an extension to the popular json file format that aims to be easier to write and maintain by hand e g for config files the parse method of the library before and including version does not restrict parsing of keys named proto allowing specially crafted strings to pollute the prototype of the resulting object this vulnerability pollutes the prototype of the object returned by parse and not the global object prototype which is the commonly understood definition of prototype pollution however polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations this vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from parse the actual impact will depend on how applications utilize the returned object and how they filter unwanted keys but could include denial of service cross site scripting elevation of privilege and in extreme cases remote code execution parse should restrict parsing of proto keys when parsing json strings to objects as a point of reference the json parse method included in javascript ignores proto keys simply changing parse to json parse in the examples above mitigates this vulnerability this vulnerability is patched in version and later publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
0
5,561
8,403,499,643
IssuesEvent
2018-10-11 09:56:31
kiwicom/orbit-components
https://api.github.com/repos/kiwicom/orbit-components
closed
<Stack />: align option doesn't apply align-items property
bug processing
## Expected Behavior `align="end"` to attach `align-items: flex-end` ## Current Behavior `align="end"` to attaches `align-content: flex-end` but not `align-items: flex-end` ![image](https://user-images.githubusercontent.com/16268406/46671214-7e767d80-cbd4-11e8-84ff-013941c66174.png) ## Steps to Reproduce `<Stack direction="row" align="end">` ## Context (Environment) orbit 0.13.0
1.0
<Stack />: align option doesn't apply align-items property - ## Expected Behavior `align="end"` to attach `align-items: flex-end` ## Current Behavior `align="end"` to attaches `align-content: flex-end` but not `align-items: flex-end` ![image](https://user-images.githubusercontent.com/16268406/46671214-7e767d80-cbd4-11e8-84ff-013941c66174.png) ## Steps to Reproduce `<Stack direction="row" align="end">` ## Context (Environment) orbit 0.13.0
process
align option doesn t apply align items property expected behavior align end to attach align items flex end current behavior align end to attaches align content flex end but not align items flex end steps to reproduce context environment orbit
1
707,663
24,313,368,416
IssuesEvent
2022-09-30 02:17:20
gama-platform/gama
https://api.github.com/repos/gama-platform/gama
closed
Popup documentation in editor do not show up for action arguments
😱 Bug About GAML Priority High V. 1.8.2
**Describe the bug** In the GAML editor, no documentation show up when moving the mouse over action arguments. **To Reproduce** Steps to reproduce the behavior: 1. Open (for instance) `GAML Syntax > System > RunThread.gaml` 2. Move the mouse over `interval: ` ![RunThread gaml - Gama (runtime) 2022-09-26 07-54-46](https://user-images.githubusercontent.com/579256/192174430-6aadec2e-76b0-42fb-9f4e-c1e958387fc8.png) 3. A blank popup appears **Expected behavior** The documentation of the argument should appear, as it does for facets. **Desktop (please complete the following information):** - GAMA version: 1.8.2 - Java version: 1.17
1.0
Popup documentation in editor do not show up for action arguments - **Describe the bug** In the GAML editor, no documentation show up when moving the mouse over action arguments. **To Reproduce** Steps to reproduce the behavior: 1. Open (for instance) `GAML Syntax > System > RunThread.gaml` 2. Move the mouse over `interval: ` ![RunThread gaml - Gama (runtime) 2022-09-26 07-54-46](https://user-images.githubusercontent.com/579256/192174430-6aadec2e-76b0-42fb-9f4e-c1e958387fc8.png) 3. A blank popup appears **Expected behavior** The documentation of the argument should appear, as it does for facets. **Desktop (please complete the following information):** - GAMA version: 1.8.2 - Java version: 1.17
non_process
popup documentation in editor do not show up for action arguments describe the bug in the gaml editor no documentation show up when moving the mouse over action arguments to reproduce steps to reproduce the behavior open for instance gaml syntax system runthread gaml move the mouse over interval a blank popup appears expected behavior the documentation of the argument should appear as it does for facets desktop please complete the following information gama version java version
0
469,145
13,501,998,275
IssuesEvent
2020-09-13 05:52:52
ceochrism/StackOverFlowHackerRankHybrid
https://api.github.com/repos/ceochrism/StackOverFlowHackerRankHybrid
opened
Character Selection/Customization Screen
Priority:Medium Status:On Hold
This would be apart of some sort of settings screen, or the first time the user loads the game. We will have a variety of costumes a user can choose from in order to customize their character. This can be done in mono game or win forms, I believe it may be user in Windows Forms if we utilize something like the ImageList control.
1.0
Character Selection/Customization Screen - This would be apart of some sort of settings screen, or the first time the user loads the game. We will have a variety of costumes a user can choose from in order to customize their character. This can be done in mono game or win forms, I believe it may be user in Windows Forms if we utilize something like the ImageList control.
non_process
character selection customization screen this would be apart of some sort of settings screen or the first time the user loads the game we will have a variety of costumes a user can choose from in order to customize their character this can be done in mono game or win forms i believe it may be user in windows forms if we utilize something like the imagelist control
0
98,419
16,373,817,026
IssuesEvent
2021-05-15 17:40:54
hugh-whitesource/NodeGoat-1
https://api.github.com/repos/hugh-whitesource/NodeGoat-1
opened
WS-2018-0069 (High) detected in is-my-json-valid-2.15.0.tgz
security vulnerability
## WS-2018-0069 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>is-my-json-valid-2.15.0.tgz</b></p></summary> <p>A JSONSchema validator that uses code generation to be extremely fast</p> <p>Library home page: <a href="https://registry.npmjs.org/is-my-json-valid/-/is-my-json-valid-2.15.0.tgz">https://registry.npmjs.org/is-my-json-valid/-/is-my-json-valid-2.15.0.tgz</a></p> <p>Path to dependency file: NodeGoat-1/package.json</p> <p>Path to vulnerable library: NodeGoat-1/node_modules/npm/node_modules/request/node_modules/har-validator/node_modules/is-my-json-valid/package.json</p> <p> Dependency Hierarchy: - grunt-npm-install-0.3.1.tgz (Root Library) - npm-3.10.10.tgz - request-2.75.0.tgz - har-validator-2.0.6.tgz - :x: **is-my-json-valid-2.15.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/hugh-whitesource/NodeGoat-1/commit/1acb8446b41e455d2f087e892c9a9ce80609f601">1acb8446b41e455d2f087e892c9a9ce80609f601</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Version of is-my-json-valid before 1.4.1 or 2.17.2 are vulnerable to regular expression denial of service (ReDoS) via the email validation function. <p>Publish Date: 2018-02-14 <p>URL: <a href=https://github.com/mafintosh/is-my-json-valid/commit/b3051b277f7caa08cd2edc6f74f50aeda65d2976>WS-2018-0069</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nodesecurity.io/advisories/572">https://nodesecurity.io/advisories/572</a></p> <p>Release Date: 2018-01-24</p> <p>Fix Resolution: 1.4.1</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"is-my-json-valid","packageVersion":"2.15.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-npm-install:0.3.1;npm:3.10.10;request:2.75.0;har-validator:2.0.6;is-my-json-valid:2.15.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.4.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2018-0069","vulnerabilityDetails":"Version of is-my-json-valid before 1.4.1 or 2.17.2 are vulnerable to regular expression denial of service (ReDoS) via the email validation function.","vulnerabilityUrl":"https://github.com/mafintosh/is-my-json-valid/commit/b3051b277f7caa08cd2edc6f74f50aeda65d2976","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
WS-2018-0069 (High) detected in is-my-json-valid-2.15.0.tgz - ## WS-2018-0069 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>is-my-json-valid-2.15.0.tgz</b></p></summary> <p>A JSONSchema validator that uses code generation to be extremely fast</p> <p>Library home page: <a href="https://registry.npmjs.org/is-my-json-valid/-/is-my-json-valid-2.15.0.tgz">https://registry.npmjs.org/is-my-json-valid/-/is-my-json-valid-2.15.0.tgz</a></p> <p>Path to dependency file: NodeGoat-1/package.json</p> <p>Path to vulnerable library: NodeGoat-1/node_modules/npm/node_modules/request/node_modules/har-validator/node_modules/is-my-json-valid/package.json</p> <p> Dependency Hierarchy: - grunt-npm-install-0.3.1.tgz (Root Library) - npm-3.10.10.tgz - request-2.75.0.tgz - har-validator-2.0.6.tgz - :x: **is-my-json-valid-2.15.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/hugh-whitesource/NodeGoat-1/commit/1acb8446b41e455d2f087e892c9a9ce80609f601">1acb8446b41e455d2f087e892c9a9ce80609f601</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Version of is-my-json-valid before 1.4.1 or 2.17.2 are vulnerable to regular expression denial of service (ReDoS) via the email validation function. <p>Publish Date: 2018-02-14 <p>URL: <a href=https://github.com/mafintosh/is-my-json-valid/commit/b3051b277f7caa08cd2edc6f74f50aeda65d2976>WS-2018-0069</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nodesecurity.io/advisories/572">https://nodesecurity.io/advisories/572</a></p> <p>Release Date: 2018-01-24</p> <p>Fix Resolution: 1.4.1</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"is-my-json-valid","packageVersion":"2.15.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-npm-install:0.3.1;npm:3.10.10;request:2.75.0;har-validator:2.0.6;is-my-json-valid:2.15.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.4.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2018-0069","vulnerabilityDetails":"Version of is-my-json-valid before 1.4.1 or 2.17.2 are vulnerable to regular expression denial of service (ReDoS) via the email validation function.","vulnerabilityUrl":"https://github.com/mafintosh/is-my-json-valid/commit/b3051b277f7caa08cd2edc6f74f50aeda65d2976","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_process
ws high detected in is my json valid tgz ws high severity vulnerability vulnerable library is my json valid tgz a jsonschema validator that uses code generation to be extremely fast library home page a href path to dependency file nodegoat package json path to vulnerable library nodegoat node modules npm node modules request node modules har validator node modules is my json valid package json dependency hierarchy grunt npm install tgz root library npm tgz request tgz har validator tgz x is my json valid tgz vulnerable library found in head commit a href found in base branch master vulnerability details version of is my json valid before or are vulnerable to regular expression denial of service redos via the email validation function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree grunt npm install npm request har validator is my json valid isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier ws vulnerabilitydetails version of is my json valid before or are vulnerable to regular expression denial of service redos via the email validation function vulnerabilityurl
0
14,758
18,040,776,942
IssuesEvent
2021-09-18 02:31:17
ooi-data/RS01SBPS-SF01A-4A-NUTNRA101-streamed-nutnr_a_dark_sample
https://api.github.com/repos/ooi-data/RS01SBPS-SF01A-4A-NUTNRA101-streamed-nutnr_a_dark_sample
opened
🛑 Processing failed: OSError
process
## Overview `OSError` found in `processing_task` task during run ended on 2021-09-18T02:31:16.255440. ## Details Flow name: `RS01SBPS-SF01A-4A-NUTNRA101-streamed-nutnr_a_dark_sample` Task name: `processing_task` Error type: `OSError` Error message: [Errno 16] Please reduce your request rate. <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 248, in _call_s3 out = await method(**additional_kwargs) File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 155, in _make_api_call raise error_class(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (SlowDown) when calling the DeleteObjects operation (reached max retries: 4): Please reduce your request rate. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 101, in processing final_path = finalize_zarr( File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/__init__.py", line 359, in finalize_zarr source_store.fs.delete(source_store.root, recursive=True) File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/spec.py", line 1187, in delete return self.rm(path, recursive=recursive, maxdepth=maxdepth) File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 88, in wrapper return sync(self.loop, func, *args, **kwargs) File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 69, in sync raise result[0] File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 25, in _runner result[0] = await coro File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1677, in _rm await asyncio.gather( File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1657, in _bulk_delete await self._call_s3("delete_objects", kwargs, Bucket=bucket, Delete=delete_keys) File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 268, in _call_s3 raise err OSError: [Errno 16] Please reduce your request rate. ``` </details>
1.0
🛑 Processing failed: OSError - ## Overview `OSError` found in `processing_task` task during run ended on 2021-09-18T02:31:16.255440. ## Details Flow name: `RS01SBPS-SF01A-4A-NUTNRA101-streamed-nutnr_a_dark_sample` Task name: `processing_task` Error type: `OSError` Error message: [Errno 16] Please reduce your request rate. <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 248, in _call_s3 out = await method(**additional_kwargs) File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 155, in _make_api_call raise error_class(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (SlowDown) when calling the DeleteObjects operation (reached max retries: 4): Please reduce your request rate. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 101, in processing final_path = finalize_zarr( File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/__init__.py", line 359, in finalize_zarr source_store.fs.delete(source_store.root, recursive=True) File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/spec.py", line 1187, in delete return self.rm(path, recursive=recursive, maxdepth=maxdepth) File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 88, in wrapper return sync(self.loop, func, *args, **kwargs) File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 69, in sync raise result[0] File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 25, in _runner result[0] = await coro File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1677, in _rm await asyncio.gather( File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1657, in _bulk_delete await self._call_s3("delete_objects", kwargs, Bucket=bucket, Delete=delete_keys) File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 268, in _call_s3 raise err OSError: [Errno 16] Please reduce your request rate. ``` </details>
process
🛑 processing failed oserror overview oserror found in processing task task during run ended on details flow name streamed nutnr a dark sample task name processing task error type oserror error message please reduce your request rate traceback traceback most recent call last file srv conda envs notebook lib site packages core py line in call out await method additional kwargs file srv conda envs notebook lib site packages aiobotocore client py line in make api call raise error class parsed response operation name botocore exceptions clienterror an error occurred slowdown when calling the deleteobjects operation reached max retries please reduce your request rate the above exception was the direct cause of the following exception traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize zarr file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize zarr source store fs delete source store root recursive true file srv conda envs notebook lib site packages fsspec spec py line in delete return self rm path recursive recursive maxdepth maxdepth file srv conda envs notebook lib site packages fsspec asyn py line in wrapper return sync self loop func args kwargs file srv conda envs notebook lib site packages fsspec asyn py line in sync raise result file srv conda envs notebook lib site packages fsspec asyn py line in runner result await coro file srv conda envs notebook lib site packages core py line in rm await asyncio gather file srv conda envs notebook lib site packages core py line in bulk delete await self call delete objects kwargs bucket bucket delete delete keys file srv conda envs notebook lib site packages core py line in call raise err oserror please reduce your request rate
1
388,293
11,485,932,739
IssuesEvent
2020-02-11 08:54:58
DigitalCampus/moodle-block_oppia_mobile_export
https://api.github.com/repos/DigitalCampus/moodle-block_oppia_mobile_export
closed
In server connections save all settings
enhancement low priority
So, for example, can have different cropping, image icons sizes etc for the different servers used
1.0
In server connections save all settings - So, for example, can have different cropping, image icons sizes etc for the different servers used
non_process
in server connections save all settings so for example can have different cropping image icons sizes etc for the different servers used
0
1,604
4,217,943,607
IssuesEvent
2016-06-30 14:36:51
e-government-ua/iBP
https://api.github.com/repos/e-government-ua/iBP
closed
Київ: Присвоєння поштової адреси
In process of testing in work test
предлагаемая схема процесса - https://www.dropbox.com/s/40wip0wzg16o3gv/%D0%9F%D1%80%D0%B8%D1%81%D0%B2%D0%BE%D1%94%D0%BD%D0%BD%D1%8F%20%D0%BF%D0%BE%D1%88%D1%82%D0%BE%D0%B2%D0%BE%D1%97%20%D0%B0%D0%B4%D1%80%D0%B5%D1%81%D0%B8%20%28%D0%BD%D0%B5%D0%B6%D0%B8%D1%82%D0%BB%D0%BE%D0%B2%D1%96%20%D0%BF%D1%80%D0%B8%D0%BC%D1%96%D1%89%D0%B5%D0%BD%D0%BD%D1%8F%29%20v1.png?dl=0 исходные материалы - https://www.dropbox.com/sh/qs29vqe799l5zzq/AADGAvtKFq9Oity8wtLSmkC0a?dl=0
1.0
Київ: Присвоєння поштової адреси - предлагаемая схема процесса - https://www.dropbox.com/s/40wip0wzg16o3gv/%D0%9F%D1%80%D0%B8%D1%81%D0%B2%D0%BE%D1%94%D0%BD%D0%BD%D1%8F%20%D0%BF%D0%BE%D1%88%D1%82%D0%BE%D0%B2%D0%BE%D1%97%20%D0%B0%D0%B4%D1%80%D0%B5%D1%81%D0%B8%20%28%D0%BD%D0%B5%D0%B6%D0%B8%D1%82%D0%BB%D0%BE%D0%B2%D1%96%20%D0%BF%D1%80%D0%B8%D0%BC%D1%96%D1%89%D0%B5%D0%BD%D0%BD%D1%8F%29%20v1.png?dl=0 исходные материалы - https://www.dropbox.com/sh/qs29vqe799l5zzq/AADGAvtKFq9Oity8wtLSmkC0a?dl=0
process
київ присвоєння поштової адреси предлагаемая схема процесса исходные материалы
1
192,882
14,631,656,202
IssuesEvent
2020-12-23 20:24:50
github-vet/rangeloop-pointer-findings
https://api.github.com/repos/github-vet/rangeloop-pointer-findings
closed
Noxdew/Knights-Of-Discord: vendor/github.com/mongodb/mongo-go-driver/mongo/crud_spec_test.go; 51 LoC
fresh medium test vendored
Found a possible issue in [Noxdew/Knights-Of-Discord](https://www.github.com/Noxdew/Knights-Of-Discord) at [vendor/github.com/mongodb/mongo-go-driver/mongo/crud_spec_test.go](https://github.com/Noxdew/Knights-Of-Discord/blob/54e2089536ec92da137c78869f0023e47b2ae354/vendor/github.com/mongodb/mongo-go-driver/mongo/crud_spec_test.go#L126-L176) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call which takes a reference to test at line 148 may start a goroutine [Click here to see the code in its original context.](https://github.com/Noxdew/Knights-Of-Discord/blob/54e2089536ec92da137c78869f0023e47b2ae354/vendor/github.com/mongodb/mongo-go-driver/mongo/crud_spec_test.go#L126-L176) <details> <summary>Click here to show the 51 line(s) of Go which triggered the analyzer.</summary> ```go for _, test := range testfile.Tests { collName := sanitizeCollectionName("crud-spec-tests", test.Description) _, _ = db.RunCommand( context.Background(), bson.NewDocument(bson.EC.String("drop", collName)), ) if test.Outcome.Collection != nil && len(test.Outcome.Collection.Name) > 0 { _, _ = db.RunCommand( context.Background(), bson.NewDocument(bson.EC.String("drop", test.Outcome.Collection.Name)), ) } coll := db.Collection(collName) docsToInsert := docSliceToInterfaceSlice(docSliceFromRaw(t, testfile.Data)) _, err = coll.InsertMany(context.Background(), docsToInsert) require.NoError(t, err) switch test.Operation.Name { case "aggregate": aggregateTest(t, db, coll, &test) case "count": countTest(t, coll, &test) case "distinct": distinctTest(t, coll, &test) case "find": findTest(t, coll, &test) case "deleteMany": deleteManyTest(t, coll, &test) case "deleteOne": deleteOneTest(t, coll, &test) case "findOneAndDelete": findOneAndDeleteTest(t, coll, &test) case "findOneAndReplace": findOneAndReplaceTest(t, coll, &test) case "findOneAndUpdate": findOneAndUpdateTest(t, coll, &test) case "insertMany": insertManyTest(t, coll, &test) case "insertOne": insertOneTest(t, coll, &test) case "replaceOne": replaceOneTest(t, coll, &test) case "updateMany": updateManyTest(t, coll, &test) case "updateOne": updateOneTest(t, coll, &test) } } ``` </details> <details> <summary>Click here to show extra information the analyzer produced.</summary> ``` The following paths through the callgraph could lead to a goroutine: (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Release, 1) -> (Front, 0) -> (ref, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (NewServer, 2) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) -> (serve, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) -> (loop, 0) -> (Println, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (docSliceFromRaw, 2) -> (MutableDocument, 0) -> (Get, 1) -> (get, 1) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (Lookup, 1) -> (elem, 1) -> (lookupString, 1) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (New, 1) -> (Merge, 1) -> (next32, 0) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (ReadWireMessage, 1) -> (DecodeError, 1) -> (Timeout, 0) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (Get, 1) -> (get, 1) -> (AddUint64, 2) -> (Handshake, 3) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (NewServer, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClient, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (ChannelVoiceJoin, 4) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) -> (setParent, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (ChannelVoiceJoin, 4) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) -> (setParent, 1) -> (Consume, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Reset, 0) -> (resetBuf, 0) -> (New, 3) -> (Handshake, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) -> (mapper, 1) -> (Execute, 1) -> (serve, 0) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) -> (loop, 0) -> (Println, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) -> (serve, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) -> (setParent, 1) -> (Consume, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (readLine, 0) -> (Add, 1) -> (ref, 0) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (Get, 1) -> (get, 1) -> (AddUint64, 2) -> (Handshake, 3) -> (connect, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (ChannelVoiceJoin, 4) -> (NewServer, 2) -> (serve, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) -> (serve, 2) -> (cmdFunc, 2) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) -> (open, 0) -> (wsListen, 2) -> (onEvent, 1) -> (h, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) -> (loop, 0) -> (Println, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) -> (setParent, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (Reset, 0) -> (resetBuf, 0) -> (New, 3) -> (Handshake, 3) -> (connect, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) -> (loop, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) -> (loop, 0) -> (Println, 2) -> (handlePacket, 1) -> (responseMessageReceived, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Reader, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (connect, 3) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) -> (open, 0) -> (wsListen, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (NewServer, 2) -> (serve, 2) -> (cmdFunc, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) -> (Dial, 3) -> (DialConfig, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) -> (serve, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) -> (mapper, 1) -> (Execute, 1) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (Connect, 1) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (wsListen, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) -> (loop, 0) -> (Println, 2) -> (handlePacket, 1) -> (responseMessageReceived, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (wsListen, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Get, 1) -> (get, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (Release, 1) -> (Front, 0) -> (ref, 0) (aggregateTest, 4) -> (Collation, 1) -> (SubDocument, 2) -> (panic, 1) -> (Write, 1) -> (WriteString, 2) -> (generateMaskingKey, 0) -> (yaml_emitter_set_writer_error, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Get, 1) -> (get, 1) -> (New, 3) -> (Handshake, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Reset, 0) -> (resetBuf, 0) -> (New, 3) -> (Handshake, 3) -> (connect, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (ChannelVoiceJoin, 4) -> (NewServer, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (NewServer, 2) -> (serve, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) -> (loop, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (ChannelVoiceJoin, 4) -> (NewServer, 2) -> (serve, 2) -> (cmdFunc, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (ChannelVoiceJoin, 4) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (Get, 1) -> (get, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (ReadWireMessage, 1) -> (DecodeError, 1) -> (Timeout, 0) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (Reset, 0) -> (resetBuf, 0) -> (New, 3) -> (Handshake, 3) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Reset, 0) -> (resetBuf, 0) -> (New, 3) -> (Handshake, 3) -> (connect, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (wsListen, 2) -> (onEvent, 1) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) -> (serve, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (ChannelVoiceJoin, 4) -> (ServeConn, 2) -> (serve, 0) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) -> (loop, 0) -> (Println, 2) -> (handlePacket, 1) -> (responseMessageReceived, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (docSliceFromRaw, 2) -> (MutableDocument, 0) -> (Get, 1) -> (get, 1) -> (New, 3) -> (Handshake, 3) -> (connect, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (Connect, 1) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClient, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) -> (setParent, 1) -> (Consume, 1) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) -> (open, 0) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) -> (serve, 2) -> (cmdFunc, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) -> (Dial, 3) -> (DialConfig, 1) -> (NewClient, 2) -> (newHybiClientConn, 3) -> (newHybiConn, 4) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Flush, 0) -> (Buffered, 0) -> (ref, 0) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (Get, 1) -> (get, 1) -> (New, 3) -> (Handshake, 3) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (wsListen, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (wsListen, 2) -> (onEvent, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (NewServer, 2) -> (serve, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) -> (setParent, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (Release, 1) -> (Front, 0) -> (ref, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (finalMsg, 1) -> (getDerivedKeys, 1) -> (setCache, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (wsListen, 2) -> (onEvent, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (wsListen, 2) -> (onEvent, 1) -> (h, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Reader, 1) -> (Open, 0) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Flush, 0) -> (writeChunk, 1) -> (writeDataFromHandler, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClient, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (wsListen, 2) -> (onEvent, 1) -> (h, 2) (aggregateTest, 4) -> (verifyCollectionContents, 3) -> (Find, 2) -> (Find, 3) -> (WithDeadline, 2) -> (AfterFunc, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (WriteWireMessage, 2) -> (Error, 1) -> (Timeout, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) -> (mapper, 1) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) -> (open, 0) -> (wsListen, 2) -> (onEvent, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) -> (loop, 0) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (WithCancel, 1) -> (cancel, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Reset, 0) -> (resetBuf, 0) -> (New, 3) -> (Handshake, 3) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (NewServer, 2) -> (serve, 2) -> (cmdFunc, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (Connect, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) -> (mapper, 1) -> (Execute, 1) -> (serve, 2) -> (cmdFunc, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) -> (Dial, 3) -> (DialConfig, 1) -> (NewClient, 2) -> (newHybiClientConn, 3) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (Reset, 0) -> (resetBuf, 0) -> (New, 3) -> (Handshake, 3) -> (connect, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (Get, 1) -> (get, 1) -> (New, 3) -> (Handshake, 3) -> (connect, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) -> (mapper, 1) -> (Execute, 1) -> (serve, 0) -> (shutDownIn, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) -> (Dial, 3) -> (DialConfig, 1) -> (NewClient, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) -> (mapper, 1) -> (Execute, 1) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) -> (setParent, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) -> (mapper, 1) -> (Execute, 1) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) -> (setParent, 1) -> (Consume, 1) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (WriteWireMessage, 2) -> (Error, 1) -> (Timeout, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (Flush, 0) -> (writeChunk, 1) -> (writeDataFromHandler, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (ChannelVoiceJoin, 4) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (wsListen, 2) -> (onEvent, 1) -> (h, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) -> (serve, 2) -> (cmdFunc, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (Get, 1) -> (get, 1) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (Flush, 0) -> (writeChunk, 1) -> (writeDataFromHandler, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) -> (Dial, 3) -> (DialConfig, 1) -> (NewClient, 2) -> (newHybiClientConn, 3) -> (newHybiConn, 4) -> (handlePacket, 1) -> (responseMessageReceived, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (Reset, 0) -> (resetBuf, 0) -> (New, 3) -> (Handshake, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (Connect, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) -> (serve, 2) -> (cmdFunc, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (ChannelVoiceJoin, 4) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Reader, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) -> (mapper, 1) -> (Execute, 1) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Write, 1) -> (WriteString, 2) -> (generateMaskingKey, 0) -> (yaml_emitter_set_writer_error, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (docSliceFromRaw, 2) -> (MutableDocument, 0) -> (Get, 1) -> (get, 1) -> (New, 3) -> (Handshake, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) -> (mapper, 1) -> (Execute, 1) -> (serve, 2) ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 54e2089536ec92da137c78869f0023e47b2ae354
1.0
Noxdew/Knights-Of-Discord: vendor/github.com/mongodb/mongo-go-driver/mongo/crud_spec_test.go; 51 LoC - Found a possible issue in [Noxdew/Knights-Of-Discord](https://www.github.com/Noxdew/Knights-Of-Discord) at [vendor/github.com/mongodb/mongo-go-driver/mongo/crud_spec_test.go](https://github.com/Noxdew/Knights-Of-Discord/blob/54e2089536ec92da137c78869f0023e47b2ae354/vendor/github.com/mongodb/mongo-go-driver/mongo/crud_spec_test.go#L126-L176) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call which takes a reference to test at line 148 may start a goroutine [Click here to see the code in its original context.](https://github.com/Noxdew/Knights-Of-Discord/blob/54e2089536ec92da137c78869f0023e47b2ae354/vendor/github.com/mongodb/mongo-go-driver/mongo/crud_spec_test.go#L126-L176) <details> <summary>Click here to show the 51 line(s) of Go which triggered the analyzer.</summary> ```go for _, test := range testfile.Tests { collName := sanitizeCollectionName("crud-spec-tests", test.Description) _, _ = db.RunCommand( context.Background(), bson.NewDocument(bson.EC.String("drop", collName)), ) if test.Outcome.Collection != nil && len(test.Outcome.Collection.Name) > 0 { _, _ = db.RunCommand( context.Background(), bson.NewDocument(bson.EC.String("drop", test.Outcome.Collection.Name)), ) } coll := db.Collection(collName) docsToInsert := docSliceToInterfaceSlice(docSliceFromRaw(t, testfile.Data)) _, err = coll.InsertMany(context.Background(), docsToInsert) require.NoError(t, err) switch test.Operation.Name { case "aggregate": aggregateTest(t, db, coll, &test) case "count": countTest(t, coll, &test) case "distinct": distinctTest(t, coll, &test) case "find": findTest(t, coll, &test) case "deleteMany": deleteManyTest(t, coll, &test) case "deleteOne": deleteOneTest(t, coll, &test) case "findOneAndDelete": findOneAndDeleteTest(t, coll, &test) case "findOneAndReplace": findOneAndReplaceTest(t, coll, &test) case "findOneAndUpdate": findOneAndUpdateTest(t, coll, &test) case "insertMany": insertManyTest(t, coll, &test) case "insertOne": insertOneTest(t, coll, &test) case "replaceOne": replaceOneTest(t, coll, &test) case "updateMany": updateManyTest(t, coll, &test) case "updateOne": updateOneTest(t, coll, &test) } } ``` </details> <details> <summary>Click here to show extra information the analyzer produced.</summary> ``` The following paths through the callgraph could lead to a goroutine: (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Release, 1) -> (Front, 0) -> (ref, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (NewServer, 2) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) -> (serve, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) -> (loop, 0) -> (Println, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (docSliceFromRaw, 2) -> (MutableDocument, 0) -> (Get, 1) -> (get, 1) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (Lookup, 1) -> (elem, 1) -> (lookupString, 1) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (New, 1) -> (Merge, 1) -> (next32, 0) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (ReadWireMessage, 1) -> (DecodeError, 1) -> (Timeout, 0) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (Get, 1) -> (get, 1) -> (AddUint64, 2) -> (Handshake, 3) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (NewServer, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClient, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (ChannelVoiceJoin, 4) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) -> (setParent, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (ChannelVoiceJoin, 4) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) -> (setParent, 1) -> (Consume, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Reset, 0) -> (resetBuf, 0) -> (New, 3) -> (Handshake, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) -> (mapper, 1) -> (Execute, 1) -> (serve, 0) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) -> (loop, 0) -> (Println, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) -> (serve, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) -> (setParent, 1) -> (Consume, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (readLine, 0) -> (Add, 1) -> (ref, 0) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (Get, 1) -> (get, 1) -> (AddUint64, 2) -> (Handshake, 3) -> (connect, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (ChannelVoiceJoin, 4) -> (NewServer, 2) -> (serve, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) -> (serve, 2) -> (cmdFunc, 2) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) -> (open, 0) -> (wsListen, 2) -> (onEvent, 1) -> (h, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) -> (loop, 0) -> (Println, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) -> (setParent, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (Reset, 0) -> (resetBuf, 0) -> (New, 3) -> (Handshake, 3) -> (connect, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) -> (loop, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) -> (loop, 0) -> (Println, 2) -> (handlePacket, 1) -> (responseMessageReceived, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Reader, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (connect, 3) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) -> (open, 0) -> (wsListen, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (NewServer, 2) -> (serve, 2) -> (cmdFunc, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) -> (Dial, 3) -> (DialConfig, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) -> (serve, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) -> (mapper, 1) -> (Execute, 1) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (Connect, 1) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (wsListen, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) -> (loop, 0) -> (Println, 2) -> (handlePacket, 1) -> (responseMessageReceived, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (wsListen, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Get, 1) -> (get, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (Release, 1) -> (Front, 0) -> (ref, 0) (aggregateTest, 4) -> (Collation, 1) -> (SubDocument, 2) -> (panic, 1) -> (Write, 1) -> (WriteString, 2) -> (generateMaskingKey, 0) -> (yaml_emitter_set_writer_error, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Get, 1) -> (get, 1) -> (New, 3) -> (Handshake, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Reset, 0) -> (resetBuf, 0) -> (New, 3) -> (Handshake, 3) -> (connect, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (ChannelVoiceJoin, 4) -> (NewServer, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (NewServer, 2) -> (serve, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) -> (loop, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (ChannelVoiceJoin, 4) -> (NewServer, 2) -> (serve, 2) -> (cmdFunc, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (ChannelVoiceJoin, 4) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (Get, 1) -> (get, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (ReadWireMessage, 1) -> (DecodeError, 1) -> (Timeout, 0) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (Reset, 0) -> (resetBuf, 0) -> (New, 3) -> (Handshake, 3) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Reset, 0) -> (resetBuf, 0) -> (New, 3) -> (Handshake, 3) -> (connect, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (wsListen, 2) -> (onEvent, 1) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) -> (serve, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (ChannelVoiceJoin, 4) -> (ServeConn, 2) -> (serve, 0) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) -> (loop, 0) -> (Println, 2) -> (handlePacket, 1) -> (responseMessageReceived, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (docSliceFromRaw, 2) -> (MutableDocument, 0) -> (Get, 1) -> (get, 1) -> (New, 3) -> (Handshake, 3) -> (connect, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (Connect, 1) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClient, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) -> (setParent, 1) -> (Consume, 1) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) -> (open, 0) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) -> (serve, 2) -> (cmdFunc, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) -> (Dial, 3) -> (DialConfig, 1) -> (NewClient, 2) -> (newHybiClientConn, 3) -> (newHybiConn, 4) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Flush, 0) -> (Buffered, 0) -> (ref, 0) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (Get, 1) -> (get, 1) -> (New, 3) -> (Handshake, 3) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (wsListen, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (wsListen, 2) -> (onEvent, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (NewServer, 2) -> (serve, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) -> (setParent, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (Release, 1) -> (Front, 0) -> (ref, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (finalMsg, 1) -> (getDerivedKeys, 1) -> (setCache, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (wsListen, 2) -> (onEvent, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (wsListen, 2) -> (onEvent, 1) -> (h, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Reader, 1) -> (Open, 0) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Flush, 0) -> (writeChunk, 1) -> (writeDataFromHandler, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClient, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (wsListen, 2) -> (onEvent, 1) -> (h, 2) (aggregateTest, 4) -> (verifyCollectionContents, 3) -> (Find, 2) -> (Find, 3) -> (WithDeadline, 2) -> (AfterFunc, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (WriteWireMessage, 2) -> (Error, 1) -> (Timeout, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) -> (mapper, 1) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) -> (open, 0) -> (wsListen, 2) -> (onEvent, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) -> (loop, 0) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (WithCancel, 1) -> (cancel, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Reset, 0) -> (resetBuf, 0) -> (New, 3) -> (Handshake, 3) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (NewServer, 2) -> (serve, 2) -> (cmdFunc, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (Connect, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) -> (mapper, 1) -> (Execute, 1) -> (serve, 2) -> (cmdFunc, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) -> (Dial, 3) -> (DialConfig, 1) -> (NewClient, 2) -> (newHybiClientConn, 3) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (Reset, 0) -> (resetBuf, 0) -> (New, 3) -> (Handshake, 3) -> (connect, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (Get, 1) -> (get, 1) -> (New, 3) -> (Handshake, 3) -> (connect, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) -> (mapper, 1) -> (Execute, 1) -> (serve, 0) -> (shutDownIn, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (NewClientConn, 3) -> (newMux, 1) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) -> (Dial, 3) -> (DialConfig, 1) -> (NewClient, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) -> (mapper, 1) -> (Execute, 1) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) -> (setParent, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) -> (mapper, 1) -> (Execute, 1) -> (serve, 0) -> (shutDownIn, 1) -> (runHandler, 3) -> (handler, 2) -> (setParent, 1) -> (Consume, 1) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (WriteWireMessage, 2) -> (Error, 1) -> (Timeout, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (Flush, 0) -> (writeChunk, 1) -> (writeDataFromHandler, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (ChannelVoiceJoin, 4) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) -> (wsListen, 2) -> (onEvent, 1) -> (h, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) -> (serve, 2) -> (cmdFunc, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (Get, 1) -> (get, 1) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (Flush, 0) -> (writeChunk, 1) -> (writeDataFromHandler, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (handleEvent, 2) -> (handle, 2) -> (leafCert, 1) -> (CreateCertificate, 5) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Close, 0) -> (ucol_close, 1) -> (streamByID, 2) -> (logWrite, 0) -> (Dial, 3) -> (DialConfig, 1) -> (NewClient, 2) -> (newHybiClientConn, 3) -> (newHybiConn, 4) -> (handlePacket, 1) -> (responseMessageReceived, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (handleEvent, 2) -> (handle, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Next, 1) -> (ControlMessageSpace, 1) -> (Reset, 0) -> (resetBuf, 0) -> (New, 3) -> (Handshake, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (Connect, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) -> (serve, 2) -> (cmdFunc, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) -> (ChannelVoiceJoin, 4) -> (ServeConn, 2) -> (serve, 0) -> (shutDownIn, 1) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (Reader, 1) -> (Open, 0) -> (listen, 2) -> (reconnect, 0) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) -> (mapper, 1) -> (Execute, 1) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (New, 1) -> (Merge, 1) -> (next32, 0) -> (sortIter, 1) -> (NewServer, 2) (aggregateTest, 4) -> (Aggregate, 3) -> (Aggregate, 7) -> (RoundTrip, 4) -> (updateDescription, 2) -> (Store, 1) -> (countSparseEntries, 1) -> (mostFrequentStride, 1) -> (ServeConn, 2) -> (serve, 0) (aggregateTest, 4) -> (Collation, 1) -> (Append, 1) -> (uint32, 1) -> (Write, 1) -> (WriteString, 2) -> (generateMaskingKey, 0) -> (yaml_emitter_set_writer_error, 2) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (docSliceFromRaw, 2) -> (MutableDocument, 0) -> (Get, 1) -> (get, 1) -> (New, 3) -> (Handshake, 3) (aggregateTest, 4) -> (verifyCursorResult, 3) -> (Decode, 1) -> (finalize, 1) -> (, 0) -> (newWidthTrie, 1) -> (mapper, 1) -> (Execute, 1) -> (serve, 2) ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 54e2089536ec92da137c78869f0023e47b2ae354
non_process
noxdew knights of discord vendor github com mongodb mongo go driver mongo crud spec test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to test at line may start a goroutine click here to show the line s of go which triggered the analyzer go for test range testfile tests collname sanitizecollectionname crud spec tests test description db runcommand context background bson newdocument bson ec string drop collname if test outcome collection nil len test outcome collection name db runcommand context background bson newdocument bson ec string drop test outcome collection name coll db collection collname docstoinsert docslicetointerfaceslice docslicefromraw t testfile data err coll insertmany context background docstoinsert require noerror t err switch test operation name case aggregate aggregatetest t db coll test case count counttest t coll test case distinct distincttest t coll test case find findtest t coll test case deletemany deletemanytest t coll test case deleteone deleteonetest t coll test case findoneanddelete findoneanddeletetest t coll test case findoneandreplace findoneandreplacetest t coll test case findoneandupdate findoneandupdatetest t coll test case insertmany insertmanytest t coll test case insertone insertonetest t coll test case replaceone replaceonetest t coll test case updatemany updatemanytest t coll test case updateone updateonetest t coll test click here to show extra information the analyzer produced the following paths through the callgraph could lead to a goroutine aggregatetest verifycursorresult decode finalize release front ref aggregatetest verifycursorresult next controlmessagespace updatedescription store countsparseentries mostfrequentstride newserver aggregatetest collation append new merge sortiter newserver serve aggregatetest verifycursorresult decode finalize handleevent handle leafcert createcertificate newclientconn newmux loop println aggregatetest verifycursorresult docslicefromraw mutabledocument get get aggregatetest aggregate aggregate roundtrip lookup elem lookupstring aggregatetest aggregate aggregate roundtrip new merge aggregatetest aggregate aggregate roundtrip readwiremessage decodeerror timeout aggregatetest aggregate aggregate roundtrip get get handshake aggregatetest aggregate aggregate roundtrip updatedescription store countsparseentries mostfrequentstride newserver aggregatetest verifycursorresult decode finalize handleevent handle leafcert createcertificate newclient aggregatetest verifycursorresult decode finalize open listen reconnect channelvoicejoin serveconn serve shutdownin runhandler handler setparent aggregatetest verifycursorresult decode finalize open listen reconnect channelvoicejoin serveconn serve shutdownin runhandler handler setparent consume aggregatetest verifycursorresult decode finalize open listen reconnect aggregatetest collation append reset resetbuf new handshake aggregatetest verifycursorresult decode finalize newwidthtrie mapper execute serve aggregatetest aggregate aggregate roundtrip handleevent handle leafcert createcertificate newclientconn newmux aggregatetest verifycursorresult next controlmessagespace updatedescription store countsparseentries mostfrequentstride serveconn serve shutdownin aggregatetest verifycursorresult next controlmessagespace handleevent handle leafcert createcertificate newclientconn newmux loop println aggregatetest verifycursorresult decode finalize handleevent handle aggregatetest verifycursorresult next controlmessagespace new merge sortiter newserver serve aggregatetest aggregate aggregate roundtrip updatedescription store countsparseentries mostfrequentstride serveconn serve shutdownin runhandler handler setparent consume aggregatetest verifycursorresult decode finalize readline add ref aggregatetest aggregate aggregate roundtrip get get handshake connect aggregatetest verifycursorresult decode finalize open listen reconnect channelvoicejoin newserver serve aggregatetest aggregate aggregate roundtrip updatedescription store countsparseentries mostfrequentstride serveconn serve shutdownin aggregatetest verifycursorresult next controlmessagespace new merge sortiter newserver serve cmdfunc aggregatetest collation append close ucol close streambyid logwrite open wslisten onevent h aggregatetest aggregate aggregate roundtrip handleevent handle leafcert createcertificate newclientconn newmux loop println aggregatetest verifycursorresult next controlmessagespace handleevent handle leafcert createcertificate aggregatetest aggregate aggregate roundtrip updatedescription store countsparseentries mostfrequentstride serveconn serve shutdownin runhandler handler setparent aggregatetest verifycursorresult next controlmessagespace reset resetbuf new handshake connect aggregatetest verifycursorresult next controlmessagespace handleevent handle leafcert createcertificate newclientconn newmux loop aggregatetest verifycursorresult decode finalize handleevent handle leafcert createcertificate newclientconn newmux loop println handlepacket responsemessagereceived aggregatetest verifycursorresult decode finalize reader open listen reconnect connect aggregatetest collation append close ucol close streambyid logwrite open wslisten aggregatetest verifycursorresult next controlmessagespace updatedescription store countsparseentries mostfrequentstride newserver serve cmdfunc aggregatetest aggregate aggregate roundtrip aggregatetest verifycursorresult decode finalize open aggregatetest collation append close ucol close streambyid logwrite dial dialconfig aggregatetest verifycursorresult decode finalize new merge sortiter newserver serve aggregatetest verifycursorresult decode finalize newwidthtrie mapper execute serve shutdownin runhandler handler aggregatetest collation append new merge sortiter connect aggregatetest aggregate aggregate roundtrip handleevent handle leafcert createcertificate wslisten aggregatetest verifycursorresult next controlmessagespace handleevent handle leafcert createcertificate newclientconn newmux loop println handlepacket responsemessagereceived aggregatetest verifycursorresult decode finalize handleevent handle leafcert createcertificate wslisten aggregatetest verifycursorresult decode finalize get get aggregatetest verifycursorresult next controlmessagespace release front ref aggregatetest collation subdocument panic write writestring generatemaskingkey yaml emitter set writer error aggregatetest verifycursorresult decode finalize get get new handshake aggregatetest verifycursorresult decode finalize reset resetbuf new handshake connect aggregatetest verifycursorresult decode finalize new merge sortiter newserver aggregatetest verifycursorresult decode finalize open listen reconnect channelvoicejoin newserver aggregatetest aggregate aggregate roundtrip updatedescription store countsparseentries mostfrequentstride newserver serve aggregatetest verifycursorresult next controlmessagespace handleevent handle leafcert createcertificate newclientconn newmux aggregatetest aggregate aggregate roundtrip handleevent handle leafcert createcertificate newclientconn newmux loop aggregatetest verifycursorresult decode finalize open listen reconnect channelvoicejoin newserver serve cmdfunc aggregatetest verifycursorresult decode finalize open listen reconnect channelvoicejoin serveconn serve shutdownin runhandler handler aggregatetest verifycursorresult next controlmessagespace get get aggregatetest verifycursorresult next controlmessagespace readwiremessage decodeerror timeout aggregatetest aggregate aggregate roundtrip reset resetbuf new handshake aggregatetest collation append reset resetbuf new handshake connect aggregatetest verifycursorresult next controlmessagespace new merge sortiter newserver aggregatetest verifycursorresult next controlmessagespace handleevent handle leafcert createcertificate wslisten onevent aggregatetest aggregate aggregate roundtrip handleevent handle aggregatetest collation append new merge aggregatetest aggregate aggregate roundtrip new merge sortiter newserver serve aggregatetest verifycursorresult decode finalize open listen reconnect channelvoicejoin serveconn serve aggregatetest aggregate aggregate roundtrip updatedescription store countsparseentries mostfrequentstride serveconn serve shutdownin runhandler handler aggregatetest verifycursorresult next controlmessagespace updatedescription store countsparseentries mostfrequentstride serveconn serve shutdownin runhandler handler aggregatetest aggregate aggregate roundtrip handleevent handle leafcert createcertificate newclientconn newmux loop println handlepacket responsemessagereceived aggregatetest verifycursorresult docslicefromraw mutabledocument get get new handshake connect aggregatetest verifycursorresult next controlmessagespace new merge sortiter connect aggregatetest collation append close ucol close streambyid logwrite aggregatetest aggregate aggregate roundtrip handleevent handle leafcert createcertificate newclient aggregatetest verifycursorresult next controlmessagespace updatedescription store countsparseentries mostfrequentstride serveconn serve shutdownin runhandler handler setparent consume aggregatetest collation append close ucol close streambyid logwrite open aggregatetest collation append new merge sortiter newserver serve cmdfunc aggregatetest verifycursorresult decode finalize newwidthtrie aggregatetest verifycursorresult next controlmessagespace updatedescription store countsparseentries aggregatetest collation append close ucol close streambyid logwrite dial dialconfig newclient newhybiclientconn newhybiconn aggregatetest verifycursorresult decode finalize flush buffered ref aggregatetest aggregate aggregate roundtrip updatedescription store countsparseentries mostfrequentstride aggregatetest verifycursorresult next controlmessagespace get get new handshake aggregatetest aggregate aggregate roundtrip new merge sortiter newserver aggregatetest verifycursorresult next controlmessagespace handleevent handle leafcert createcertificate wslisten aggregatetest aggregate aggregate roundtrip handleevent handle leafcert createcertificate wslisten onevent aggregatetest verifycursorresult next controlmessagespace updatedescription store countsparseentries mostfrequentstride newserver serve aggregatetest verifycursorresult next controlmessagespace updatedescription store countsparseentries mostfrequentstride serveconn serve shutdownin runhandler handler setparent aggregatetest verifycursorresult next controlmessagespace aggregatetest aggregate aggregate roundtrip release front ref aggregatetest verifycursorresult next controlmessagespace finalmsg getderivedkeys setcache aggregatetest verifycursorresult decode finalize handleevent handle leafcert createcertificate wslisten onevent aggregatetest verifycursorresult decode finalize handleevent handle leafcert createcertificate wslisten onevent h aggregatetest verifycursorresult decode finalize reader open aggregatetest collation append close ucol close streambyid aggregatetest verifycursorresult decode finalize flush writechunk writedatafromhandler aggregatetest verifycursorresult decode finalize new merge aggregatetest verifycursorresult next controlmessagespace handleevent handle leafcert createcertificate newclient aggregatetest verifycursorresult next controlmessagespace handleevent handle leafcert createcertificate wslisten onevent h aggregatetest verifycollectioncontents find find withdeadline afterfunc aggregatetest aggregate aggregate roundtrip updatedescription store countsparseentries aggregatetest verifycursorresult next controlmessagespace writewiremessage error timeout aggregatetest verifycursorresult decode finalize newwidthtrie mapper aggregatetest aggregate aggregate roundtrip handleevent handle leafcert createcertificate aggregatetest collation append close ucol close streambyid logwrite open wslisten onevent aggregatetest verifycursorresult decode finalize handleevent handle leafcert createcertificate newclientconn newmux loop aggregatetest collation append withcancel cancel aggregatetest verifycursorresult decode finalize reset resetbuf new handshake aggregatetest aggregate aggregate roundtrip updatedescription store countsparseentries mostfrequentstride newserver serve cmdfunc aggregatetest aggregate aggregate roundtrip new merge sortiter connect aggregatetest verifycursorresult decode finalize newwidthtrie mapper execute serve cmdfunc aggregatetest verifycursorresult next controlmessagespace updatedescription store countsparseentries mostfrequentstride serveconn serve aggregatetest collation append close ucol close streambyid logwrite dial dialconfig newclient newhybiclientconn aggregatetest aggregate aggregate roundtrip reset resetbuf new handshake connect aggregatetest verifycursorresult next controlmessagespace get get new handshake connect aggregatetest verifycursorresult decode finalize newwidthtrie mapper execute serve shutdownin aggregatetest verifycursorresult decode finalize handleevent handle leafcert createcertificate newclientconn newmux aggregatetest collation append close ucol close streambyid logwrite dial dialconfig newclient aggregatetest verifycursorresult decode finalize newwidthtrie mapper execute serve shutdownin runhandler handler setparent aggregatetest verifycursorresult decode finalize newwidthtrie mapper execute serve shutdownin runhandler handler setparent consume aggregatetest aggregate aggregate roundtrip writewiremessage error timeout aggregatetest verifycursorresult next controlmessagespace flush writechunk writedatafromhandler aggregatetest verifycursorresult decode finalize open listen reconnect channelvoicejoin aggregatetest aggregate aggregate roundtrip handleevent handle leafcert createcertificate wslisten onevent h aggregatetest aggregate aggregate roundtrip new merge sortiter newserver serve cmdfunc aggregatetest aggregate aggregate roundtrip get get aggregatetest aggregate aggregate roundtrip flush writechunk writedatafromhandler aggregatetest verifycursorresult decode finalize handleevent handle leafcert createcertificate aggregatetest collation append close ucol close streambyid logwrite dial dialconfig newclient newhybiclientconn newhybiconn handlepacket responsemessagereceived aggregatetest verifycursorresult next controlmessagespace handleevent handle aggregatetest verifycursorresult next controlmessagespace new merge aggregatetest verifycursorresult next controlmessagespace reset resetbuf new handshake aggregatetest verifycursorresult decode finalize new merge sortiter connect aggregatetest verifycursorresult decode finalize new merge sortiter newserver serve cmdfunc aggregatetest verifycursorresult decode finalize open listen reconnect channelvoicejoin serveconn serve shutdownin aggregatetest verifycursorresult decode finalize reader open listen reconnect aggregatetest verifycursorresult decode finalize newwidthtrie mapper execute aggregatetest collation append new merge sortiter newserver aggregatetest aggregate aggregate roundtrip updatedescription store countsparseentries mostfrequentstride serveconn serve aggregatetest collation append write writestring generatemaskingkey yaml emitter set writer error aggregatetest verifycursorresult docslicefromraw mutabledocument get get new handshake aggregatetest verifycursorresult decode finalize newwidthtrie mapper execute serve leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
0
6,700
9,814,742,406
IssuesEvent
2019-06-13 10:55:25
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
gdal2tiles very slow compared to QGIS 2.18
Bug Feedback Processing
Author Name: **Karsten Tebling** (Karsten Tebling) Original Redmine Issue: [21819](https://issues.qgis.org/issues/21819) Affected QGIS version: 3.6.1 Redmine category:processing/gdal --- I tried to generate tiles for zoom levels 10-11 for a roughly 2GB compressed DOP, with QGIS 3.6.1 it took about 581 minutes to finish. I also tried it with QGIS 2.18.28 and it only took around 8 seconds for the same DOP.
1.0
gdal2tiles very slow compared to QGIS 2.18 - Author Name: **Karsten Tebling** (Karsten Tebling) Original Redmine Issue: [21819](https://issues.qgis.org/issues/21819) Affected QGIS version: 3.6.1 Redmine category:processing/gdal --- I tried to generate tiles for zoom levels 10-11 for a roughly 2GB compressed DOP, with QGIS 3.6.1 it took about 581 minutes to finish. I also tried it with QGIS 2.18.28 and it only took around 8 seconds for the same DOP.
process
very slow compared to qgis author name karsten tebling karsten tebling original redmine issue affected qgis version redmine category processing gdal i tried to generate tiles for zoom levels for a roughly compressed dop with qgis it took about minutes to finish i also tried it with qgis and it only took around seconds for the same dop
1
78,777
7,668,423,239
IssuesEvent
2018-05-14 05:37:39
rancher/rancher
https://api.github.com/repos/rancher/rancher
closed
Unable to view upgrade status when upgrading charts
area/catalog kind/enhancement status/resolved status/to-test version/2.0
**Rancher versions:** master 04/27 **Docker version: (`docker version`,`docker info` preferred)** 17.03.2-ce **Operating system and kernel: (`cat /etc/os-release`, `uname -r` preferred)** Ubuntu 16.04.4 LTS 4.4.0-1052-aws **Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)** AWS When I upgraded the chart in catalog apps, I couldn’t see the real-time status of the chart. The status of the chart has always been 'active'
1.0
Unable to view upgrade status when upgrading charts - **Rancher versions:** master 04/27 **Docker version: (`docker version`,`docker info` preferred)** 17.03.2-ce **Operating system and kernel: (`cat /etc/os-release`, `uname -r` preferred)** Ubuntu 16.04.4 LTS 4.4.0-1052-aws **Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)** AWS When I upgraded the chart in catalog apps, I couldn’t see the real-time status of the chart. The status of the chart has always been 'active'
non_process
unable to view upgrade status when upgrading charts rancher versions master docker version docker version docker info preferred ce operating system and kernel cat etc os release uname r preferred ubuntu lts aws type provider of hosts virtualbox bare metal aws gce do aws when i upgraded the chart in catalog apps i couldn’t see the real time status of the chart the status of the chart has always been active
0
783
3,265,873,236
IssuesEvent
2015-10-22 18:10:17
USC-CSSL/TACIT
https://api.github.com/repos/USC-CSSL/TACIT
opened
TACIT word count: preprocess twitter json fails
bug Preprocessing word count
If you put a json file in to word count (or a corpus), it will run a word count on the entire file. However, if you click preprocess, it will say it created a new preprocessed file, but it wont actually create anything and then it wont give you a word count output.
1.0
TACIT word count: preprocess twitter json fails - If you put a json file in to word count (or a corpus), it will run a word count on the entire file. However, if you click preprocess, it will say it created a new preprocessed file, but it wont actually create anything and then it wont give you a word count output.
process
tacit word count preprocess twitter json fails if you put a json file in to word count or a corpus it will run a word count on the entire file however if you click preprocess it will say it created a new preprocessed file but it wont actually create anything and then it wont give you a word count output
1
100,642
4,099,777,274
IssuesEvent
2016-06-03 13:58:09
jpppina/migracion-galeno-art-forms11g
https://api.github.com/repos/jpppina/migracion-galeno-art-forms11g
opened
No se visualiza los botones
Aplicación-ART Error Priority-Low
Sellados Provinciales-->Informacion a Entidades Provinciales-->Sellados Prov. Prest. Interfase Usuario: RIALM Pass: desaa002 No se pueden ver los botones
1.0
No se visualiza los botones - Sellados Provinciales-->Informacion a Entidades Provinciales-->Sellados Prov. Prest. Interfase Usuario: RIALM Pass: desaa002 No se pueden ver los botones
non_process
no se visualiza los botones sellados provinciales informacion a entidades provinciales sellados prov prest interfase usuario rialm pass no se pueden ver los botones
0
6,137
8,999,231,516
IssuesEvent
2019-02-03 07:40:10
SerialLain3170/GAN-papers
https://api.github.com/repos/SerialLain3170/GAN-papers
opened
Controllable Image-to-Video Translation:A Case Study on Facial Expression Generation
Video Processing
# Paper [Controllable Image-to-Video Translation:A Case Study on Facial Expression Generation](https://arxiv.org/pdf/1808.02992.pdf) # Summary - Encoderを2つ用意して片方の出力にフレーム毎の変数を掛けて、もう片方の出力に足したものをDecoderの入力に入れる - Adversarial lossやReconstruction lossに加え、Temporary loss, Landmark prediction lossを考慮 ![screenshot from 2019-02-03 16-29-35](https://user-images.githubusercontent.com/32360147/52173972-1b3ad780-27d1-11e9-892d-e307dd7c18b9.png) # Date 2018/08/09
1.0
Controllable Image-to-Video Translation:A Case Study on Facial Expression Generation - # Paper [Controllable Image-to-Video Translation:A Case Study on Facial Expression Generation](https://arxiv.org/pdf/1808.02992.pdf) # Summary - Encoderを2つ用意して片方の出力にフレーム毎の変数を掛けて、もう片方の出力に足したものをDecoderの入力に入れる - Adversarial lossやReconstruction lossに加え、Temporary loss, Landmark prediction lossを考慮 ![screenshot from 2019-02-03 16-29-35](https://user-images.githubusercontent.com/32360147/52173972-1b3ad780-27d1-11e9-892d-e307dd7c18b9.png) # Date 2018/08/09
process
controllable image to video translation a case study on facial expression generation paper summary 、もう片方の出力に足したものをdecoderの入力に入れる adversarial lossやreconstruction lossに加え、temporary loss landmark prediction lossを考慮 date
1
15,620
19,762,212,131
IssuesEvent
2022-01-16 15:46:38
ForNeVeR/Cesium
https://api.github.com/repos/ForNeVeR/Cesium
opened
C17-compliant preprocessor
kind:feature status:help-wanted area:standard-support area:preprocessor
The section **6.10 Preprocessing directives** of the C standard defines the requirements to the C preprocessor. We should fulfill them. - [ ] 6.10 Preprocessing directives - [ ] 6.10.1 Conditional inclusion - [ ] 6.10.2 Source file inclusion - [ ] 6.10.3 Macro replacement - [ ] 6.10.3.1 Argument substitution - [ ] 6.10.3.2 The # operator - [ ] 6.10.3.3 The ## operator - [ ] 6.10.3.4. Rescanning and further replacement - [ ] 6.10.3.5 Scope of macro definitions - [ ] 6.10.4 Line control - [ ] 6.10.5 Error directive - [ ] 6.10.6 Pragma directive - [ ] 6.10.7 Null directive - [ ] 6.10.8 Predefined macro names - [ ] 6.10.8.1 Mandatory macros - [ ] 6.10.8.2 Environment macros - [ ] 6.10.8.3 Conditional feature macros - [ ] 6.10.9 Pragma operator
1.0
C17-compliant preprocessor - The section **6.10 Preprocessing directives** of the C standard defines the requirements to the C preprocessor. We should fulfill them. - [ ] 6.10 Preprocessing directives - [ ] 6.10.1 Conditional inclusion - [ ] 6.10.2 Source file inclusion - [ ] 6.10.3 Macro replacement - [ ] 6.10.3.1 Argument substitution - [ ] 6.10.3.2 The # operator - [ ] 6.10.3.3 The ## operator - [ ] 6.10.3.4. Rescanning and further replacement - [ ] 6.10.3.5 Scope of macro definitions - [ ] 6.10.4 Line control - [ ] 6.10.5 Error directive - [ ] 6.10.6 Pragma directive - [ ] 6.10.7 Null directive - [ ] 6.10.8 Predefined macro names - [ ] 6.10.8.1 Mandatory macros - [ ] 6.10.8.2 Environment macros - [ ] 6.10.8.3 Conditional feature macros - [ ] 6.10.9 Pragma operator
process
compliant preprocessor the section preprocessing directives of the c standard defines the requirements to the c preprocessor we should fulfill them preprocessing directives conditional inclusion source file inclusion macro replacement argument substitution the operator the operator rescanning and further replacement scope of macro definitions line control error directive pragma directive null directive predefined macro names mandatory macros environment macros conditional feature macros pragma operator
1
12,784
15,166,085,985
IssuesEvent
2021-02-12 15:54:12
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
cell growth mode switching, bipolar to monopolar TPV?
PomBase cellular processes parent relationship query
I know we have discussed this many times before, but I can't work out if something has changed and it is now incorrect, or it was always like this and I misremebered I want to use the term https://www.ebi.ac.uk/QuickGO/term/GO:0051524 GO:0051524 cell growth mode switching, bipolar to monopolar (I think this is the term we use for NETO, although it does not have these synonymns.) I'm also pretty sure that in the past this did not have a parentage to growth (because it isn't to do with a size increase, its to do with the direction of growth) Anyway, it now has a parent to "growth" via regulation of cell growth. @ukemi @mah11
1.0
cell growth mode switching, bipolar to monopolar TPV? - I know we have discussed this many times before, but I can't work out if something has changed and it is now incorrect, or it was always like this and I misremebered I want to use the term https://www.ebi.ac.uk/QuickGO/term/GO:0051524 GO:0051524 cell growth mode switching, bipolar to monopolar (I think this is the term we use for NETO, although it does not have these synonymns.) I'm also pretty sure that in the past this did not have a parentage to growth (because it isn't to do with a size increase, its to do with the direction of growth) Anyway, it now has a parent to "growth" via regulation of cell growth. @ukemi @mah11
process
cell growth mode switching bipolar to monopolar tpv i know we have discussed this many times before but i can t work out if something has changed and it is now incorrect or it was always like this and i misremebered i want to use the term go cell growth mode switching bipolar to monopolar i think this is the term we use for neto although it does not have these synonymns i m also pretty sure that in the past this did not have a parentage to growth because it isn t to do with a size increase its to do with the direction of growth anyway it now has a parent to growth via regulation of cell growth ukemi
1
444,333
31,033,122,032
IssuesEvent
2023-08-10 13:41:28
stormatics/pg_cirrus
https://api.github.com/repos/stormatics/pg_cirrus
opened
Update HOWTO web page
documentation
Since there have been changes in how pg_cirrus executes, the [HOWTO](https://stormatics.tech/how-to-use-pg_cirrus-to-setup-a-highly-available-postgresql-cluster) web page must be updated.
1.0
Update HOWTO web page - Since there have been changes in how pg_cirrus executes, the [HOWTO](https://stormatics.tech/how-to-use-pg_cirrus-to-setup-a-highly-available-postgresql-cluster) web page must be updated.
non_process
update howto web page since there have been changes in how pg cirrus executes the web page must be updated
0
21,807
30,316,402,473
IssuesEvent
2023-07-10 15:51:39
tdwg/dwc
https://api.github.com/repos/tdwg/dwc
closed
Change term - countrycode
Term - change Class - Location non-normative Process - complete
Submitter: John Wieczorek (following issue raised by Ian Engelbrecht @ianengelbrecht Issue #221 and tdwg/dwc-qa#141) Justification (why is this change necessary?): Clarity Proponents (who needs this change): Everyone Current Term definition: https://dwc.tdwg.org/list/#dwc_countrycode Proposed new attributes of the term: Usage comments (recommendations regarding content, etc.): Recommended best practice is to use an ISO 3166-1-alpha-2 country code. Recommended best practice is to leave this field blank if the Location spans multiple entities at this administrative level or if the Location might be in one or another of multiple possible entities at this level. Multiplicity and uncertainty of the geographic entity can be captured either in the term higherGeography or in the term locality, or both. Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/countrycode-2017-10-06
1.0
Change term - countrycode - Submitter: John Wieczorek (following issue raised by Ian Engelbrecht @ianengelbrecht Issue #221 and tdwg/dwc-qa#141) Justification (why is this change necessary?): Clarity Proponents (who needs this change): Everyone Current Term definition: https://dwc.tdwg.org/list/#dwc_countrycode Proposed new attributes of the term: Usage comments (recommendations regarding content, etc.): Recommended best practice is to use an ISO 3166-1-alpha-2 country code. Recommended best practice is to leave this field blank if the Location spans multiple entities at this administrative level or if the Location might be in one or another of multiple possible entities at this level. Multiplicity and uncertainty of the geographic entity can be captured either in the term higherGeography or in the term locality, or both. Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/countrycode-2017-10-06
process
change term countrycode submitter john wieczorek following issue raised by ian engelbrecht ianengelbrecht issue and tdwg dwc qa justification why is this change necessary clarity proponents who needs this change everyone current term definition proposed new attributes of the term usage comments recommendations regarding content etc recommended best practice is to use an iso alpha country code recommended best practice is to leave this field blank if the location spans multiple entities at this administrative level or if the location might be in one or another of multiple possible entities at this level multiplicity and uncertainty of the geographic entity can be captured either in the term highergeography or in the term locality or both replaces identifier of the existing term that would be deprecated and replaced by this term if applicable
1
9,745
12,733,961,505
IssuesEvent
2020-06-25 13:11:22
prisma/vscode
https://api.github.com/repos/prisma/vscode
closed
Composite keys are not considered valid?
bug/2-confirmed kind/bug process/candidate team/engines
![image](https://user-images.githubusercontent.com/26666870/85343465-c4a36b80-b4a1-11ea-923d-f6ff578a3b37.png) As the image explains... Happened with prisma `2.0.1` and vscode extension `2.0.3`.
1.0
Composite keys are not considered valid? - ![image](https://user-images.githubusercontent.com/26666870/85343465-c4a36b80-b4a1-11ea-923d-f6ff578a3b37.png) As the image explains... Happened with prisma `2.0.1` and vscode extension `2.0.3`.
process
composite keys are not considered valid as the image explains happened with prisma and vscode extension
1
9,275
12,302,262,844
IssuesEvent
2020-05-11 16:41:02
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Referencing Output Variable from Dependency does not work as described
Pri1 devops-cicd-process/tech devops/prod doc-bug
I'm using the runOnce strategy with the deploy hook, but referencing an output from one of the tasks does not work using this syntax as described here. $[dependencies.&lt;job-name&gt;.outputs['&lt;lifecycle-hookname&gt;.&lt;step-name&gt;.&lt;variable-name&gt;']] The only way I was able to get this to work was actually by using the following syntax $[dependencies.&lt;job-name&gt;.outputs['&lt;job-name&gt;.&lt;step-name&gt;.&lt;variable-name&gt;']] This seems to be corroborated by the comments in this GitHub issue https://github.com/MicrosoftDocs/azure-devops-docs/issues/4946#issuecomment-543366950 --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 5aeeaace-1c5b-a51b-e41f-f25b806155b8 * Version Independent ID: fd7ff690-b2e4-41c7-a342-e528b911c6e1 * Content: [Deployment jobs - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops#feedback) * Content Source: [docs/pipelines/process/deployment-jobs.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/deployment-jobs.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Referencing Output Variable from Dependency does not work as described - I'm using the runOnce strategy with the deploy hook, but referencing an output from one of the tasks does not work using this syntax as described here. $[dependencies.&lt;job-name&gt;.outputs['&lt;lifecycle-hookname&gt;.&lt;step-name&gt;.&lt;variable-name&gt;']] The only way I was able to get this to work was actually by using the following syntax $[dependencies.&lt;job-name&gt;.outputs['&lt;job-name&gt;.&lt;step-name&gt;.&lt;variable-name&gt;']] This seems to be corroborated by the comments in this GitHub issue https://github.com/MicrosoftDocs/azure-devops-docs/issues/4946#issuecomment-543366950 --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 5aeeaace-1c5b-a51b-e41f-f25b806155b8 * Version Independent ID: fd7ff690-b2e4-41c7-a342-e528b911c6e1 * Content: [Deployment jobs - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops#feedback) * Content Source: [docs/pipelines/process/deployment-jobs.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/deployment-jobs.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
referencing output variable from dependency does not work as described i m using the runonce strategy with the deploy hook but referencing an output from one of the tasks does not work using this syntax as described here the only way i was able to get this to work was actually by using the following syntax this seems to be corroborated by the comments in this github issue document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
13,491
16,018,831,373
IssuesEvent
2021-04-20 19:39:20
scikit-learn/scikit-learn
https://api.github.com/repos/scikit-learn/scikit-learn
closed
Change KMeans algorithm for KBinsDiscretizer from 'elkan' (default) to 'full'
Performance module:preprocessing
In KBinsDiscretizer KMeans is used with default parameters (eps=1e-4, algorithm='elkan'). https://github.com/scikit-learn/scikit-learn/blob/8c6a045e46abe94e43a971d4f8042728addfd6a7/sklearn/preprocessing/_discretization.py#L208 But 'full' algotithm works better. Here are timings from two different stations. I also checked two different eps values since discretization does not need high precision and reducing eps parameter from 1e-4 (default) could be beneficial: Timings 1 (Ubuntu + Intel Core i5 8300H + 32GB) + 0.23.2 ![Timings 1 (Ubuntu + Intel Core i5 8300H + 32GB)](https://user-images.githubusercontent.com/36483986/105574249-20e1ad00-5d5b-11eb-9dc0-ffa2a1301903.png) Timings 2 (MacOS + Intel Core i7 7820HQ + 16GB) + 0.23.2 ![Timings 2 (MacOS + Intel Core i7 7820HQ + 16GB) ](https://user-images.githubusercontent.com/36483986/105574363-fa704180-5d5b-11eb-9f0e-b43e901211fc.png) Kaggle Kernel + 0.23.2 ![time_kaggle](https://user-images.githubusercontent.com/36483986/105575737-a5d1c400-5d65-11eb-83ef-9e55ce8a4db1.png) In colab (0.22.2.post1) behavior is different ![timings_colab](https://user-images.githubusercontent.com/36483986/105575483-e03a6180-5d63-11eb-8af7-354441db4f21.png) So, I guess something changed after 0.22.2. Description of 'elkan' method states > The “elkan” variation is more efficient on data with well-defined clusters, by using the triangle inequality. However it’s more memory intensive due to the allocation of an extra array of shape (n_samples, n_clusters). But IMO, assumption that 1d array would have "well-defined" clusters is a bit naive. I also opened a feature request related to KBD [here](https://github.com/scikit-learn/scikit-learn/issues/19255). I would love to implement all these changes (and also some minor refactoring, like replacing format-strings with f-strings).
1.0
Change KMeans algorithm for KBinsDiscretizer from 'elkan' (default) to 'full' - In KBinsDiscretizer KMeans is used with default parameters (eps=1e-4, algorithm='elkan'). https://github.com/scikit-learn/scikit-learn/blob/8c6a045e46abe94e43a971d4f8042728addfd6a7/sklearn/preprocessing/_discretization.py#L208 But 'full' algotithm works better. Here are timings from two different stations. I also checked two different eps values since discretization does not need high precision and reducing eps parameter from 1e-4 (default) could be beneficial: Timings 1 (Ubuntu + Intel Core i5 8300H + 32GB) + 0.23.2 ![Timings 1 (Ubuntu + Intel Core i5 8300H + 32GB)](https://user-images.githubusercontent.com/36483986/105574249-20e1ad00-5d5b-11eb-9dc0-ffa2a1301903.png) Timings 2 (MacOS + Intel Core i7 7820HQ + 16GB) + 0.23.2 ![Timings 2 (MacOS + Intel Core i7 7820HQ + 16GB) ](https://user-images.githubusercontent.com/36483986/105574363-fa704180-5d5b-11eb-9f0e-b43e901211fc.png) Kaggle Kernel + 0.23.2 ![time_kaggle](https://user-images.githubusercontent.com/36483986/105575737-a5d1c400-5d65-11eb-83ef-9e55ce8a4db1.png) In colab (0.22.2.post1) behavior is different ![timings_colab](https://user-images.githubusercontent.com/36483986/105575483-e03a6180-5d63-11eb-8af7-354441db4f21.png) So, I guess something changed after 0.22.2. Description of 'elkan' method states > The “elkan” variation is more efficient on data with well-defined clusters, by using the triangle inequality. However it’s more memory intensive due to the allocation of an extra array of shape (n_samples, n_clusters). But IMO, assumption that 1d array would have "well-defined" clusters is a bit naive. I also opened a feature request related to KBD [here](https://github.com/scikit-learn/scikit-learn/issues/19255). I would love to implement all these changes (and also some minor refactoring, like replacing format-strings with f-strings).
process
change kmeans algorithm for kbinsdiscretizer from elkan default to full in kbinsdiscretizer kmeans is used with default parameters eps algorithm elkan but full algotithm works better here are timings from two different stations i also checked two different eps values since discretization does not need high precision and reducing eps parameter from default could be beneficial timings ubuntu intel core timings macos intel core kaggle kernel in colab behavior is different so i guess something changed after description of elkan method states the “elkan” variation is more efficient on data with well defined clusters by using the triangle inequality however it’s more memory intensive due to the allocation of an extra array of shape n samples n clusters but imo assumption that array would have well defined clusters is a bit naive i also opened a feature request related to kbd i would love to implement all these changes and also some minor refactoring like replacing format strings with f strings
1
18,675
24,594,063,940
IssuesEvent
2022-10-14 06:39:11
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[FHIR] Spatial span memory > JSON > Maximum consecutive failures > Need to correct the Spelling(typo) for the linkID
Bug Response datastore P3 Process: Fixed Process: Tested dev
Spatial span memory > JSON > Maximum consecutive failures > Need to correct the Spelling(typo) for the linkID, linkID should be - Maximum_Consecutive_Failures_spatial ![j1](https://user-images.githubusercontent.com/86007179/183687063-d75e8b04-d965-421f-ae18-f362a87eaa3b.png)
2.0
[FHIR] Spatial span memory > JSON > Maximum consecutive failures > Need to correct the Spelling(typo) for the linkID - Spatial span memory > JSON > Maximum consecutive failures > Need to correct the Spelling(typo) for the linkID, linkID should be - Maximum_Consecutive_Failures_spatial ![j1](https://user-images.githubusercontent.com/86007179/183687063-d75e8b04-d965-421f-ae18-f362a87eaa3b.png)
process
spatial span memory json maximum consecutive failures need to correct the spelling typo for the linkid spatial span memory json maximum consecutive failures need to correct the spelling typo for the linkid linkid should be maximum consecutive failures spatial
1
21,486
29,577,955,597
IssuesEvent
2023-06-07 01:37:48
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
blackbox workspace tests need a complete overhaul for maintainability
P4 type: process team-ExternalDeps stale
The set of tests under src/test/j.c.g/devtools/build/lib/blackbox is brittle and hard to maintain. - They depend on external repositories. So either have to keep those in sync with versions in //WORKSPACE or risk version skew. - we are not necessarily sharing the download cache with the main repo - version numbers and sha sums are embedded in java code. - the workspace rules are getting increasingly complex, where dependencies have to use deps patterns to download their deps - This degenerated in #9046 to needing to have a big workspace preamble in any test which uses the RepoWithRuleWriting pattern. - They dynamically generate content of WORKSPACE files which then are consumed in the same test - this is had to debug if you get the workspace rules wrong. These files could be generated by a BUILD rule and be data input to the tests. Some suggestions: - Replace the use of rules_pkg in the framework with a bazel source internal tar packager. #11183 - Restructure the tests so we can inspect the intermediate artifacts more easily.
1.0
blackbox workspace tests need a complete overhaul for maintainability - The set of tests under src/test/j.c.g/devtools/build/lib/blackbox is brittle and hard to maintain. - They depend on external repositories. So either have to keep those in sync with versions in //WORKSPACE or risk version skew. - we are not necessarily sharing the download cache with the main repo - version numbers and sha sums are embedded in java code. - the workspace rules are getting increasingly complex, where dependencies have to use deps patterns to download their deps - This degenerated in #9046 to needing to have a big workspace preamble in any test which uses the RepoWithRuleWriting pattern. - They dynamically generate content of WORKSPACE files which then are consumed in the same test - this is had to debug if you get the workspace rules wrong. These files could be generated by a BUILD rule and be data input to the tests. Some suggestions: - Replace the use of rules_pkg in the framework with a bazel source internal tar packager. #11183 - Restructure the tests so we can inspect the intermediate artifacts more easily.
process
blackbox workspace tests need a complete overhaul for maintainability the set of tests under src test j c g devtools build lib blackbox is brittle and hard to maintain they depend on external repositories so either have to keep those in sync with versions in workspace or risk version skew we are not necessarily sharing the download cache with the main repo version numbers and sha sums are embedded in java code the workspace rules are getting increasingly complex where dependencies have to use deps patterns to download their deps this degenerated in to needing to have a big workspace preamble in any test which uses the repowithrulewriting pattern they dynamically generate content of workspace files which then are consumed in the same test this is had to debug if you get the workspace rules wrong these files could be generated by a build rule and be data input to the tests some suggestions replace the use of rules pkg in the framework with a bazel source internal tar packager restructure the tests so we can inspect the intermediate artifacts more easily
1
35,605
2,791,491,429
IssuesEvent
2015-05-10 05:59:49
afollestad/cabinet-issue-tracker
https://api.github.com/repos/afollestad/cabinet-issue-tracker
opened
Use SortedList for RecyclerView file listing
enhancement high priority in progress
This will make updating the list with new info much easier (e.g. first run of `ls` to get file names, second run of `ls -l` to get detailed info such as file sizes). Also the animations will be taken care of, so that's nice. Noting these resources for myself while I work on this: Official demo https://github.com/android/platform_development/blob/master/samples/Support7Demos/src/com/example/android/supportv7/util/SortedListActivity.java Some useful info http://stackoverflow.com/questions/29795299/what-is-the-sortedlistt-working-with-recyclerview-adapter Docs http://developer.android.com/reference/android/support/v7/util/SortedList.html
1.0
Use SortedList for RecyclerView file listing - This will make updating the list with new info much easier (e.g. first run of `ls` to get file names, second run of `ls -l` to get detailed info such as file sizes). Also the animations will be taken care of, so that's nice. Noting these resources for myself while I work on this: Official demo https://github.com/android/platform_development/blob/master/samples/Support7Demos/src/com/example/android/supportv7/util/SortedListActivity.java Some useful info http://stackoverflow.com/questions/29795299/what-is-the-sortedlistt-working-with-recyclerview-adapter Docs http://developer.android.com/reference/android/support/v7/util/SortedList.html
non_process
use sortedlist for recyclerview file listing this will make updating the list with new info much easier e g first run of ls to get file names second run of ls l to get detailed info such as file sizes also the animations will be taken care of so that s nice noting these resources for myself while i work on this official demo some useful info docs
0
9,693
12,699,160,328
IssuesEvent
2020-06-22 14:28:16
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
Unify prisma introspect and prisma introspect --url wrt re-introspection
kind/improvement process/candidate topic: re-introspection
The output of `prisma introspect` and `prisma introspect --url <url>` should be the same. Currently only `prisma introspect` is able to trigger the re-introspection workflow and sends existing schema.prisma to the introspection engine. ``` @prisma/cli : 2.1.0-dev.54 Current platform : darwin Query Engine : query-engine 6b10f7bfb5c09d707016877e7ec2e0c35f26eb67 (at /Users/divyendusingh/.npm/_npx/30682/lib/node_modules/@prisma/cli/query-engine-darwin) Migration Engine : migration-engine-cli 6b10f7bfb5c09d707016877e7ec2e0c35f26eb67 (at /Users/divyendusingh/.npm/_npx/30682/lib/node_modules/@prisma/cli/migration-engine-darwin) Introspection Engine : introspection-core 845b52c148128544b1d41956f45d63a276c3dc7b (at /Users/divyendusingh/Documents/prisma/reintrospection-ci/binaries/ie-darwin, resolved by PRISMA_INTROSPECTION_ENGINE_BINARY) Format Binary : prisma-fmt 6b10f7bfb5c09d707016877e7ec2e0c35f26eb67 (at /Users/divyendusingh/.npm/_npx/30682/lib/node_modules/@prisma/cli/prisma-fmt-darwin) ``` Used re-introspection binary from https://github.com/prisma/prisma-engines/pull/809
1.0
Unify prisma introspect and prisma introspect --url wrt re-introspection - The output of `prisma introspect` and `prisma introspect --url <url>` should be the same. Currently only `prisma introspect` is able to trigger the re-introspection workflow and sends existing schema.prisma to the introspection engine. ``` @prisma/cli : 2.1.0-dev.54 Current platform : darwin Query Engine : query-engine 6b10f7bfb5c09d707016877e7ec2e0c35f26eb67 (at /Users/divyendusingh/.npm/_npx/30682/lib/node_modules/@prisma/cli/query-engine-darwin) Migration Engine : migration-engine-cli 6b10f7bfb5c09d707016877e7ec2e0c35f26eb67 (at /Users/divyendusingh/.npm/_npx/30682/lib/node_modules/@prisma/cli/migration-engine-darwin) Introspection Engine : introspection-core 845b52c148128544b1d41956f45d63a276c3dc7b (at /Users/divyendusingh/Documents/prisma/reintrospection-ci/binaries/ie-darwin, resolved by PRISMA_INTROSPECTION_ENGINE_BINARY) Format Binary : prisma-fmt 6b10f7bfb5c09d707016877e7ec2e0c35f26eb67 (at /Users/divyendusingh/.npm/_npx/30682/lib/node_modules/@prisma/cli/prisma-fmt-darwin) ``` Used re-introspection binary from https://github.com/prisma/prisma-engines/pull/809
process
unify prisma introspect and prisma introspect url wrt re introspection the output of prisma introspect and prisma introspect url should be the same currently only prisma introspect is able to trigger the re introspection workflow and sends existing schema prisma to the introspection engine prisma cli dev current platform darwin query engine query engine at users divyendusingh npm npx lib node modules prisma cli query engine darwin migration engine migration engine cli at users divyendusingh npm npx lib node modules prisma cli migration engine darwin introspection engine introspection core at users divyendusingh documents prisma reintrospection ci binaries ie darwin resolved by prisma introspection engine binary format binary prisma fmt at users divyendusingh npm npx lib node modules prisma cli prisma fmt darwin used re introspection binary from
1
6,686
9,806,689,958
IssuesEvent
2019-06-12 12:04:44
Open-EO/openeo-api
https://api.github.com/repos/Open-EO/openeo-api
closed
Improvements for process catalogue
process discovery
There are some more nice additions we could integrate into our process catalogue. - Numeric types could hold an additional unit of measurement for values - Maybe allow additional schemas than JSON (depending on mime-type?) Some of these ideas are borrowed from WPS2, which our description is already similar to. Example WPS2 response: http://geoprocessing.info/schemas/wps/1.0/examples/40_wpsDescribeProcess_response.xml
1.0
Improvements for process catalogue - There are some more nice additions we could integrate into our process catalogue. - Numeric types could hold an additional unit of measurement for values - Maybe allow additional schemas than JSON (depending on mime-type?) Some of these ideas are borrowed from WPS2, which our description is already similar to. Example WPS2 response: http://geoprocessing.info/schemas/wps/1.0/examples/40_wpsDescribeProcess_response.xml
process
improvements for process catalogue there are some more nice additions we could integrate into our process catalogue numeric types could hold an additional unit of measurement for values maybe allow additional schemas than json depending on mime type some of these ideas are borrowed from which our description is already similar to example response
1
12,551
14,976,853,583
IssuesEvent
2021-01-28 08:41:49
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[Hydra] Account lock functionality is failed in Some scenario
Bug Hydra P0 Process: Fixed Process: Tested QA Process: Tested dev
Steps:- 1. Open the application 2. Navigate to sign in screen 3. Enter valid email and Invalid password for 5 times 4. Verify account is locked 5. Sign in using email and temporary password 6. Once navigated to reset password screen, enter the temp password received and enter old password used in the past in new password and confirm password fields and click on submit and verify A/R:- Displaying error message and navigated to sign in screen E/R:- App should validate for password entered and user should be able to successfully reset password https://user-images.githubusercontent.com/60500517/104809361-c2747600-5812-11eb-9d2c-6d0a6103d392.mp4 Note:- Refer attached video for more information
3.0
[Hydra] Account lock functionality is failed in Some scenario - Steps:- 1. Open the application 2. Navigate to sign in screen 3. Enter valid email and Invalid password for 5 times 4. Verify account is locked 5. Sign in using email and temporary password 6. Once navigated to reset password screen, enter the temp password received and enter old password used in the past in new password and confirm password fields and click on submit and verify A/R:- Displaying error message and navigated to sign in screen E/R:- App should validate for password entered and user should be able to successfully reset password https://user-images.githubusercontent.com/60500517/104809361-c2747600-5812-11eb-9d2c-6d0a6103d392.mp4 Note:- Refer attached video for more information
process
account lock functionality is failed in some scenario steps open the application navigate to sign in screen enter valid email and invalid password for times verify account is locked sign in using email and temporary password once navigated to reset password screen enter the temp password received and enter old password used in the past in new password and confirm password fields and click on submit and verify a r displaying error message and navigated to sign in screen e r app should validate for password entered and user should be able to successfully reset password note refer attached video for more information
1
4,699
7,542,414,174
IssuesEvent
2018-04-17 12:57:13
amarbajric/EBUSA-AIM17
https://api.github.com/repos/amarbajric/EBUSA-AIM17
opened
Adapt BP based on new Data Model
BP business processes in progress
- Registration Process needs adaption based on User <--> Company Relation - ProcessValidation should be extended with ProcessModel Upload into the ProcessStore and renamed to "Processupload and Validation" - ProcessPurchase needs adaption especially for the DEV Team to set the structure, how the procedure should be handled when a company buys a process from the Store
1.0
Adapt BP based on new Data Model - - Registration Process needs adaption based on User <--> Company Relation - ProcessValidation should be extended with ProcessModel Upload into the ProcessStore and renamed to "Processupload and Validation" - ProcessPurchase needs adaption especially for the DEV Team to set the structure, how the procedure should be handled when a company buys a process from the Store
process
adapt bp based on new data model registration process needs adaption based on user company relation processvalidation should be extended with processmodel upload into the processstore and renamed to processupload and validation processpurchase needs adaption especially for the dev team to set the structure how the procedure should be handled when a company buys a process from the store
1
4,972
7,807,792,568
IssuesEvent
2018-06-11 18:02:36
decidim/decidim
https://api.github.com/repos/decidim/decidim
closed
User roles and participatory processes privacy
space: processes stale-issue type: discussion wontfix
# This is a Feature Proposal #### :tophat: Description Sometimes associations need to have private debates. It would be nice to have user roles and choose which ones could participate at each participatory process. USER WORKFLOW * System Admin could view, create, edit and delete organization user roles * Each user role would have an "Admin verification needed" field (checkbox) * User chooses a role at registration process (combo) * User confirms account and views role field at account section as... * "pending" if user role needs Admin verification * "accepted" otherwise * Admin starts verification process... * Admin could view and verify / change user roles as bulk operation * Admin verifies user role * User receives a confirmation message and views role as "accepted" at account section * If user edits role field and chooses a role that needs Admin verification, system would show a warning explaining that verification process would start again PARTICIPATORY PROCESS WORKFLOW * System Admin could choose default participatory process type: "public" or "private" * Admin could choose participatory process type at creation process (default one would be selected) * If admin chooses "public", everyone could see participatory process (anonymous also!) * If admin chooses "private", only registered users with selected roles could see participatory process * Edit participatory process type would not be allowed due to permissions issues #### :clipboard: Additional Data * ***Decidim deployment where you found the issue***: gem "decidim", "0.3.2"
1.0
User roles and participatory processes privacy - # This is a Feature Proposal #### :tophat: Description Sometimes associations need to have private debates. It would be nice to have user roles and choose which ones could participate at each participatory process. USER WORKFLOW * System Admin could view, create, edit and delete organization user roles * Each user role would have an "Admin verification needed" field (checkbox) * User chooses a role at registration process (combo) * User confirms account and views role field at account section as... * "pending" if user role needs Admin verification * "accepted" otherwise * Admin starts verification process... * Admin could view and verify / change user roles as bulk operation * Admin verifies user role * User receives a confirmation message and views role as "accepted" at account section * If user edits role field and chooses a role that needs Admin verification, system would show a warning explaining that verification process would start again PARTICIPATORY PROCESS WORKFLOW * System Admin could choose default participatory process type: "public" or "private" * Admin could choose participatory process type at creation process (default one would be selected) * If admin chooses "public", everyone could see participatory process (anonymous also!) * If admin chooses "private", only registered users with selected roles could see participatory process * Edit participatory process type would not be allowed due to permissions issues #### :clipboard: Additional Data * ***Decidim deployment where you found the issue***: gem "decidim", "0.3.2"
process
user roles and participatory processes privacy this is a feature proposal tophat description sometimes associations need to have private debates it would be nice to have user roles and choose which ones could participate at each participatory process user workflow system admin could view create edit and delete organization user roles each user role would have an admin verification needed field checkbox user chooses a role at registration process combo user confirms account and views role field at account section as pending if user role needs admin verification accepted otherwise admin starts verification process admin could view and verify change user roles as bulk operation admin verifies user role user receives a confirmation message and views role as accepted at account section if user edits role field and chooses a role that needs admin verification system would show a warning explaining that verification process would start again participatory process workflow system admin could choose default participatory process type public or private admin could choose participatory process type at creation process default one would be selected if admin chooses public everyone could see participatory process anonymous also if admin chooses private only registered users with selected roles could see participatory process edit participatory process type would not be allowed due to permissions issues clipboard additional data decidim deployment where you found the issue gem decidim
1
6,285
9,285,377,897
IssuesEvent
2019-03-21 06:51:11
omuskywalker/gitalk-comment
https://api.github.com/repos/omuskywalker/gitalk-comment
closed
XV6 Ch1 OS 組織 | OMU Skywalker
/xv6-1-process/ Gitalk
https://blog.omuskywalker.com/xv6-1-process/ OS 必須具備三項技能:多工、獨立及交流。 kernel 組織 Monolithic kernel:整個 OS 都位於 kernel 中,如此一來所有 system calls 都會在 kernel 中執行(XV6)。 好處 設計者不須決定 OS 的哪些部份不需要完整的硬體特權。 更方便的讓不同部份的 OS 去合作。 壞處 通常在不同部份的 OS 中的介面是複雜的。 這會容易讓開發者出錯。
1.0
XV6 Ch1 OS 組織 | OMU Skywalker - https://blog.omuskywalker.com/xv6-1-process/ OS 必須具備三項技能:多工、獨立及交流。 kernel 組織 Monolithic kernel:整個 OS 都位於 kernel 中,如此一來所有 system calls 都會在 kernel 中執行(XV6)。 好處 設計者不須決定 OS 的哪些部份不需要完整的硬體特權。 更方便的讓不同部份的 OS 去合作。 壞處 通常在不同部份的 OS 中的介面是複雜的。 這會容易讓開發者出錯。
process
os 組織 omu skywalker os 必須具備三項技能:多工、獨立及交流。 kernel 組織 monolithic kernel:整個 os 都位於 kernel 中,如此一來所有 system calls 都會在 kernel 中執行( )。 好處 設計者不須決定 os 的哪些部份不需要完整的硬體特權。 更方便的讓不同部份的 os 去合作。 壞處 通常在不同部份的 os 中的介面是複雜的。 這會容易讓開發者出錯。
1
165,821
6,286,892,845
IssuesEvent
2017-07-19 13:57:13
phil-mansfield/shellfish
https://api.github.com/repos/phil-mansfield/shellfish
opened
Generalize Gadget reader
bug priority: everything is on fire
Generalize Gadget reader so that it can read: - Standard DMO Gadget snapshots where DM particles are type 1. - Non-standard DMO Gadget snapshots where DM particles are type 0. - Non-standard DMO Gadget snapshots where DM particles are multiple types. - Standard hydro Gadget snapshots which contain gas particles with an arbitrary number of fields in the type 0 slot. - Non-standard hydro Gadget snapshots where the DM particles are type 0. Add configuration options which allow the user to specify this.
1.0
Generalize Gadget reader - Generalize Gadget reader so that it can read: - Standard DMO Gadget snapshots where DM particles are type 1. - Non-standard DMO Gadget snapshots where DM particles are type 0. - Non-standard DMO Gadget snapshots where DM particles are multiple types. - Standard hydro Gadget snapshots which contain gas particles with an arbitrary number of fields in the type 0 slot. - Non-standard hydro Gadget snapshots where the DM particles are type 0. Add configuration options which allow the user to specify this.
non_process
generalize gadget reader generalize gadget reader so that it can read standard dmo gadget snapshots where dm particles are type non standard dmo gadget snapshots where dm particles are type non standard dmo gadget snapshots where dm particles are multiple types standard hydro gadget snapshots which contain gas particles with an arbitrary number of fields in the type slot non standard hydro gadget snapshots where the dm particles are type add configuration options which allow the user to specify this
0
20,037
26,520,585,044
IssuesEvent
2023-01-19 02:00:08
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Thu, 19 Jan 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events There is no result ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### PTA-Det: Point Transformer Associating Point cloud and Image for 3D Object Detection - **Authors:** Rui Wan, Tianyun Zhao, Wei Zhao - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2301.07301 - **Pdf link:** https://arxiv.org/pdf/2301.07301 - **Abstract** In autonomous driving, 3D object detection based on multi-modal data has become an indispensable approach when facing complex environments around the vehicle. During multi-modal detection, LiDAR and camera are simultaneously applied for capturing and modeling. However, due to the intrinsic discrepancies between the LiDAR point and camera image, the fusion of the data for object detection encounters a series of problems. Most multi-modal detection methods perform even worse than LiDAR-only methods. In this investigation, we propose a method named PTA-Det to improve the performance of multi-modal detection. Accompanied by PTA-Det, a Pseudo Point Cloud Generation Network is proposed, which can convert image information including texture and semantic features by pseudo points. Thereafter, through a transformer-based Point Fusion Transition (PFT) module, the features of LiDAR points and pseudo points from image can be deeply fused under a unified point-based representation. The combination of these modules can conquer the major obstacle in feature fusion across modalities and realizes a complementary and discriminative representation for proposal generation. Extensive experiments on the KITTI dataset show the PTA-Det achieves a competitive result and support its effectiveness. ### Face Recognition in the age of CLIP & Billion image datasets - **Authors:** Aaditya Bhat, Shrey Jain - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.07315 - **Pdf link:** https://arxiv.org/pdf/2301.07315 - **Abstract** CLIP (Contrastive Language-Image Pre-training) models developed by OpenAI have achieved outstanding results on various image recognition and retrieval tasks, displaying strong zero-shot performance. This means that they are able to perform effectively on tasks for which they have not been explicitly trained. Inspired by the success of OpenAI CLIP, a new publicly available dataset called LAION-5B was collected which resulted in the development of open ViT-H/14, ViT-G/14 models that outperform the OpenAI L/14 model. The LAION-5B dataset also released an approximate nearest neighbor index, with a web interface for search & subset creation. In this paper, we evaluate the performance of various CLIP models as zero-shot face recognizers. Our findings show that CLIP models perform well on face recognition tasks, but increasing the size of the CLIP model does not necessarily lead to improved accuracy. Additionally, we investigate the robustness of CLIP models against data poisoning attacks by testing their performance on poisoned data. Through this analysis, we aim to understand the potential consequences and misuse of search engines built using CLIP models, which could potentially function as unintentional face recognition engines. ### FPANet: Frequency-based Video Demoireing using Frame-level Post Alignment - **Authors:** Gyeongrok Oh, Heon Gu, Sangpil Kim, Jinkyu Kim - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2301.07330 - **Pdf link:** https://arxiv.org/pdf/2301.07330 - **Abstract** Interference between overlapping gird patterns creates moire patterns, degrading the visual quality of an image that captures a screen of a digital display device by an ordinary digital camera. Removing such moire patterns is challenging due to their complex patterns of diverse sizes and color distortions. Existing approaches mainly focus on filtering out in the spatial domain, failing to remove a large-scale moire pattern. In this paper, we propose a novel model called FPANet that learns filters in both frequency and spatial domains, improving the restoration quality by removing various sizes of moire patterns. To further enhance, our model takes multiple consecutive frames, learning to extract frame-invariant content features and outputting better quality temporally consistent images. We demonstrate the effectiveness of our proposed method with a publicly available large-scale dataset, observing that ours outperforms the state-of-the-art approaches, including ESDNet, VDmoire, MBCNN, WDNet, UNet, and DMCNN, in terms of the image and video quality metrics, such as PSNR, SSIM, LPIPS, FVD, and FSIM. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### Creating awareness about security and safety on highways to mitigate wildlife-vehicle collisions by detecting and recognizing wildlife fences using deep learning and drone technology - **Authors:** Irene Nandutu, Marcellin Atemkeng, Patrice Okouma, Nokubonga Mgqatsa, Jean Louis Ebongue Kedieng Fendji, Franklin Tchakounte - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2301.07174 - **Pdf link:** https://arxiv.org/pdf/2301.07174 - **Abstract** In South Africa, it is a common practice for people to leave their vehicles beside the road when traveling long distances for a short comfort break. This practice might increase human encounters with wildlife, threatening their security and safety. Here we intend to create awareness about wildlife fencing, using drone technology and computer vision algorithms to recognize and detect wildlife fences and associated features. We collected data at Amakhala and Lalibela private game reserves in the Eastern Cape, South Africa. We used wildlife electric fence data containing single and double fences for the classification task. Additionally, we used aerial and still annotated images extracted from the drone and still cameras for the segmentation and detection tasks. The model training results from the drone camera outperformed those from the still camera. Generally, poor model performance is attributed to (1) over-decompression of images and (2) the ability of drone cameras to capture more details on images for the machine learning model to learn as compared to still cameras that capture only the front view of the wildlife fence. We argue that our model can be deployed on client-edge devices to inform people about the presence and significance of wildlife fencing, which minimizes human encounters with wildlife, thereby mitigating wildlife-vehicle collisions. ## Keyword: RAW ### Effective End-to-End Vision Language Pretraining with Semantic Visual Loss - **Authors:** Xiaofeng Yang, Fayao Liu, Guosheng Lin - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.07236 - **Pdf link:** https://arxiv.org/pdf/2301.07236 - **Abstract** Current vision language pretraining models are dominated by methods using region visual features extracted from object detectors. Given their good performance, the extract-then-process pipeline significantly restricts the inference speed and therefore limits their real-world use cases. However, training vision language models from raw image pixels is difficult, as the raw image pixels give much less prior knowledge than region features. In this paper, we systematically study how to leverage auxiliary visual pretraining tasks to help training end-to-end vision language models. We introduce three types of visual losses that enable much faster convergence and better finetuning accuracy. Compared with region feature models, our end-to-end models could achieve similar or better performance on downstream tasks and run more than 10 times faster during inference. Compared with other end-to-end models, our proposed method could achieve similar or better performance when pretrained for only 10% of the pretraining GPU hours. ## Keyword: raw image ### Effective End-to-End Vision Language Pretraining with Semantic Visual Loss - **Authors:** Xiaofeng Yang, Fayao Liu, Guosheng Lin - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.07236 - **Pdf link:** https://arxiv.org/pdf/2301.07236 - **Abstract** Current vision language pretraining models are dominated by methods using region visual features extracted from object detectors. Given their good performance, the extract-then-process pipeline significantly restricts the inference speed and therefore limits their real-world use cases. However, training vision language models from raw image pixels is difficult, as the raw image pixels give much less prior knowledge than region features. In this paper, we systematically study how to leverage auxiliary visual pretraining tasks to help training end-to-end vision language models. We introduce three types of visual losses that enable much faster convergence and better finetuning accuracy. Compared with region feature models, our end-to-end models could achieve similar or better performance on downstream tasks and run more than 10 times faster during inference. Compared with other end-to-end models, our proposed method could achieve similar or better performance when pretrained for only 10% of the pretraining GPU hours.
2.0
New submissions for Thu, 19 Jan 23 - ## Keyword: events There is no result ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### PTA-Det: Point Transformer Associating Point cloud and Image for 3D Object Detection - **Authors:** Rui Wan, Tianyun Zhao, Wei Zhao - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2301.07301 - **Pdf link:** https://arxiv.org/pdf/2301.07301 - **Abstract** In autonomous driving, 3D object detection based on multi-modal data has become an indispensable approach when facing complex environments around the vehicle. During multi-modal detection, LiDAR and camera are simultaneously applied for capturing and modeling. However, due to the intrinsic discrepancies between the LiDAR point and camera image, the fusion of the data for object detection encounters a series of problems. Most multi-modal detection methods perform even worse than LiDAR-only methods. In this investigation, we propose a method named PTA-Det to improve the performance of multi-modal detection. Accompanied by PTA-Det, a Pseudo Point Cloud Generation Network is proposed, which can convert image information including texture and semantic features by pseudo points. Thereafter, through a transformer-based Point Fusion Transition (PFT) module, the features of LiDAR points and pseudo points from image can be deeply fused under a unified point-based representation. The combination of these modules can conquer the major obstacle in feature fusion across modalities and realizes a complementary and discriminative representation for proposal generation. Extensive experiments on the KITTI dataset show the PTA-Det achieves a competitive result and support its effectiveness. ### Face Recognition in the age of CLIP & Billion image datasets - **Authors:** Aaditya Bhat, Shrey Jain - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.07315 - **Pdf link:** https://arxiv.org/pdf/2301.07315 - **Abstract** CLIP (Contrastive Language-Image Pre-training) models developed by OpenAI have achieved outstanding results on various image recognition and retrieval tasks, displaying strong zero-shot performance. This means that they are able to perform effectively on tasks for which they have not been explicitly trained. Inspired by the success of OpenAI CLIP, a new publicly available dataset called LAION-5B was collected which resulted in the development of open ViT-H/14, ViT-G/14 models that outperform the OpenAI L/14 model. The LAION-5B dataset also released an approximate nearest neighbor index, with a web interface for search & subset creation. In this paper, we evaluate the performance of various CLIP models as zero-shot face recognizers. Our findings show that CLIP models perform well on face recognition tasks, but increasing the size of the CLIP model does not necessarily lead to improved accuracy. Additionally, we investigate the robustness of CLIP models against data poisoning attacks by testing their performance on poisoned data. Through this analysis, we aim to understand the potential consequences and misuse of search engines built using CLIP models, which could potentially function as unintentional face recognition engines. ### FPANet: Frequency-based Video Demoireing using Frame-level Post Alignment - **Authors:** Gyeongrok Oh, Heon Gu, Sangpil Kim, Jinkyu Kim - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2301.07330 - **Pdf link:** https://arxiv.org/pdf/2301.07330 - **Abstract** Interference between overlapping gird patterns creates moire patterns, degrading the visual quality of an image that captures a screen of a digital display device by an ordinary digital camera. Removing such moire patterns is challenging due to their complex patterns of diverse sizes and color distortions. Existing approaches mainly focus on filtering out in the spatial domain, failing to remove a large-scale moire pattern. In this paper, we propose a novel model called FPANet that learns filters in both frequency and spatial domains, improving the restoration quality by removing various sizes of moire patterns. To further enhance, our model takes multiple consecutive frames, learning to extract frame-invariant content features and outputting better quality temporally consistent images. We demonstrate the effectiveness of our proposed method with a publicly available large-scale dataset, observing that ours outperforms the state-of-the-art approaches, including ESDNet, VDmoire, MBCNN, WDNet, UNet, and DMCNN, in terms of the image and video quality metrics, such as PSNR, SSIM, LPIPS, FVD, and FSIM. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### Creating awareness about security and safety on highways to mitigate wildlife-vehicle collisions by detecting and recognizing wildlife fences using deep learning and drone technology - **Authors:** Irene Nandutu, Marcellin Atemkeng, Patrice Okouma, Nokubonga Mgqatsa, Jean Louis Ebongue Kedieng Fendji, Franklin Tchakounte - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2301.07174 - **Pdf link:** https://arxiv.org/pdf/2301.07174 - **Abstract** In South Africa, it is a common practice for people to leave their vehicles beside the road when traveling long distances for a short comfort break. This practice might increase human encounters with wildlife, threatening their security and safety. Here we intend to create awareness about wildlife fencing, using drone technology and computer vision algorithms to recognize and detect wildlife fences and associated features. We collected data at Amakhala and Lalibela private game reserves in the Eastern Cape, South Africa. We used wildlife electric fence data containing single and double fences for the classification task. Additionally, we used aerial and still annotated images extracted from the drone and still cameras for the segmentation and detection tasks. The model training results from the drone camera outperformed those from the still camera. Generally, poor model performance is attributed to (1) over-decompression of images and (2) the ability of drone cameras to capture more details on images for the machine learning model to learn as compared to still cameras that capture only the front view of the wildlife fence. We argue that our model can be deployed on client-edge devices to inform people about the presence and significance of wildlife fencing, which minimizes human encounters with wildlife, thereby mitigating wildlife-vehicle collisions. ## Keyword: RAW ### Effective End-to-End Vision Language Pretraining with Semantic Visual Loss - **Authors:** Xiaofeng Yang, Fayao Liu, Guosheng Lin - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.07236 - **Pdf link:** https://arxiv.org/pdf/2301.07236 - **Abstract** Current vision language pretraining models are dominated by methods using region visual features extracted from object detectors. Given their good performance, the extract-then-process pipeline significantly restricts the inference speed and therefore limits their real-world use cases. However, training vision language models from raw image pixels is difficult, as the raw image pixels give much less prior knowledge than region features. In this paper, we systematically study how to leverage auxiliary visual pretraining tasks to help training end-to-end vision language models. We introduce three types of visual losses that enable much faster convergence and better finetuning accuracy. Compared with region feature models, our end-to-end models could achieve similar or better performance on downstream tasks and run more than 10 times faster during inference. Compared with other end-to-end models, our proposed method could achieve similar or better performance when pretrained for only 10% of the pretraining GPU hours. ## Keyword: raw image ### Effective End-to-End Vision Language Pretraining with Semantic Visual Loss - **Authors:** Xiaofeng Yang, Fayao Liu, Guosheng Lin - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.07236 - **Pdf link:** https://arxiv.org/pdf/2301.07236 - **Abstract** Current vision language pretraining models are dominated by methods using region visual features extracted from object detectors. Given their good performance, the extract-then-process pipeline significantly restricts the inference speed and therefore limits their real-world use cases. However, training vision language models from raw image pixels is difficult, as the raw image pixels give much less prior knowledge than region features. In this paper, we systematically study how to leverage auxiliary visual pretraining tasks to help training end-to-end vision language models. We introduce three types of visual losses that enable much faster convergence and better finetuning accuracy. Compared with region feature models, our end-to-end models could achieve similar or better performance on downstream tasks and run more than 10 times faster during inference. Compared with other end-to-end models, our proposed method could achieve similar or better performance when pretrained for only 10% of the pretraining GPU hours.
process
new submissions for thu jan keyword events there is no result keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp pta det point transformer associating point cloud and image for object detection authors rui wan tianyun zhao wei zhao subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract in autonomous driving object detection based on multi modal data has become an indispensable approach when facing complex environments around the vehicle during multi modal detection lidar and camera are simultaneously applied for capturing and modeling however due to the intrinsic discrepancies between the lidar point and camera image the fusion of the data for object detection encounters a series of problems most multi modal detection methods perform even worse than lidar only methods in this investigation we propose a method named pta det to improve the performance of multi modal detection accompanied by pta det a pseudo point cloud generation network is proposed which can convert image information including texture and semantic features by pseudo points thereafter through a transformer based point fusion transition pft module the features of lidar points and pseudo points from image can be deeply fused under a unified point based representation the combination of these modules can conquer the major obstacle in feature fusion across modalities and realizes a complementary and discriminative representation for proposal generation extensive experiments on the kitti dataset show the pta det achieves a competitive result and support its effectiveness face recognition in the age of clip billion image datasets authors aaditya bhat shrey jain subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract clip contrastive language image pre training models developed by openai have achieved outstanding results on various image recognition and retrieval tasks displaying strong zero shot performance this means that they are able to perform effectively on tasks for which they have not been explicitly trained inspired by the success of openai clip a new publicly available dataset called laion was collected which resulted in the development of open vit h vit g models that outperform the openai l model the laion dataset also released an approximate nearest neighbor index with a web interface for search subset creation in this paper we evaluate the performance of various clip models as zero shot face recognizers our findings show that clip models perform well on face recognition tasks but increasing the size of the clip model does not necessarily lead to improved accuracy additionally we investigate the robustness of clip models against data poisoning attacks by testing their performance on poisoned data through this analysis we aim to understand the potential consequences and misuse of search engines built using clip models which could potentially function as unintentional face recognition engines fpanet frequency based video demoireing using frame level post alignment authors gyeongrok oh heon gu sangpil kim jinkyu kim subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract interference between overlapping gird patterns creates moire patterns degrading the visual quality of an image that captures a screen of a digital display device by an ordinary digital camera removing such moire patterns is challenging due to their complex patterns of diverse sizes and color distortions existing approaches mainly focus on filtering out in the spatial domain failing to remove a large scale moire pattern in this paper we propose a novel model called fpanet that learns filters in both frequency and spatial domains improving the restoration quality by removing various sizes of moire patterns to further enhance our model takes multiple consecutive frames learning to extract frame invariant content features and outputting better quality temporally consistent images we demonstrate the effectiveness of our proposed method with a publicly available large scale dataset observing that ours outperforms the state of the art approaches including esdnet vdmoire mbcnn wdnet unet and dmcnn in terms of the image and video quality metrics such as psnr ssim lpips fvd and fsim keyword image signal processing there is no result keyword image signal process there is no result keyword compression creating awareness about security and safety on highways to mitigate wildlife vehicle collisions by detecting and recognizing wildlife fences using deep learning and drone technology authors irene nandutu marcellin atemkeng patrice okouma nokubonga mgqatsa jean louis ebongue kedieng fendji franklin tchakounte subjects computer vision and pattern recognition cs cv machine learning cs lg robotics cs ro arxiv link pdf link abstract in south africa it is a common practice for people to leave their vehicles beside the road when traveling long distances for a short comfort break this practice might increase human encounters with wildlife threatening their security and safety here we intend to create awareness about wildlife fencing using drone technology and computer vision algorithms to recognize and detect wildlife fences and associated features we collected data at amakhala and lalibela private game reserves in the eastern cape south africa we used wildlife electric fence data containing single and double fences for the classification task additionally we used aerial and still annotated images extracted from the drone and still cameras for the segmentation and detection tasks the model training results from the drone camera outperformed those from the still camera generally poor model performance is attributed to over decompression of images and the ability of drone cameras to capture more details on images for the machine learning model to learn as compared to still cameras that capture only the front view of the wildlife fence we argue that our model can be deployed on client edge devices to inform people about the presence and significance of wildlife fencing which minimizes human encounters with wildlife thereby mitigating wildlife vehicle collisions keyword raw effective end to end vision language pretraining with semantic visual loss authors xiaofeng yang fayao liu guosheng lin subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract current vision language pretraining models are dominated by methods using region visual features extracted from object detectors given their good performance the extract then process pipeline significantly restricts the inference speed and therefore limits their real world use cases however training vision language models from raw image pixels is difficult as the raw image pixels give much less prior knowledge than region features in this paper we systematically study how to leverage auxiliary visual pretraining tasks to help training end to end vision language models we introduce three types of visual losses that enable much faster convergence and better finetuning accuracy compared with region feature models our end to end models could achieve similar or better performance on downstream tasks and run more than times faster during inference compared with other end to end models our proposed method could achieve similar or better performance when pretrained for only of the pretraining gpu hours keyword raw image effective end to end vision language pretraining with semantic visual loss authors xiaofeng yang fayao liu guosheng lin subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract current vision language pretraining models are dominated by methods using region visual features extracted from object detectors given their good performance the extract then process pipeline significantly restricts the inference speed and therefore limits their real world use cases however training vision language models from raw image pixels is difficult as the raw image pixels give much less prior knowledge than region features in this paper we systematically study how to leverage auxiliary visual pretraining tasks to help training end to end vision language models we introduce three types of visual losses that enable much faster convergence and better finetuning accuracy compared with region feature models our end to end models could achieve similar or better performance on downstream tasks and run more than times faster during inference compared with other end to end models our proposed method could achieve similar or better performance when pretrained for only of the pretraining gpu hours
1
5,232
8,033,076,877
IssuesEvent
2018-07-28 23:37:25
JonathanBelanger/DECaxp
https://api.github.com/repos/JonathanBelanger/DECaxp
closed
Compile issue.
in process changes
Hello Jonathan! I am trying to compile DECaxp under FreeBSD. I 've got: In file included from /emulator/Alpha/prg/DECaxp/src/comutl/AXP_VirtualDisk.c:32:0: /emulator/Alpha/prg/DECaxp/src/comutl/AXP_VHDX.h:265:5: error: unknown type name 'AXP_BLOCK_DSC' AXP_BLOCK_DSC header; ^~~~~~~~~~~~~ /emulator/Alpha/prg/DECaxp/src/comutl/AXP_VirtualDisk.c: In function 'AXP_VHD_Open': /emulator/Alpha/prg/DECaxp/src/comutl/AXP_VirtualDisk.c:289:12: warning: implicit declaration of function '_AXP_RAW_Open' [-Wimplicit-function-declaration] retVal = _AXP_RAW_Open( ^~~~~~~~~~~~~ /emulator/Alpha/prg/DECaxp/src/comutl/AXP_VirtualDisk.c: In function 'AXP_VHD_CloseHandle': /emulator/Alpha/prg/DECaxp/src/comutl/AXP_VirtualDisk.c:338:22: error: request for member 'type' in something not a structure or union if ((vhdx->header.type == AXP_VHDX_BLK) && ^ /emulator/Alpha/prg/DECaxp/src/comutl/AXP_VirtualDisk.c:339:15: error: request for member 'size' in something not a structure or union (vhdx->header.size == sizeof(AXP_VHDX_Handle))) ^
1.0
Compile issue. - Hello Jonathan! I am trying to compile DECaxp under FreeBSD. I 've got: In file included from /emulator/Alpha/prg/DECaxp/src/comutl/AXP_VirtualDisk.c:32:0: /emulator/Alpha/prg/DECaxp/src/comutl/AXP_VHDX.h:265:5: error: unknown type name 'AXP_BLOCK_DSC' AXP_BLOCK_DSC header; ^~~~~~~~~~~~~ /emulator/Alpha/prg/DECaxp/src/comutl/AXP_VirtualDisk.c: In function 'AXP_VHD_Open': /emulator/Alpha/prg/DECaxp/src/comutl/AXP_VirtualDisk.c:289:12: warning: implicit declaration of function '_AXP_RAW_Open' [-Wimplicit-function-declaration] retVal = _AXP_RAW_Open( ^~~~~~~~~~~~~ /emulator/Alpha/prg/DECaxp/src/comutl/AXP_VirtualDisk.c: In function 'AXP_VHD_CloseHandle': /emulator/Alpha/prg/DECaxp/src/comutl/AXP_VirtualDisk.c:338:22: error: request for member 'type' in something not a structure or union if ((vhdx->header.type == AXP_VHDX_BLK) && ^ /emulator/Alpha/prg/DECaxp/src/comutl/AXP_VirtualDisk.c:339:15: error: request for member 'size' in something not a structure or union (vhdx->header.size == sizeof(AXP_VHDX_Handle))) ^
process
compile issue hello jonathan i am trying to compile decaxp under freebsd i ve got in file included from emulator alpha prg decaxp src comutl axp virtualdisk c emulator alpha prg decaxp src comutl axp vhdx h error unknown type name axp block dsc axp block dsc header emulator alpha prg decaxp src comutl axp virtualdisk c in function axp vhd open emulator alpha prg decaxp src comutl axp virtualdisk c warning implicit declaration of function axp raw open retval axp raw open emulator alpha prg decaxp src comutl axp virtualdisk c in function axp vhd closehandle emulator alpha prg decaxp src comutl axp virtualdisk c error request for member type in something not a structure or union if vhdx header type axp vhdx blk emulator alpha prg decaxp src comutl axp virtualdisk c error request for member size in something not a structure or union vhdx header size sizeof axp vhdx handle
1
9,422
12,416,855,583
IssuesEvent
2020-05-22 19:10:24
NationalSecurityAgency/ghidra
https://api.github.com/repos/NationalSecurityAgency/ghidra
closed
MSVC Float comparison to 0.0 not decompiled properly
Feature: Processor/x86 Type: Bug
**Describe the bug** On 32 bit x86 architecture (and possibly 64bit), MSVC generates a specific pattern to test equality to zero of floating point numbers (see: https://stackoverflow.com/a/46772747). This pattern is incorrectly decompiled. **To Reproduce** decompile this pattern: ``` 0f 57 c0 XORPS XMM0,XMM0 0f 2e c8 UCOMISS XMM1,XMM0 9f LAHF f6 c4 44 TEST AH,44h 7b 28 JNP loc_equal_zero ``` The code is decompiled as: `if(false) { ... }` Once the dead code removal is deactivated. **Expected behavior** The expected behavior is: `if ( value_in_XMM1 != 0.0 ) { ... }` **Environment:** - OS: Windows 10 - Version 9.0
1.0
MSVC Float comparison to 0.0 not decompiled properly - **Describe the bug** On 32 bit x86 architecture (and possibly 64bit), MSVC generates a specific pattern to test equality to zero of floating point numbers (see: https://stackoverflow.com/a/46772747). This pattern is incorrectly decompiled. **To Reproduce** decompile this pattern: ``` 0f 57 c0 XORPS XMM0,XMM0 0f 2e c8 UCOMISS XMM1,XMM0 9f LAHF f6 c4 44 TEST AH,44h 7b 28 JNP loc_equal_zero ``` The code is decompiled as: `if(false) { ... }` Once the dead code removal is deactivated. **Expected behavior** The expected behavior is: `if ( value_in_XMM1 != 0.0 ) { ... }` **Environment:** - OS: Windows 10 - Version 9.0
process
msvc float comparison to not decompiled properly describe the bug on bit architecture and possibly msvc generates a specific pattern to test equality to zero of floating point numbers see this pattern is incorrectly decompiled to reproduce decompile this pattern xorps ucomiss lahf test ah jnp loc equal zero the code is decompiled as if false once the dead code removal is deactivated expected behavior the expected behavior is if value in environment os windows version
1
15,432
19,622,518,645
IssuesEvent
2022-01-07 08:56:42
symfony/symfony
https://api.github.com/repos/symfony/symfony
closed
[Process] arrays in `$env` result in `Array to string conversion`
Bug Process Status: Needs Review
### Symfony version(s) affected 4.4.35, 4.4.36 ### Description Since https://github.com/symfony/symfony/commit/11ccbcd24c2e2d3f4e5897f159d1c1d23fc62a67, `Process->run()` prints errors "Array to string conversion" when `$env` contains array. Before this commit, `$env` was cleaned of arrays, but this got removed. In particular [this removed `array_filter()` on `$env`](https://github.com/symfony/symfony/commit/11ccbcd24c2e2d3f4e5897f159d1c1d23fc62a67#diff-edd51b05bf438c7cdca442e25d68f41dd0a51952a69fa0c5fa912dc31d833ef4L1189-L1191) introduces the bug. Similar issues were reported e.g. in https://github.com/symfony/symfony/issues/44197, but there only `$argv`/`$argc` were discussed and handled. What's the reason for removing the `array_filter()` that cleaned `$env` from arrays? Below you find an example of `$env` (dumped [here in `Process.php`](https://github.com/symfony/symfony/blob/v4.4.35/src/Symfony/Component/Process/Process.php#L342) that causes the described bug. The dump is from [userli's checkpassword symfony command](https://github.com/systemli/userli/blob/main/src/Command/CheckPasswordCommand.php). Also see https://github.com/systemli/userli/issues/341 for details. <details> <summary> Fold/Collapse var_dump($env) </summary> <pre> /vagrant/vendor/symfony/process/Process.php:342: array(52) { [0] => array(27) { 'SHELL' => string(9) "/bin/bash" 'SSH_AUTH_SOCK' => string(31) "/tmp/ssh-BHtDglisnt/agent.29483" 'PWD' => string(8) "/vagrant" 'LOGNAME' => string(7) "vagrant" 'XDG_SESSION_TYPE' => string(3) "tty" 'MOTD_SHOWN' => string(3) "pam" 'HOME' => string(13) "/home/vagrant" 'LANG' => string(7) "C.UTF-8" 'LS_COLORS' => string(1508) "rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01"... 'SSH_CONNECTION' => string(38) "192.168.121.1 36092 192.168.121.169 22" 'NPM_CONFIG_PREFIX' => string(18) "/usr/local/lib/npm" 'XDG_SESSION_CLASS' => string(4) "user" 'TERM' => string(11) "xterm-kitty" 'USER' => string(7) "vagrant" 'SHLVL' => string(1) "0" 'XDG_SESSION_ID' => string(2) "11" 'XDG_RUNTIME_DIR' => string(14) "/run/user/1000" 'NODE_PATH' => string(36) ":/usr/local/lib/npm/lib/node_modules" 'SSH_CLIENT' => string(22) "192.168.121.1 36092 22" 'LC_ALL' => string(11) "en_US.UTF-8" 'PATH' => string(79) "/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/lib/npm/bin" 'SSH_TTY' => string(10) "/dev/pts/1" 'OLDPWD' => string(13) "/home/vagrant" '_' => string(11) "bin/console" 'LINES' => string(2) "50" 'COLUMNS' => string(2) "80" 'SHELL_VERBOSITY' => string(1) "0" } 'USER' => string(17) "admin@example.org" 'HOME' => string(28) "/var/vmail/example.org/admin" 'userdb_uid' => int(5000) 'userdb_gid' => int(5000) 'EXTRA' => string(21) "userdb_uid userdb_gid" 'APP_NAME' => string(6) "Userli" 'APP_URL' => string(25) "https://users.example.org" 'PROJECT_NAME' => string(11) "example.org" 'PROJECT_URL' => string(23) "https://www.example.org" 'SENDER_ADDRESS' => string(17) "admin@example.org" 'NOTIFICATION_ADDRESS' => string(22) "monitoring@example.org" 'SEND_MAIL' => string(1) "1" 'LOCALE' => string(2) "en" 'HAS_SINA_BOX' => string(1) "0" 'MAIL_CRYPT' => string(1) "2" 'DOVECOT_MAIL_LOCATION' => string(10) "/var/vmail" 'DOVECOT_MAIL_UID' => string(4) "5000" 'DOVECOT_MAIL_GID' => string(4) "5000" 'WEBMAIL_URL' => string(0) "" 'WKD_DIRECTORY' => string(36) "/var/www/html/.well-known/openpgpkey" 'WKD_FORMAT' => string(8) "advanced" 'APP_ENV' => string(3) "dev" 'APP_SECRET' => string(32) "165e25e3846534bb4665d7078a851c0b" 'MAILER_URL' => string(42) "smtp://localhost:25?encryption=&auth_mode=" 'MAILER_DELIVERY_ADDRESS' => string(17) "admin@example.org" 'DATABASE_URL' => string(71) "mysql://mail:password@127.0.0.1:3306/mail?serverVersion=mariadb-10.3.23" 'SYMFONY_DOTENV_VARS' => string(278) "APP_NAME,APP_URL,PROJECT_NAME,PROJECT_URL,SENDER_ADDRESS,NOTIFICATION_ADDRESS,SEND_MAIL,LOCALE,HAS_SINA_BOX,MAIL_CRYPT,DOVECOT_MAIL_LOCATION,DOVECOT_MAIL_UID,DOVECOT_MAIL_GID,WEBMAIL_URL,WKD_DIRECTORY,WKD_FORMAT,APP_ENV,APP_SECRET,MAILER_URL,MAILER_DELIVERY_ADDRESS,DATABASE_URL" 'APP_DEBUG' => string(1) "1" 'SHELL_VERBOSITY' => int(0) 'SHELL' => string(9) "/bin/bash" 'SSH_AUTH_SOCK' => string(31) "/tmp/ssh-BHtDglisnt/agent.29483" 'PWD' => string(8) "/vagrant" 'LOGNAME' => string(7) "vagrant" 'XDG_SESSION_TYPE' => string(3) "tty" 'MOTD_SHOWN' => string(3) "pam" 'LANG' => string(7) "C.UTF-8" 'LS_COLORS' => string(1508) "rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01"... 'SSH_CONNECTION' => string(38) "192.168.121.1 36092 192.168.121.169 22" 'NPM_CONFIG_PREFIX' => string(18) "/usr/local/lib/npm" 'XDG_SESSION_CLASS' => string(4) "user" 'TERM' => string(11) "xterm-kitty" 'SHLVL' => string(1) "0" 'XDG_SESSION_ID' => string(2) "11" 'XDG_RUNTIME_DIR' => string(14) "/run/user/1000" 'NODE_PATH' => string(36) ":/usr/local/lib/npm/lib/node_modules" 'SSH_CLIENT' => string(22) "192.168.121.1 36092 22" 'LC_ALL' => string(11) "en_US.UTF-8" 'PATH' => string(79) "/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/lib/npm/bin" 'SSH_TTY' => string(10) "/dev/pts/1" 'OLDPWD' => string(13) "/home/vagrant" '_' => string(11) "bin/console" } </pre> </details> ### How to reproduce Run `Process->run()` with `$env` as pasted above. ### Possible Solution _No response_ ### Additional Context _No response_
1.0
[Process] arrays in `$env` result in `Array to string conversion` - ### Symfony version(s) affected 4.4.35, 4.4.36 ### Description Since https://github.com/symfony/symfony/commit/11ccbcd24c2e2d3f4e5897f159d1c1d23fc62a67, `Process->run()` prints errors "Array to string conversion" when `$env` contains array. Before this commit, `$env` was cleaned of arrays, but this got removed. In particular [this removed `array_filter()` on `$env`](https://github.com/symfony/symfony/commit/11ccbcd24c2e2d3f4e5897f159d1c1d23fc62a67#diff-edd51b05bf438c7cdca442e25d68f41dd0a51952a69fa0c5fa912dc31d833ef4L1189-L1191) introduces the bug. Similar issues were reported e.g. in https://github.com/symfony/symfony/issues/44197, but there only `$argv`/`$argc` were discussed and handled. What's the reason for removing the `array_filter()` that cleaned `$env` from arrays? Below you find an example of `$env` (dumped [here in `Process.php`](https://github.com/symfony/symfony/blob/v4.4.35/src/Symfony/Component/Process/Process.php#L342) that causes the described bug. The dump is from [userli's checkpassword symfony command](https://github.com/systemli/userli/blob/main/src/Command/CheckPasswordCommand.php). Also see https://github.com/systemli/userli/issues/341 for details. <details> <summary> Fold/Collapse var_dump($env) </summary> <pre> /vagrant/vendor/symfony/process/Process.php:342: array(52) { [0] => array(27) { 'SHELL' => string(9) "/bin/bash" 'SSH_AUTH_SOCK' => string(31) "/tmp/ssh-BHtDglisnt/agent.29483" 'PWD' => string(8) "/vagrant" 'LOGNAME' => string(7) "vagrant" 'XDG_SESSION_TYPE' => string(3) "tty" 'MOTD_SHOWN' => string(3) "pam" 'HOME' => string(13) "/home/vagrant" 'LANG' => string(7) "C.UTF-8" 'LS_COLORS' => string(1508) "rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01"... 'SSH_CONNECTION' => string(38) "192.168.121.1 36092 192.168.121.169 22" 'NPM_CONFIG_PREFIX' => string(18) "/usr/local/lib/npm" 'XDG_SESSION_CLASS' => string(4) "user" 'TERM' => string(11) "xterm-kitty" 'USER' => string(7) "vagrant" 'SHLVL' => string(1) "0" 'XDG_SESSION_ID' => string(2) "11" 'XDG_RUNTIME_DIR' => string(14) "/run/user/1000" 'NODE_PATH' => string(36) ":/usr/local/lib/npm/lib/node_modules" 'SSH_CLIENT' => string(22) "192.168.121.1 36092 22" 'LC_ALL' => string(11) "en_US.UTF-8" 'PATH' => string(79) "/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/lib/npm/bin" 'SSH_TTY' => string(10) "/dev/pts/1" 'OLDPWD' => string(13) "/home/vagrant" '_' => string(11) "bin/console" 'LINES' => string(2) "50" 'COLUMNS' => string(2) "80" 'SHELL_VERBOSITY' => string(1) "0" } 'USER' => string(17) "admin@example.org" 'HOME' => string(28) "/var/vmail/example.org/admin" 'userdb_uid' => int(5000) 'userdb_gid' => int(5000) 'EXTRA' => string(21) "userdb_uid userdb_gid" 'APP_NAME' => string(6) "Userli" 'APP_URL' => string(25) "https://users.example.org" 'PROJECT_NAME' => string(11) "example.org" 'PROJECT_URL' => string(23) "https://www.example.org" 'SENDER_ADDRESS' => string(17) "admin@example.org" 'NOTIFICATION_ADDRESS' => string(22) "monitoring@example.org" 'SEND_MAIL' => string(1) "1" 'LOCALE' => string(2) "en" 'HAS_SINA_BOX' => string(1) "0" 'MAIL_CRYPT' => string(1) "2" 'DOVECOT_MAIL_LOCATION' => string(10) "/var/vmail" 'DOVECOT_MAIL_UID' => string(4) "5000" 'DOVECOT_MAIL_GID' => string(4) "5000" 'WEBMAIL_URL' => string(0) "" 'WKD_DIRECTORY' => string(36) "/var/www/html/.well-known/openpgpkey" 'WKD_FORMAT' => string(8) "advanced" 'APP_ENV' => string(3) "dev" 'APP_SECRET' => string(32) "165e25e3846534bb4665d7078a851c0b" 'MAILER_URL' => string(42) "smtp://localhost:25?encryption=&auth_mode=" 'MAILER_DELIVERY_ADDRESS' => string(17) "admin@example.org" 'DATABASE_URL' => string(71) "mysql://mail:password@127.0.0.1:3306/mail?serverVersion=mariadb-10.3.23" 'SYMFONY_DOTENV_VARS' => string(278) "APP_NAME,APP_URL,PROJECT_NAME,PROJECT_URL,SENDER_ADDRESS,NOTIFICATION_ADDRESS,SEND_MAIL,LOCALE,HAS_SINA_BOX,MAIL_CRYPT,DOVECOT_MAIL_LOCATION,DOVECOT_MAIL_UID,DOVECOT_MAIL_GID,WEBMAIL_URL,WKD_DIRECTORY,WKD_FORMAT,APP_ENV,APP_SECRET,MAILER_URL,MAILER_DELIVERY_ADDRESS,DATABASE_URL" 'APP_DEBUG' => string(1) "1" 'SHELL_VERBOSITY' => int(0) 'SHELL' => string(9) "/bin/bash" 'SSH_AUTH_SOCK' => string(31) "/tmp/ssh-BHtDglisnt/agent.29483" 'PWD' => string(8) "/vagrant" 'LOGNAME' => string(7) "vagrant" 'XDG_SESSION_TYPE' => string(3) "tty" 'MOTD_SHOWN' => string(3) "pam" 'LANG' => string(7) "C.UTF-8" 'LS_COLORS' => string(1508) "rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01"... 'SSH_CONNECTION' => string(38) "192.168.121.1 36092 192.168.121.169 22" 'NPM_CONFIG_PREFIX' => string(18) "/usr/local/lib/npm" 'XDG_SESSION_CLASS' => string(4) "user" 'TERM' => string(11) "xterm-kitty" 'SHLVL' => string(1) "0" 'XDG_SESSION_ID' => string(2) "11" 'XDG_RUNTIME_DIR' => string(14) "/run/user/1000" 'NODE_PATH' => string(36) ":/usr/local/lib/npm/lib/node_modules" 'SSH_CLIENT' => string(22) "192.168.121.1 36092 22" 'LC_ALL' => string(11) "en_US.UTF-8" 'PATH' => string(79) "/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/lib/npm/bin" 'SSH_TTY' => string(10) "/dev/pts/1" 'OLDPWD' => string(13) "/home/vagrant" '_' => string(11) "bin/console" } </pre> </details> ### How to reproduce Run `Process->run()` with `$env` as pasted above. ### Possible Solution _No response_ ### Additional Context _No response_
process
arrays in env result in array to string conversion symfony version s affected description since process run prints errors array to string conversion when env contains array before this commit env was cleaned of arrays but this got removed in particular introduces the bug similar issues were reported e g in but there only argv argc were discussed and handled what s the reason for removing the array filter that cleaned env from arrays below you find an example of env dumped that causes the described bug the dump is from also see for details fold collapse var dump env vagrant vendor symfony process process php array array shell string bin bash ssh auth sock string tmp ssh bhtdglisnt agent pwd string vagrant logname string vagrant xdg session type string tty motd shown string pam home string home vagrant lang string c utf ls colors string rs di ln mh pi so do bd cd or mi su sg ca tw ow st ex tar tgz arc arj taz lha lzh lzma tlz txz tzo zip z dz gz lrz lz lzo xz zst tzst bz tbz tz deb rpm ssh connection string npm config prefix string usr local lib npm xdg session class string user term string xterm kitty user string vagrant shlvl string xdg session id string xdg runtime dir string run user node path string usr local lib npm lib node modules ssh client string lc all string en us utf path string usr local bin usr bin bin usr local games usr games usr local lib npm bin ssh tty string dev pts oldpwd string home vagrant string bin console lines string columns string shell verbosity string user string admin example org home string var vmail example org admin userdb uid int userdb gid int extra string userdb uid userdb gid app name string userli app url string project name string example org project url string sender address string admin example org notification address string monitoring example org send mail string locale string en has sina box string mail crypt string dovecot mail location string var vmail dovecot mail uid string dovecot mail gid string webmail url string wkd directory string var www html well known openpgpkey wkd format string advanced app env string dev app secret string mailer url string smtp localhost encryption auth mode mailer delivery address string admin example org database url string mysql mail password mail serverversion mariadb symfony dotenv vars string app name app url project name project url sender address notification address send mail locale has sina box mail crypt dovecot mail location dovecot mail uid dovecot mail gid webmail url wkd directory wkd format app env app secret mailer url mailer delivery address database url app debug string shell verbosity int shell string bin bash ssh auth sock string tmp ssh bhtdglisnt agent pwd string vagrant logname string vagrant xdg session type string tty motd shown string pam lang string c utf ls colors string rs di ln mh pi so do bd cd or mi su sg ca tw ow st ex tar tgz arc arj taz lha lzh lzma tlz txz tzo zip z dz gz lrz lz lzo xz zst tzst bz tbz tz deb rpm ssh connection string npm config prefix string usr local lib npm xdg session class string user term string xterm kitty shlvl string xdg session id string xdg runtime dir string run user node path string usr local lib npm lib node modules ssh client string lc all string en us utf path string usr local bin usr bin bin usr local games usr games usr local lib npm bin ssh tty string dev pts oldpwd string home vagrant string bin console how to reproduce run process run with env as pasted above possible solution no response additional context no response
1
21,420
29,359,592,108
IssuesEvent
2023-05-28 00:36:56
devssa/onde-codar-em-salvador
https://api.github.com/repos/devssa/onde-codar-em-salvador
closed
[Remoto] CTO na Coodesh
SALVADOR PJ INFRAESTRUTURA BANCO DE DADOS STARTUP REQUISITOS REMOTO PROCESSOS INOVAÇÃO GITHUB INGLÊS CI EXCEL UMA R LIDERANÇA CLOUD COMPUTING MANUTENÇÃO NEGÓCIOS INTELIGÊNCIA ARTIFICIAL ARQUITETURA DE SOFTWARE CYBER SECURITY Stale
## Descrição da vaga: Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios. Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/cto-203939393?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋 <p>A <strong>KOR Solutions</strong> está em busca de <strong><ins>CTO</ins></strong> para compor seu time!</p> <p>Procura um lugar para construir uma história <strong>GIGANTE</strong>? Então se liga:</p> <p>Temos uma missão para ser <strong>CTO</strong> aqui na <strong>KOR</strong> <strong>Solutions</strong>!</p> <p>Somos uma startup LawTech em crescimento super acelerado, que atua nas resoluções de conflitos judiciais e extrajudiciais entre as empresas e seus consumidores, além da recuperação de crédito! Pensamos sempre na manutenção dos clientes e na preservação da imagem da empresa e promovemos uma rápida negociação automatizada entre as partes, via Inteligência Artificial, aquela combinação perfeita entre inovação tecnológica, serviços e integração com o negócio.&nbsp;</p> <p>Legal né?</p> <p><strong>Sua gigante missão:</strong></p> <ul> <li>Ser a referência técnica considerando ambiente de desenvolvimento utilizando linguagem de programação, arquitetura de software, banco de dados, infraestrutura, Cyber Security e cloud computing;</li> <li>Responsável por gerir o time mantendo excelência no funcionamento dos sistemas e soluções;</li> <li>Gerenciamento de Squads, orçamentos e entregas;</li> <li>Sugerir e implementar melhorias nos processos e tecnologias atuais;</li> <li>Gerenciamento de roteiros de produtos para atingir metas e estratégias do negócio;</li> <li>Acompanhar os principais indicadores de desempenho (KPIs);</li> <li>Desenvolvimento de melhoria contínua de tecnologias;</li> <li>Gerencie riscos de negócios e estabeleça políticas e processos robustos.</li> </ul> <p><strong>Por aqui temos:</strong></p> <ul> <li>Oportunidade de construir a sua história em uma empresa GIGANTE;</li> <li>Ambiente sensacional e sem burocracia;</li> <li>Liderança HUMANIZADA;</li> <li>Crescimento meritocrático e humanizado;</li> <li>Regime de Contratação PJ;</li> <li>Formato de trabalho Híbrido e remoto.</li> </ul> <p>Todas as aplicações de vagas na KOR são consideradas sem distinção de gênero, orientação sexual, etnia, cultura, origem, religião, deficiência, idade etc.</p> <p></p> ## KOR Solutions: <p>A KOR auxilia empresas de todos os ramos a negociar incontáveis processos e disputas de seus clientes, fazendo isso de forma conveniente, segura e com validade jurídica. Uma vez que a proposta de acordo tenha sido aceita pelo parte contrária que processou a empresa, o sistema gera automaticamente a minuta do acordo e o contrato é executado. A assinatura de ambas as partes podem ser físicas ou digitais, facilitando o acordo conforme conveniência. Tudo isso pode ser concluído no mesmo dia! Além da negociação automatizada, oferecemos diversas outras soluções de automação, inteligência e análise nas demais etapas e processos do setor jurídico das empresas.</p> </p> ## Habilidades: - Gestão de T.I - Gestão de times de tecnologia - Gestão de Times Remotamente Distribuídos ## Local: 100% Remoto ## Requisitos: - Ter atuado em empresas de tecnologia; - Ter atuado como gestor de squads; - Bacharelado em Engenharia, Ciência da Computação (ou experiência equivalente); - Inglês Fluente. ## Benefícios: - Gympass; - Stock Options. ## Como se candidatar: Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [CTO na KOR Solutions](https://coodesh.com/vagas/cto-203939393?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação. ## Labels #### Alocação Remoto #### Regime PJ #### Categoria Gestão em TI
1.0
[Remoto] CTO na Coodesh - ## Descrição da vaga: Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios. Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/cto-203939393?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋 <p>A <strong>KOR Solutions</strong> está em busca de <strong><ins>CTO</ins></strong> para compor seu time!</p> <p>Procura um lugar para construir uma história <strong>GIGANTE</strong>? Então se liga:</p> <p>Temos uma missão para ser <strong>CTO</strong> aqui na <strong>KOR</strong> <strong>Solutions</strong>!</p> <p>Somos uma startup LawTech em crescimento super acelerado, que atua nas resoluções de conflitos judiciais e extrajudiciais entre as empresas e seus consumidores, além da recuperação de crédito! Pensamos sempre na manutenção dos clientes e na preservação da imagem da empresa e promovemos uma rápida negociação automatizada entre as partes, via Inteligência Artificial, aquela combinação perfeita entre inovação tecnológica, serviços e integração com o negócio.&nbsp;</p> <p>Legal né?</p> <p><strong>Sua gigante missão:</strong></p> <ul> <li>Ser a referência técnica considerando ambiente de desenvolvimento utilizando linguagem de programação, arquitetura de software, banco de dados, infraestrutura, Cyber Security e cloud computing;</li> <li>Responsável por gerir o time mantendo excelência no funcionamento dos sistemas e soluções;</li> <li>Gerenciamento de Squads, orçamentos e entregas;</li> <li>Sugerir e implementar melhorias nos processos e tecnologias atuais;</li> <li>Gerenciamento de roteiros de produtos para atingir metas e estratégias do negócio;</li> <li>Acompanhar os principais indicadores de desempenho (KPIs);</li> <li>Desenvolvimento de melhoria contínua de tecnologias;</li> <li>Gerencie riscos de negócios e estabeleça políticas e processos robustos.</li> </ul> <p><strong>Por aqui temos:</strong></p> <ul> <li>Oportunidade de construir a sua história em uma empresa GIGANTE;</li> <li>Ambiente sensacional e sem burocracia;</li> <li>Liderança HUMANIZADA;</li> <li>Crescimento meritocrático e humanizado;</li> <li>Regime de Contratação PJ;</li> <li>Formato de trabalho Híbrido e remoto.</li> </ul> <p>Todas as aplicações de vagas na KOR são consideradas sem distinção de gênero, orientação sexual, etnia, cultura, origem, religião, deficiência, idade etc.</p> <p></p> ## KOR Solutions: <p>A KOR auxilia empresas de todos os ramos a negociar incontáveis processos e disputas de seus clientes, fazendo isso de forma conveniente, segura e com validade jurídica. Uma vez que a proposta de acordo tenha sido aceita pelo parte contrária que processou a empresa, o sistema gera automaticamente a minuta do acordo e o contrato é executado. A assinatura de ambas as partes podem ser físicas ou digitais, facilitando o acordo conforme conveniência. Tudo isso pode ser concluído no mesmo dia! Além da negociação automatizada, oferecemos diversas outras soluções de automação, inteligência e análise nas demais etapas e processos do setor jurídico das empresas.</p> </p> ## Habilidades: - Gestão de T.I - Gestão de times de tecnologia - Gestão de Times Remotamente Distribuídos ## Local: 100% Remoto ## Requisitos: - Ter atuado em empresas de tecnologia; - Ter atuado como gestor de squads; - Bacharelado em Engenharia, Ciência da Computação (ou experiência equivalente); - Inglês Fluente. ## Benefícios: - Gympass; - Stock Options. ## Como se candidatar: Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [CTO na KOR Solutions](https://coodesh.com/vagas/cto-203939393?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação. ## Labels #### Alocação Remoto #### Regime PJ #### Categoria Gestão em TI
process
cto na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a kor solutions está em busca de cto para compor seu time procura um lugar para construir uma história gigante então se liga temos uma missão para ser cto aqui na kor solutions somos uma startup lawtech em crescimento super acelerado que atua nas resoluções de conflitos judiciais e extrajudiciais entre as empresas e seus consumidores além da recuperação de crédito pensamos sempre na manutenção dos clientes e na preservação da imagem da empresa e promovemos uma rápida negociação automatizada entre as partes via inteligência artificial aquela combinação perfeita entre inovação tecnológica serviços e integração com o negócio nbsp legal né sua gigante missão ser a referência técnica considerando ambiente de desenvolvimento utilizando linguagem de programação arquitetura de software banco de dados infraestrutura cyber security e cloud computing responsável por gerir o time mantendo excelência no funcionamento dos sistemas e soluções gerenciamento de squads orçamentos e entregas sugerir e implementar melhorias nos processos e tecnologias atuais gerenciamento de roteiros de produtos para atingir metas e estratégias do negócio acompanhar os principais indicadores de desempenho kpis desenvolvimento de melhoria contínua de tecnologias gerencie riscos de negócios e estabeleça políticas e processos robustos por aqui temos oportunidade de construir a sua história em uma empresa gigante ambiente sensacional e sem burocracia liderança humanizada crescimento meritocrático e humanizado regime de contratação pj formato de trabalho híbrido e remoto todas as aplicações de vagas na kor são consideradas sem distinção de gênero orientação sexual etnia cultura origem religião deficiência idade etc kor solutions a kor auxilia empresas de todos os ramos a negociar incontáveis processos e disputas de seus clientes fazendo isso de forma conveniente segura e com validade jurídica uma vez que a proposta de acordo tenha sido aceita pelo parte contrária que processou a empresa o sistema gera automaticamente a minuta do acordo e o contrato é executado a assinatura de ambas as partes podem ser físicas ou digitais facilitando o acordo conforme conveniência tudo isso pode ser concluído no mesmo dia além da negociação automatizada oferecemos diversas outras soluções de automação inteligência e análise nas demais etapas e processos do setor jurídico das empresas habilidades gestão de t i gestão de times de tecnologia gestão de times remotamente distribuídos local remoto requisitos ter atuado em empresas de tecnologia ter atuado como gestor de squads bacharelado em engenharia ciência da computação ou experiência equivalente inglês fluente benefícios gympass stock options como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação remoto regime pj categoria gestão em ti
1
280,753
24,330,077,579
IssuesEvent
2022-09-30 18:31:08
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
Failing Cypress test Mark one alert as acknowledged when more than one open alerts are selected
failed-test skipped-test Team:Detections and Resp Team: SecuritySolution Team:Detection Alerts
*Kibana version:* 8.2 Failing test: https://buildkite.com/elastic/kibana-pull-request/builds/31565#b9dd9c8b-7861-4542-9e5b-f891118c4fc1 [Error message](https://s3.amazonaws.com/buildkiteartifacts.com/e0f3970e-3a75-4621-919f-e6c773e2bb12/0fda5127-f57f-42fb-8e5a-146b3d535916/8825e03b-08f5-4fc0-b6b6-3a00e842b89a/b9dd9c8b-7861-4542-9e5b-f891118c4fc1/target/test_failures/b9dd9c8b-7861-4542-9e5b-f891118c4fc1_2e71f68b0ff97f64153fa0ab7a0eab08.html?response-content-type=text%2Fhtml&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQPCP3C7L3LA6TJUP%2F20220321%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220321T172101Z&X-Amz-Expires=600&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEJH%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJIMEYCIQCIE1ZPmcGses4wL7bAStfSz3OaT0hCK2eDngVrNMGjLAIhAIYVzIt6HmNra6lXk2C6AMvYQsshPXnpJDlhpSPjwUPMKvoDCBkQABoMMDMyMzc5NzA1MzAzIgzorGcF96j9aDQr76Uq1wPecwR3jdoCYEyaQRE52kdl%2B6uBOVhnczeBfcRE4504oPp8VhuYJc0evhiB3XpD5tiYwuQTCX%2BGuSp5SEtiPvmzrQgLOOlKImLM4yh0p%2BfyWla71t4yw5RRjrDR8w1PFBYyuT8ji3fK31eNQueg4I5ucfmNf3EGbdESD4JWzUrGl8tE2FRSh%2BY6hMt4LywsCvP41be0gjQ5oAaWufwGUZefpeoxrT2bUYP7idncxMgdQo2JdNEt6WF%2BJ%2BOd9aMc8QNWOq93Gsn1qmeNudtUswgCBjGPzwPsPd0D27C2321b6hIxrYk3NSUeKDyQq8lOnAI8qEezDIEIIw8LLKmmeJE29p4xdUjv2hagSHOiI2mZtCADrOBFzSGWR66RrixSCnCqaZpIDU15HOm9b520KTEFSQj%2B3OYmapjYcUyllSZ1kB4oSxG9geOL8CpHfjWa%2Fclo3Td6b2MYyEeSME4cPVCPrIZ4Mw5o1UlYmnCkfff7VJ3uRysBDkn8zzEtGxPtk0pQjcLzq%2BvUc4aaElb%2FnqIwTztaWQVc6g1TRFOBokcxDLbMZU9jUHd9p2oSfVqL2IZzkurcX%2BFI0b0DEGFbY%2FA4aGjPWbmb8CU%2BuNlOfV2BeZK%2F2A7CD3IwmMTikQY6pAHatDA5L%2FJi9aY%2BlU9kAvzozYlFtjcSRC%2Bk4WO%2FetAPgOTJZy%2FPtV8ABBP2tJkSCin9fPouy0OQrX162xupN5j%2FCI6s%2Fxjg4hVbn6HPazk8CIxxvh%2BRdbfT9G6Z2lRB8sg6oODVV93M8D18lMNsYj9vZlb9pIf2mCnueDZa0VO5QCuajt1OqhhyOWZx20nJQ3LdVoplvMxhQnngSWHInQpa1nUO8w%3D%3D&X-Amz-SignedHeaders=host&X-Amz-Signature=81fbf9e233e86b2616e17cf937d44a28fd66599c11a643238657cdda00637b22) Skipped here: [`ca8b683` (#125960)](https://github.com/elastic/kibana/pull/125960/commits/ca8b68358613641909d130af576bb5cb298520d4)
2.0
Failing Cypress test Mark one alert as acknowledged when more than one open alerts are selected - *Kibana version:* 8.2 Failing test: https://buildkite.com/elastic/kibana-pull-request/builds/31565#b9dd9c8b-7861-4542-9e5b-f891118c4fc1 [Error message](https://s3.amazonaws.com/buildkiteartifacts.com/e0f3970e-3a75-4621-919f-e6c773e2bb12/0fda5127-f57f-42fb-8e5a-146b3d535916/8825e03b-08f5-4fc0-b6b6-3a00e842b89a/b9dd9c8b-7861-4542-9e5b-f891118c4fc1/target/test_failures/b9dd9c8b-7861-4542-9e5b-f891118c4fc1_2e71f68b0ff97f64153fa0ab7a0eab08.html?response-content-type=text%2Fhtml&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQPCP3C7L3LA6TJUP%2F20220321%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220321T172101Z&X-Amz-Expires=600&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEJH%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJIMEYCIQCIE1ZPmcGses4wL7bAStfSz3OaT0hCK2eDngVrNMGjLAIhAIYVzIt6HmNra6lXk2C6AMvYQsshPXnpJDlhpSPjwUPMKvoDCBkQABoMMDMyMzc5NzA1MzAzIgzorGcF96j9aDQr76Uq1wPecwR3jdoCYEyaQRE52kdl%2B6uBOVhnczeBfcRE4504oPp8VhuYJc0evhiB3XpD5tiYwuQTCX%2BGuSp5SEtiPvmzrQgLOOlKImLM4yh0p%2BfyWla71t4yw5RRjrDR8w1PFBYyuT8ji3fK31eNQueg4I5ucfmNf3EGbdESD4JWzUrGl8tE2FRSh%2BY6hMt4LywsCvP41be0gjQ5oAaWufwGUZefpeoxrT2bUYP7idncxMgdQo2JdNEt6WF%2BJ%2BOd9aMc8QNWOq93Gsn1qmeNudtUswgCBjGPzwPsPd0D27C2321b6hIxrYk3NSUeKDyQq8lOnAI8qEezDIEIIw8LLKmmeJE29p4xdUjv2hagSHOiI2mZtCADrOBFzSGWR66RrixSCnCqaZpIDU15HOm9b520KTEFSQj%2B3OYmapjYcUyllSZ1kB4oSxG9geOL8CpHfjWa%2Fclo3Td6b2MYyEeSME4cPVCPrIZ4Mw5o1UlYmnCkfff7VJ3uRysBDkn8zzEtGxPtk0pQjcLzq%2BvUc4aaElb%2FnqIwTztaWQVc6g1TRFOBokcxDLbMZU9jUHd9p2oSfVqL2IZzkurcX%2BFI0b0DEGFbY%2FA4aGjPWbmb8CU%2BuNlOfV2BeZK%2F2A7CD3IwmMTikQY6pAHatDA5L%2FJi9aY%2BlU9kAvzozYlFtjcSRC%2Bk4WO%2FetAPgOTJZy%2FPtV8ABBP2tJkSCin9fPouy0OQrX162xupN5j%2FCI6s%2Fxjg4hVbn6HPazk8CIxxvh%2BRdbfT9G6Z2lRB8sg6oODVV93M8D18lMNsYj9vZlb9pIf2mCnueDZa0VO5QCuajt1OqhhyOWZx20nJQ3LdVoplvMxhQnngSWHInQpa1nUO8w%3D%3D&X-Amz-SignedHeaders=host&X-Amz-Signature=81fbf9e233e86b2616e17cf937d44a28fd66599c11a643238657cdda00637b22) Skipped here: [`ca8b683` (#125960)](https://github.com/elastic/kibana/pull/125960/commits/ca8b68358613641909d130af576bb5cb298520d4)
non_process
failing cypress test mark one alert as acknowledged when more than one open alerts are selected kibana version failing test skipped here
0
4,460
7,329,950,610
IssuesEvent
2018-03-05 08:04:00
UKHomeOffice/dq-aws-transition
https://api.github.com/repos/UKHomeOffice/dq-aws-transition
closed
Update MVT database creds in ACL Python script
DQ Data Ingest DQ Tranche 1 Production SSM processing
Update MVT Database Creds in ACL Python script - [x] dp1_acl.sh
1.0
Update MVT database creds in ACL Python script - Update MVT Database Creds in ACL Python script - [x] dp1_acl.sh
process
update mvt database creds in acl python script update mvt database creds in acl python script acl sh
1
35,767
7,992,814,348
IssuesEvent
2018-07-20 03:58:47
joomla/joomla-cms
https://api.github.com/repos/joomla/joomla-cms
closed
[4.0] Edit profile on frontend, cancel button reloads page,
J4 Issue No Code Attached Yet
### Steps to reproduce the issue install 4.0-dev at 79fd94942e1758cff32da7f8f368b5f78cc75c40 install sample data login on frontend as super admin click change password in top right module Click Cancel button ### Expected result "something" is cancelled, and redirected "somewhere' I dont have the answer to where "somewhere" should be, but it should not be the actual result below ### Actual result Page reloads. ![screen recording 2018-05-19 at 09 33 pm](https://user-images.githubusercontent.com/400092/40272834-4d0c3e26-5bac-11e8-9c20-643404fd0f65.gif) ### System information (as much as possible) 79fd94942e1758cff32da7f8f368b5f78cc75c40 Google Chrome
1.0
[4.0] Edit profile on frontend, cancel button reloads page, - ### Steps to reproduce the issue install 4.0-dev at 79fd94942e1758cff32da7f8f368b5f78cc75c40 install sample data login on frontend as super admin click change password in top right module Click Cancel button ### Expected result "something" is cancelled, and redirected "somewhere' I dont have the answer to where "somewhere" should be, but it should not be the actual result below ### Actual result Page reloads. ![screen recording 2018-05-19 at 09 33 pm](https://user-images.githubusercontent.com/400092/40272834-4d0c3e26-5bac-11e8-9c20-643404fd0f65.gif) ### System information (as much as possible) 79fd94942e1758cff32da7f8f368b5f78cc75c40 Google Chrome
non_process
edit profile on frontend cancel button reloads page steps to reproduce the issue install dev at install sample data login on frontend as super admin click change password in top right module click cancel button expected result something is cancelled and redirected somewhere i dont have the answer to where somewhere should be but it should not be the actual result below actual result page reloads system information as much as possible google chrome
0
20,646
27,323,575,885
IssuesEvent
2023-02-24 22:33:57
cse442-at-ub/project_s23-iweatherify
https://api.github.com/repos/cse442-at-ub/project_s23-iweatherify
closed
Think of possible ideas for what the project should be and document them
Processing Task Sprint 1
**Task Tests** *Test 1* 1) Open document https://docs.google.com/document/d/1fQDM2_rvD49LgCHpRX-fh-yX0UqBG1fgJzAu0D9KsRQ/edit?usp=sharing 2) Verify that document has at least three ideas and that each idea gives at least a brief description of what it is about and what features it envisions being implemented
1.0
Think of possible ideas for what the project should be and document them - **Task Tests** *Test 1* 1) Open document https://docs.google.com/document/d/1fQDM2_rvD49LgCHpRX-fh-yX0UqBG1fgJzAu0D9KsRQ/edit?usp=sharing 2) Verify that document has at least three ideas and that each idea gives at least a brief description of what it is about and what features it envisions being implemented
process
think of possible ideas for what the project should be and document them task tests test open document verify that document has at least three ideas and that each idea gives at least a brief description of what it is about and what features it envisions being implemented
1
364,998
25,515,776,602
IssuesEvent
2022-11-28 16:13:02
bergmanlab/ngs_te_mapper
https://api.github.com/repos/bergmanlab/ngs_te_mapper
closed
Update readme to explain inputs and output of system
high priority documentation/usability
- explain which new directories & subdirectories are created - explain which files are created in each (sub)directory (what they are, what their format is)
1.0
Update readme to explain inputs and output of system - - explain which new directories & subdirectories are created - explain which files are created in each (sub)directory (what they are, what their format is)
non_process
update readme to explain inputs and output of system explain which new directories subdirectories are created explain which files are created in each sub directory what they are what their format is
0
344,120
24,798,757,750
IssuesEvent
2022-10-24 19:41:09
valkim55/VK-just-tech-news
https://api.github.com/repos/valkim55/VK-just-tech-news
closed
Users can create, read, update, and delete a profile in the database
documentation
- as a user, I can create my own profile that stores information about me - as a user, I can retrieve my profile data or another user's profile data - as a user, I can update my profile data - as a user, I can delete my profile data
1.0
Users can create, read, update, and delete a profile in the database - - as a user, I can create my own profile that stores information about me - as a user, I can retrieve my profile data or another user's profile data - as a user, I can update my profile data - as a user, I can delete my profile data
non_process
users can create read update and delete a profile in the database as a user i can create my own profile that stores information about me as a user i can retrieve my profile data or another user s profile data as a user i can update my profile data as a user i can delete my profile data
0
257,789
27,563,817,934
IssuesEvent
2023-03-08 01:08:41
billmcchesney1/superagent
https://api.github.com/repos/billmcchesney1/superagent
opened
CVE-2019-1010266 (Medium) detected in multiple libraries
security vulnerability
## CVE-2019-1010266 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-2.4.2.tgz</b>, <b>lodash-3.10.1.tgz</b>, <b>lodash-3.2.0.tgz</b>, <b>lodash-2.1.0.tgz</b></p></summary> <p> <details><summary><b>lodash-2.4.2.tgz</b></p></summary> <p>A utility library delivering consistency, customization, performance, & extras.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/archiver/node_modules/lodash/package.json,/node_modules/findup-sync/node_modules/lodash/package.json,/node_modules/wd/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - zuul-3.12.0.tgz (Root Library) - wd-0.3.11.tgz - :x: **lodash-2.4.2.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-3.10.1.tgz</b></p></summary> <p>The modern build of lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/zuul/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - zuul-3.12.0.tgz (Root Library) - :x: **lodash-3.10.1.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-3.2.0.tgz</b></p></summary> <p>The modern build of lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.2.0.tgz">https://registry.npmjs.org/lodash/-/lodash-3.2.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/istanbul-middleware/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - zuul-3.12.0.tgz (Root Library) - istanbul-middleware-0.2.2.tgz - archiver-0.14.4.tgz - :x: **lodash-3.2.0.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-2.1.0.tgz</b></p></summary> <p>A utility library delivering consistency, customization, performance, & extras.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.1.0.tgz">https://registry.npmjs.org/lodash/-/lodash-2.1.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/file-utils/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - zuul-3.12.0.tgz (Root Library) - firefox-profile-0.2.7.tgz - archiver-0.7.1.tgz - file-utils-0.1.5.tgz - :x: **lodash-2.1.0.tgz** (Vulnerable Library) </details> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> lodash prior to 4.17.11 is affected by: CWE-400: Uncontrolled Resource Consumption. The impact is: Denial of service. The component is: Date handler. The attack vector is: Attacker provides very long strings, which the library attempts to match using a regular expression. The fixed version is: 4.17.11. <p>Publish Date: 2019-07-17 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-1010266>CVE-2019-1010266</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266</a></p> <p>Release Date: 2019-07-17</p> <p>Fix Resolution: 4.17.11</p> </p> </details> <p></p>
True
CVE-2019-1010266 (Medium) detected in multiple libraries - ## CVE-2019-1010266 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-2.4.2.tgz</b>, <b>lodash-3.10.1.tgz</b>, <b>lodash-3.2.0.tgz</b>, <b>lodash-2.1.0.tgz</b></p></summary> <p> <details><summary><b>lodash-2.4.2.tgz</b></p></summary> <p>A utility library delivering consistency, customization, performance, & extras.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/archiver/node_modules/lodash/package.json,/node_modules/findup-sync/node_modules/lodash/package.json,/node_modules/wd/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - zuul-3.12.0.tgz (Root Library) - wd-0.3.11.tgz - :x: **lodash-2.4.2.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-3.10.1.tgz</b></p></summary> <p>The modern build of lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/zuul/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - zuul-3.12.0.tgz (Root Library) - :x: **lodash-3.10.1.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-3.2.0.tgz</b></p></summary> <p>The modern build of lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.2.0.tgz">https://registry.npmjs.org/lodash/-/lodash-3.2.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/istanbul-middleware/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - zuul-3.12.0.tgz (Root Library) - istanbul-middleware-0.2.2.tgz - archiver-0.14.4.tgz - :x: **lodash-3.2.0.tgz** (Vulnerable Library) </details> <details><summary><b>lodash-2.1.0.tgz</b></p></summary> <p>A utility library delivering consistency, customization, performance, & extras.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.1.0.tgz">https://registry.npmjs.org/lodash/-/lodash-2.1.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/file-utils/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - zuul-3.12.0.tgz (Root Library) - firefox-profile-0.2.7.tgz - archiver-0.7.1.tgz - file-utils-0.1.5.tgz - :x: **lodash-2.1.0.tgz** (Vulnerable Library) </details> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> lodash prior to 4.17.11 is affected by: CWE-400: Uncontrolled Resource Consumption. The impact is: Denial of service. The component is: Date handler. The attack vector is: Attacker provides very long strings, which the library attempts to match using a regular expression. The fixed version is: 4.17.11. <p>Publish Date: 2019-07-17 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-1010266>CVE-2019-1010266</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266</a></p> <p>Release Date: 2019-07-17</p> <p>Fix Resolution: 4.17.11</p> </p> </details> <p></p>
non_process
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries lodash tgz lodash tgz lodash tgz lodash tgz lodash tgz a utility library delivering consistency customization performance extras library home page a href path to dependency file package json path to vulnerable library node modules archiver node modules lodash package json node modules findup sync node modules lodash package json node modules wd node modules lodash package json dependency hierarchy zuul tgz root library wd tgz x lodash tgz vulnerable library lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file package json path to vulnerable library node modules zuul node modules lodash package json dependency hierarchy zuul tgz root library x lodash tgz vulnerable library lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file package json path to vulnerable library node modules istanbul middleware node modules lodash package json dependency hierarchy zuul tgz root library istanbul middleware tgz archiver tgz x lodash tgz vulnerable library lodash tgz a utility library delivering consistency customization performance extras library home page a href path to dependency file package json path to vulnerable library node modules file utils node modules lodash package json dependency hierarchy zuul tgz root library firefox profile tgz archiver tgz file utils tgz x lodash tgz vulnerable library found in base branch master vulnerability details lodash prior to is affected by cwe uncontrolled resource consumption the impact is denial of service the component is date handler the attack vector is attacker provides very long strings which the library attempts to match using a regular expression the fixed version is publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
0
20,797
3,419,228,968
IssuesEvent
2015-12-08 08:37:51
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
closed
List.from is fixed length, but loop through list still checks for ioore
area-dart2js dart2js-optimization Priority-Medium triaged Type-Defect
Consider this code: List&lt;String&gt; fruits = new List.from(['apples', 'oranges']); void main() { &nbsp;&nbsp;for (int i = 0; i &lt; fruits.length; i++) { &nbsp;&nbsp;&nbsp;&nbsp;print(fruits[i]); &nbsp;&nbsp;} } this compiles to the following JS code: $.main = function() { &nbsp;&nbsp;var i, t1; &nbsp;&nbsp;for (i = 0; i &lt; $.get$fruits().length; ++i) { &nbsp;&nbsp;&nbsp;&nbsp;t1 = $.get$fruits(); &nbsp;&nbsp;&nbsp;&nbsp;if (i &gt;= t1.length) &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;throw $.ioore(i); &nbsp;&nbsp;&nbsp;&nbsp;$.Primitives_printString($.toString$0(t1[i])); &nbsp;&nbsp;} }; I would expect that the constructor for List.from() returns a fixed length list. Thus, dart2js could inline the list length in a for loop and avoid the ioore check.
1.0
List.from is fixed length, but loop through list still checks for ioore - Consider this code: List&lt;String&gt; fruits = new List.from(['apples', 'oranges']); void main() { &nbsp;&nbsp;for (int i = 0; i &lt; fruits.length; i++) { &nbsp;&nbsp;&nbsp;&nbsp;print(fruits[i]); &nbsp;&nbsp;} } this compiles to the following JS code: $.main = function() { &nbsp;&nbsp;var i, t1; &nbsp;&nbsp;for (i = 0; i &lt; $.get$fruits().length; ++i) { &nbsp;&nbsp;&nbsp;&nbsp;t1 = $.get$fruits(); &nbsp;&nbsp;&nbsp;&nbsp;if (i &gt;= t1.length) &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;throw $.ioore(i); &nbsp;&nbsp;&nbsp;&nbsp;$.Primitives_printString($.toString$0(t1[i])); &nbsp;&nbsp;} }; I would expect that the constructor for List.from() returns a fixed length list. Thus, dart2js could inline the list length in a for loop and avoid the ioore check.
non_process
list from is fixed length but loop through list still checks for ioore consider this code list lt string gt fruits new list from void main nbsp nbsp for int i i lt fruits length i nbsp nbsp nbsp nbsp print fruits nbsp nbsp this compiles to the following js code main function nbsp nbsp var i nbsp nbsp for i i lt get fruits length i nbsp nbsp nbsp nbsp get fruits nbsp nbsp nbsp nbsp if i gt length nbsp nbsp nbsp nbsp nbsp nbsp throw ioore i nbsp nbsp nbsp nbsp primitives printstring tostring nbsp nbsp i would expect that the constructor for list from returns a fixed length list thus could inline the list length in a for loop and avoid the ioore check
0
20,212
26,803,987,434
IssuesEvent
2023-02-01 16:55:53
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
Mongo does not handle nested limit
Type:Bug Database/Mongo Querying/Processor .Backend
As part of https://github.com/metabase/metabase/issues/23422 we discovered that `metabase.query-processor-test.expressions-test/expression-using-aggregation-test` was failing (partly) because the `:limit 3` inside source query was being ignored.
1.0
Mongo does not handle nested limit - As part of https://github.com/metabase/metabase/issues/23422 we discovered that `metabase.query-processor-test.expressions-test/expression-using-aggregation-test` was failing (partly) because the `:limit 3` inside source query was being ignored.
process
mongo does not handle nested limit as part of we discovered that metabase query processor test expressions test expression using aggregation test was failing partly because the limit inside source query was being ignored
1
7,843
11,014,444,235
IssuesEvent
2019-12-04 22:46:57
googleapis/java-recommender
https://api.github.com/repos/googleapis/java-recommender
opened
Release a BOM
type: process
Create a google-cloud-recommender-bom artifact that includes the versions of the artifacts released from this library.
1.0
Release a BOM - Create a google-cloud-recommender-bom artifact that includes the versions of the artifacts released from this library.
process
release a bom create a google cloud recommender bom artifact that includes the versions of the artifacts released from this library
1
13,211
15,683,157,237
IssuesEvent
2021-03-25 08:24:03
ropensci/software-review-meta
https://api.github.com/repos/ropensci/software-review-meta
closed
New editor task after acceptance: check reviewer volunteering form link to authors
process
if they aren't authors yet. Also mention to the author who opened an issue that they should forward the link to other major contributors of the package. Of course, this will be added to the editor guide only after I updated the survey. 😁
1.0
New editor task after acceptance: check reviewer volunteering form link to authors - if they aren't authors yet. Also mention to the author who opened an issue that they should forward the link to other major contributors of the package. Of course, this will be added to the editor guide only after I updated the survey. 😁
process
new editor task after acceptance check reviewer volunteering form link to authors if they aren t authors yet also mention to the author who opened an issue that they should forward the link to other major contributors of the package of course this will be added to the editor guide only after i updated the survey 😁
1
4,621
7,467,019,244
IssuesEvent
2018-04-02 13:42:46
agroportal/agroportal_web_ui
https://api.github.com/repos/agroportal/agroportal_web_ui
opened
Biorefinery & Transmat failed to parse
ontology processing problem
Error from parsing log file (Biorefinery): Illegal rdf:nodeID value '_:genid259' there is an equivalent error for Transmat. This error seems to have been identified in the NCBO BioPortal: see - [https://sourceforge.net/p/owlapi/mailman/message/3594402/](url) - [https://github.com/ncbo/bioportal-project/issues/32#event-1226205997](url) - [https://github.com/ncbo/bioportal-project/issues/9](url) @jvendetti Did you solve this problem?
1.0
Biorefinery & Transmat failed to parse - Error from parsing log file (Biorefinery): Illegal rdf:nodeID value '_:genid259' there is an equivalent error for Transmat. This error seems to have been identified in the NCBO BioPortal: see - [https://sourceforge.net/p/owlapi/mailman/message/3594402/](url) - [https://github.com/ncbo/bioportal-project/issues/32#event-1226205997](url) - [https://github.com/ncbo/bioportal-project/issues/9](url) @jvendetti Did you solve this problem?
process
biorefinery transmat failed to parse error from parsing log file biorefinery illegal rdf nodeid value there is an equivalent error for transmat this error seems to have been identified in the ncbo bioportal see url url url jvendetti did you solve this problem
1
215,443
16,671,965,140
IssuesEvent
2021-06-07 12:04:19
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
closed
com.hazelcast.jet.impl.JobSummaryTest.when_manyJobs_then_sortedBySubmissionTime
Team: Core Type: Test-Failure
_master_ (commit e3352af34221c58de14ed09dcde0edc9206098c8) Failed on Oracle JDK 11: http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-master-OracleJDK11/271/testReport/com.hazelcast.jet.impl/JobSummaryTest/when_manyJobs_then_sortedBySubmissionTime/ Stacktrace: ``` org.junit.ComparisonFailure: expected:<job [7]> but was:<job [8]> at org.junit.Assert.assertEquals(Assert.java:117) at org.junit.Assert.assertEquals(Assert.java:146) at com.hazelcast.jet.impl.JobSummaryTest.lambda$when_manyJobs_then_sortedBySubmissionTime$4(JobSummaryTest.java:141) at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1249) at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1266) at com.hazelcast.jet.impl.JobSummaryTest.when_manyJobs_then_sortedBySubmissionTime(JobSummaryTest.java:133) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:115) at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:107) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.lang.Thread.run(Thread.java:834) ``` Standard output: ``` 22:23:02,793 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobRepository] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms 22:23:02,793 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobRepository] hz.heuristic_montalcini.cached.thread-5 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 7', execution 064b-d046-2f10-0001: not running or already running on all members 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 8', execution 064b-d046-2f12-0001: not running or already running on all members 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 5', execution 064b-d046-2f0c-0001: not running or already running on all members 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 6', execution 064b-d046-2f0e-0001: not running or already running on all members 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 3', execution 064b-d046-2f08-0001: not running or already running on all members 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 4', execution 064b-d046-2f0a-0001: not running or already running on all members 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 0', execution 064b-d046-2f02-0001: not running or already running on all members 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 1', execution 064b-d046-2f04-0001: not running or already running on all members 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 9', execution 064b-d046-2f13-0001: not running or already running on all members 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 2', execution 064b-d046-2f06-0001: not running or already running on all members 22:23:07,795 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobRepository] hz.heuristic_montalcini.cached.thread-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms 22:23:07,795 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobRepository] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms 22:23:12,796 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobRepository] hz.heuristic_montalcini.cached.thread-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms 22:23:12,796 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobRepository] hz.heuristic_montalcini.cached.thread-12 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms 22:23:17,798 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobRepository] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms 22:23:17,798 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobRepository] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms 22:23:18,073 INFO |when_manyJobs_then_sortedBySubmissionTime| - [JetTestSupport] Thread-2673 - Terminating instanceFactory in JetTestSupport.@After 22:23:18,073 INFO |when_manyJobs_then_sortedBySubmissionTime| - [LifecycleService] Thread-2692 - hz.client_69 [dev] [5.0-SNAPSHOT] [5.0-SNAPSHOT] HazelcastClient 5.0-SNAPSHOT (20210605 - e3352af) is SHUTTING_DOWN 22:23:18,074 WARN |when_manyJobs_then_sortedBySubmissionTime| - [TestClientRegistry$MockedServerConnection] pool-210-thread-1 - Server connection closed: null 22:23:18,074 INFO |when_manyJobs_then_sortedBySubmissionTime| - [MockServer] pool-210-thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:40002, connection: MockedNodeConnection{ remoteAddress = [127.0.0.1]:40002, localAddress = [127.0.0.1]:5701, connectionId = 2} 22:23:18,074 INFO |when_manyJobs_then_sortedBySubmissionTime| - [ClientConnectionManager] Thread-2692 - hz.client_69 [dev] [5.0-SNAPSHOT] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5701:24b45c95-fd78-4e7a-bdf0-9b30bf99663a, connection: MockedClientConnection{localAddress=[127.0.0.1]:40002, super=ClientConnection{alive=false, connectionId=2, channel=null, remoteAddress=[127.0.0.1]:5701, lastReadTime=2021-06-05 22:23:17.753, lastWriteTime=2021-06-05 22:23:17.752, closedTime=2021-06-05 22:23:18.074, connected server version=5.0-SNAPSHOT}} 22:23:18,074 WARN |when_manyJobs_then_sortedBySubmissionTime| - [TestClientRegistry$MockedServerConnection] pool-200-thread-1 - Server connection closed: null 22:23:18,074 INFO |when_manyJobs_then_sortedBySubmissionTime| - [ClientConnectionManager] Thread-2692 - hz.client_69 [dev] [5.0-SNAPSHOT] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5702:ef994296-1f47-465f-833d-3d080037e1e0, connection: MockedClientConnection{localAddress=[127.0.0.1]:40001, super=ClientConnection{alive=false, connectionId=1, channel=null, remoteAddress=[127.0.0.1]:5702, lastReadTime=2021-06-05 22:23:17.725, lastWriteTime=2021-06-05 22:23:17.725, closedTime=2021-06-05 22:23:18.074, connected server version=5.0-SNAPSHOT}} 22:23:18,074 INFO |when_manyJobs_then_sortedBySubmissionTime| - [ClientEndpointManager] hz.heuristic_montalcini.event-706 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Destroying ClientEndpoint{connection=MockedNodeConnection{ remoteAddress = [127.0.0.1]:40002, localAddress = [127.0.0.1]:5701, connectionId = 2}, clientUuid='6596a088-c5f9-4cf3-a607-61d2e1c658bf, authenticated=true, clientVersion=5.0-SNAPSHOT, creationTime=1622931777720, latest clientAttributes=lastStatisticsCollectionTime=1622931792723,enterprise=false,clientType=JVM,clientVersion=5.0-SNAPSHOT,clusterConnectionTimestamp=1622931777716,clientAddress=127.0.0.1,clientName=hz.client_69,credentials.principal=null,os.committedVirtualMemorySize=31333642240,os.freePhysicalMemorySize=162795298816,os.freeSwapSpaceSize=2924552192,os.maxFileDescriptorCount=120000,os.openFileDescriptorCount=564,os.processCpuTime=400840000000,os.systemLoadAverage=4.04,os.totalPhysicalMemorySize=405449981952,os.totalSwapSpaceSize=4294963200,runtime.availableProcessors=8,runtime.freeMemory=214761992,runtime.maxMemory=2147483648,runtime.totalMemory=716177408,runtime.uptime=429391,runtime.usedMemory=501415416, labels=[]} 22:23:18,074 INFO |when_manyJobs_then_sortedBySubmissionTime| - [MockServer] pool-200-thread-1 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:40001, connection: MockedNodeConnection{ remoteAddress = [127.0.0.1]:40001, localAddress = [127.0.0.1]:5702, connectionId = 1} 22:23:18,074 INFO |when_manyJobs_then_sortedBySubmissionTime| - [LifecycleService] Thread-2692 - hz.client_69 [dev] [5.0-SNAPSHOT] [5.0-SNAPSHOT] HazelcastClient 5.0-SNAPSHOT (20210605 - e3352af) is CLIENT_DISCONNECTED 22:23:18,074 INFO |when_manyJobs_then_sortedBySubmissionTime| - [ClientEndpointManager] hz.frosty_montalcini.event-710 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Destroying ClientEndpoint{connection=MockedNodeConnection{ remoteAddress = [127.0.0.1]:40001, localAddress = [127.0.0.1]:5702, connectionId = 1}, clientUuid='6596a088-c5f9-4cf3-a607-61d2e1c658bf, authenticated=true, clientVersion=5.0-SNAPSHOT, creationTime=1622931777714, latest clientAttributes=lastStatisticsCollectionTime=1622931797723,enterprise=false,clientType=JVM,clientVersion=5.0-SNAPSHOT,clusterConnectionTimestamp=1622931777713,clientAddress=127.0.0.1,clientName=hz.client_69,credentials.principal=null,os.committedVirtualMemorySize=31333642240,os.freePhysicalMemorySize=162797019136,os.freeSwapSpaceSize=2924552192,os.maxFileDescriptorCount=120000,os.openFileDescriptorCount=564,os.processCpuTime=402050000000,os.systemLoadAverage=3.87,os.totalPhysicalMemorySize=405449981952,os.totalSwapSpaceSize=4294963200,runtime.availableProcessors=8,runtime.freeMemory=194479800,runtime.maxMemory=2147483648,runtime.totalMemory=716177408,runtime.uptime=434391,runtime.usedMemory=521697608, labels=[]} 22:23:18,074 WARN |when_manyJobs_then_sortedBySubmissionTime| - [TwoWayBlockableExecutor] pool-210-thread-1 - Dropping incoming runnable since other end closed. Server Closed EOF. MockedClientConnection{localAddress=[127.0.0.1]:40002, super=ClientConnection{alive=false, connectionId=2, channel=null, remoteAddress=[127.0.0.1]:5701, lastReadTime=2021-06-05 22:23:17.753, lastWriteTime=2021-06-05 22:23:17.752, closedTime=2021-06-05 22:23:18.074, connected server version=5.0-SNAPSHOT}} 22:23:18,074 WARN |when_manyJobs_then_sortedBySubmissionTime| - [TwoWayBlockableExecutor] pool-200-thread-1 - Dropping incoming runnable since other end closed. Server Closed EOF. MockedClientConnection{localAddress=[127.0.0.1]:40001, super=ClientConnection{alive=false, connectionId=1, channel=null, remoteAddress=[127.0.0.1]:5702, lastReadTime=2021-06-05 22:23:17.725, lastWriteTime=2021-06-05 22:23:17.725, closedTime=2021-06-05 22:23:18.074, connected server version=5.0-SNAPSHOT}} 22:23:18,075 INFO |when_manyJobs_then_sortedBySubmissionTime| - [LifecycleService] Thread-2692 - hz.client_69 [dev] [5.0-SNAPSHOT] [5.0-SNAPSHOT] HazelcastClient 5.0-SNAPSHOT (20210605 - e3352af) is SHUTDOWN 22:23:18,075 INFO |when_manyJobs_then_sortedBySubmissionTime| - [LifecycleService] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5702 is SHUTTING_DOWN 22:23:18,075 WARN |when_manyJobs_then_sortedBySubmissionTime| - [Node] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Terminating forcefully... 22:23:18,075 INFO |when_manyJobs_then_sortedBySubmissionTime| - [Node] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Shutting down connection manager... 22:23:18,075 INFO |when_manyJobs_then_sortedBySubmissionTime| - [MockServer] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5702, alive=false} 22:23:18,075 INFO |when_manyJobs_then_sortedBySubmissionTime| - [MockServer] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5701, alive=false} 22:23:18,075 INFO |when_manyJobs_then_sortedBySubmissionTime| - [MembershipManager] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Removing Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 22:23:18,076 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 0', execution 064b-d046-2f02-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,077 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 2', execution 064b-d046-2f06-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,077 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 3', execution 064b-d046-2f08-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,076 INFO |when_manyJobs_then_sortedBySubmissionTime| - [ClusterService] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Members {size:1, ver:3} [ Member [127.0.0.1]:5701 - 24b45c95-fd78-4e7a-bdf0-9b30bf99663a this ] 22:23:18,077 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 1', execution 064b-d046-2f04-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,077 INFO |when_manyJobs_then_sortedBySubmissionTime| - [Node] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Shutting down node engine... 22:23:18,077 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-11 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 5', execution 064b-d046-2f0c-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,077 INFO |when_manyJobs_then_sortedBySubmissionTime| - [TransactionManagerService] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Committing/rolling-back live transactions of [127.0.0.1]:5702, UUID: ef994296-1f47-465f-833d-3d080037e1e0 22:23:18,077 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-13 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 6', execution 064b-d046-2f0e-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,077 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 8', execution 064b-d046-2f12-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,077 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 4', execution 064b-d046-2f0a-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,077 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 9', execution 064b-d046-2f13-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,077 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 7', execution 064b-d046-2f10-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,078 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completing job 'job 5', execution 064b-d046-2f0c-0001 locally. Reason: Member [127.0.0.1]:5702 left the cluster 22:23:18,078 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completing job 'job 6', execution 064b-d046-2f0e-0001 locally. Reason: Member [127.0.0.1]:5702 left the cluster 22:23:18,078 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completing job 'job 3', execution 064b-d046-2f08-0001 locally. Reason: Member [127.0.0.1]:5702 left the cluster 22:23:18,078 ERROR |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 0', execution 064b-d046-2f02-0001: some TerminateExecutionOperation invocations failed, execution might remain stuck: [MemberInfo{address=[127.0.0.1]:5702, uuid=ef994296-1f47-465f-833d-3d080037e1e0, liteMember=false, memberListJoinVersion=2}=com.hazelcast.spi.exception.TargetNotMemberException: Not Member! target: [127.0.0.1]:5702, partitionId: -1, operation: com.hazelcast.jet.impl.operation.TerminateExecutionOperation, service: hz:impl:jetService, MemberInfo{address=[127.0.0.1]:5701, uuid=24b45c95-fd78-4e7a-bdf0-9b30bf99663a, liteMember=false, memberListJoinVersion=1}=null] 22:23:18,078 ERROR |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-11 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 1', execution 064b-d046-2f04-0001: some TerminateExecutionOperation invocations failed, execution might remain stuck: [MemberInfo{address=[127.0.0.1]:5702, uuid=ef994296-1f47-465f-833d-3d080037e1e0, liteMember=false, memberListJoinVersion=2}=com.hazelcast.spi.exception.TargetNotMemberException: Not Member! target: [127.0.0.1]:5702, partitionId: -1, operation: com.hazelcast.jet.impl.operation.TerminateExecutionOperation, service: hz:impl:jetService, MemberInfo{address=[127.0.0.1]:5701, uuid=24b45c95-fd78-4e7a-bdf0-9b30bf99663a, liteMember=false, memberListJoinVersion=1}=null] 22:23:18,078 ERROR |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 9', execution 064b-d046-2f13-0001: some TerminateExecutionOperation invocations failed, execution might remain stuck: [MemberInfo{address=[127.0.0.1]:5702, uuid=ef994296-1f47-465f-833d-3d080037e1e0, liteMember=false, memberListJoinVersion=2}=com.hazelcast.spi.exception.TargetNotMemberException: Not Member! target: [127.0.0.1]:5702, partitionId: -1, operation: com.hazelcast.jet.impl.operation.TerminateExecutionOperation, service: hz:impl:jetService, MemberInfo{address=[127.0.0.1]:5701, uuid=24b45c95-fd78-4e7a-bdf0-9b30bf99663a, liteMember=false, memberListJoinVersion=1}=null] 22:23:18,078 ERROR |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 3', execution 064b-d046-2f08-0001: some TerminateExecutionOperation invocations failed, execution might remain stuck: [MemberInfo{address=[127.0.0.1]:5702, uuid=ef994296-1f47-465f-833d-3d080037e1e0, liteMember=false, memberListJoinVersion=2}=com.hazelcast.spi.exception.TargetNotMemberException: Not Member! target: [127.0.0.1]:5702, partitionId: -1, operation: com.hazelcast.jet.impl.operation.TerminateExecutionOperation, service: hz:impl:jetService, MemberInfo{address=[127.0.0.1]:5701, uuid=24b45c95-fd78-4e7a-bdf0-9b30bf99663a, liteMember=false, memberListJoinVersion=1}=null] 22:23:18,078 ERROR |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-11 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 7', execution 064b-d046-2f10-0001: some TerminateExecutionOperation invocations failed, execution might remain stuck: [MemberInfo{address=[127.0.0.1]:5702, uuid=ef994296-1f47-465f-833d-3d080037e1e0, liteMember=false, memberListJoinVersion=2}=com.hazelcast.spi.exception.TargetNotMemberException: Not Member! target: [127.0.0.1]:5702, partitionId: -1, operation: com.hazelcast.jet.impl.operation.TerminateExecutionOperation, service: hz:impl:jetService, MemberInfo{address=[127.0.0.1]:5701, uuid=24b45c95-fd78-4e7a-bdf0-9b30bf99663a, liteMember=false, memberListJoinVersion=1}=null] 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 0', execution 064b-d046-2f02-0001 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 5', execution 064b-d046-2f0c-0001 22:23:18,078 ERROR |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 5', execution 064b-d046-2f0c-0001: some TerminateExecutionOperation invocations failed, execution might remain stuck: [MemberInfo{address=[127.0.0.1]:5702, uuid=ef994296-1f47-465f-833d-3d080037e1e0, liteMember=false, memberListJoinVersion=2}=com.hazelcast.spi.exception.TargetNotMemberException: Not Member! target: [127.0.0.1]:5702, partitionId: -1, operation: com.hazelcast.jet.impl.operation.TerminateExecutionOperation, service: hz:impl:jetService, MemberInfo{address=[127.0.0.1]:5701, uuid=24b45c95-fd78-4e7a-bdf0-9b30bf99663a, liteMember=false, memberListJoinVersion=1}=null] 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 9', execution 064b-d046-2f13-0001 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-11 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 6', execution 064b-d046-2f0e-0001 22:23:18,078 ERROR |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 2', execution 064b-d046-2f06-0001: some TerminateExecutionOperation invocations failed, execution might remain stuck: [MemberInfo{address=[127.0.0.1]:5702, uuid=ef994296-1f47-465f-833d-3d080037e1e0, liteMember=false, memberListJoinVersion=2}=com.hazelcast.spi.exception.TargetNotMemberException: Not Member! target: [127.0.0.1]:5702, partitionId: -1, operation: com.hazelcast.jet.impl.operation.TerminateExecutionOperation, service: hz:impl:jetService, MemberInfo{address=[127.0.0.1]:5701, uuid=24b45c95-fd78-4e7a-bdf0-9b30bf99663a, liteMember=false, memberListJoinVersion=1}=null] 22:23:18,078 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completing job 'job 4', execution 064b-d046-2f0a-0001 locally. Reason: Member [127.0.0.1]:5702 left the cluster 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-13 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 3', execution 064b-d046-2f08-0001 22:23:18,078 ERROR |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 8', execution 064b-d046-2f12-0001: some TerminateExecutionOperation invocations failed, execution might remain stuck: [MemberInfo{address=[127.0.0.1]:5702, uuid=ef994296-1f47-465f-833d-3d080037e1e0, liteMember=false, memberListJoinVersion=2}=com.hazelcast.spi.exception.TargetNotMemberException: Not Member! target: [127.0.0.1]:5702, partitionId: -1, operation: com.hazelcast.jet.impl.operation.TerminateExecutionOperation, service: hz:impl:jetService, MemberInfo{address=[127.0.0.1]:5701, uuid=24b45c95-fd78-4e7a-bdf0-9b30bf99663a, liteMember=false, memberListJoinVersion=1}=null] 22:23:18,078 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completing job 'job 1', execution 064b-d046-2f04-0001 locally. Reason: Member [127.0.0.1]:5702 left the cluster 22:23:18,078 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completing job 'job 5', execution 064b-d046-2f0c-0001 locally. Reason: Node is shutting down 22:23:18,078 ERROR |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 4', execution 064b-d046-2f0a-0001: some TerminateExecutionOperation invocations failed, execution might remain stuck: [MemberInfo{address=[127.0.0.1]:5702, uuid=ef994296-1f47-465f-833d-3d080037e1e0, liteMember=false, memberListJoinVersion=2}=com.hazelcast.spi.exception.TargetNotMemberException: Not Member! target: [127.0.0.1]:5702, partitionId: -1, operation: com.hazelcast.jet.impl.operation.TerminateExecutionOperation, service: hz:impl:jetService, MemberInfo{address=[127.0.0.1]:5701, uuid=24b45c95-fd78-4e7a-bdf0-9b30bf99663a, liteMember=false, memberListJoinVersion=1}=null] 22:23:18,078 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completing job 'job 2', execution 064b-d046-2f06-0001 locally. Reason: Member [127.0.0.1]:5702 left the cluster 22:23:18,079 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 7', execution 064b-d046-2f10-0001 22:23:18,079 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completing job 'job 8', execution 064b-d046-2f12-0001 locally. Reason: Member [127.0.0.1]:5702 left the cluster 22:23:18,079 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 1', execution 064b-d046-2f04-0001 22:23:18,079 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completing job 'job 6', execution 064b-d046-2f0e-0001 locally. Reason: Node is shutting down 22:23:18,079 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completing job 'job 3', execution 064b-d046-2f08-0001 locally. Reason: Node is shutting down 22:23:18,079 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completing job 'job 4', execution 064b-d046-2f0a-0001 locally. Reason: Node is shutting down 22:23:18,079 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completing job 'job 1', execution 064b-d046-2f04-0001 locally. Reason: Node is shutting down 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 5', execution 064b-d046-2f0c-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.lambda$onMemberRemoved$5(JobExecutionService.java:252) ~[classes/:?] at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) ~[?:?] at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) ~[?:?] at java.util.concurrent.ConcurrentHashMap$ValueSpliterator.forEachRemaining(ConcurrentHashMap.java:3605) ~[?:?] at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?] at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[?:?] at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) ~[?:?] at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) ~[?:?] at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?] at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497) ~[?:?] at com.hazelcast.jet.impl.JobExecutionService.onMemberRemoved(JobExecutionService.java:249) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.memberRemoved(JetServiceBackend.java:274) ~[classes/:?] at com.hazelcast.internal.cluster.impl.MembershipManager.lambda$sendMembershipEventNotifications$1(MembershipManager.java:830) ~[classes/:?] at com.hazelcast.internal.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:217) ~[classes/:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76) ~[classes/:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) ~[classes/:?] 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 0', execution 064b-d046-2f02-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution(JobExecutionService.java:619) ~[classes/:?] at com.hazelcast.jet.impl.operation.TerminateExecutionOperation.run(TerminateExecutionOperation.java:58) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:189) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:272) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:248) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:213) ~[classes/:?] at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.run(OperationExecutorImpl.java:411) ~[classes/:?] at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.runOrExecute(OperationExecutorImpl.java:438) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvokeLocal(Invocation.java:600) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:579) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) ~[classes/:?] at com.hazelcast.jet.impl.MasterContext.invokeOnParticipant(MasterContext.java:284) ~[classes/:?] at com.hazelcast.jet.impl.MasterContext.invokeOnParticipants(MasterContext.java:267) ~[classes/:?] at com.hazelcast.jet.impl.MasterJobContext.lambda$cancelExecutionInvocations$16(MasterJobContext.java:591) ~[classes/:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76) ~[classes/:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) ~[classes/:?] 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-13 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 3', execution 064b-d046-2f08-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution(JobExecutionService.java:619) ~[classes/:?] at com.hazelcast.jet.impl.operation.TerminateExecutionOperation.run(TerminateExecutionOperation.java:58) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:189) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:272) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:248) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:213) ~[classes/:?] at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.run(OperationExecutorImpl.java:411) ~[classes/:?] at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.runOrExecute(OperationExecutorImpl.java:438) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvokeLocal(Invocation.java:600) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:579) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) ~[classes/:?] at com.hazelcast.jet.impl.MasterContext.invokeOnParticipant(MasterContext.java:284) ~[classes/:?] at com.hazelcast.jet.impl.MasterContext.invokeOnParticipants(MasterContext.java:267) ~[classes/:?] at com.hazelcast.jet.impl.MasterJobContext.lambda$cancelExecutionInvocations$16(MasterJobContext.java:591) ~[classes/:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76) ~[classes/:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) ~[classes/:?] 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 9', execution 064b-d046-2f13-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution(JobExecutionService.java:619) ~[classes/:?] at com.hazelcast.jet.impl.operation.TerminateExecutionOperation.run(TerminateExecutionOperation.java:58) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:189) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:272) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:248) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:213) ~[classes/:?] at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.run(OperationExecutorImpl.java:411) ~[classes/:?] at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.runOrExecute(OperationExecutorImpl.java:438) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvokeLocal(Invocation.java:600) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:579) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) ~[classes/:?] at com.hazelcast.jet.impl.MasterContext.invokeOnParticipant(MasterContext.java:284) ~[classes/:?] at com.hazelcast.jet.impl.MasterContext.invokeOnParticipants(MasterContext.java:267) ~[classes/:?] at com.hazelcast.jet.impl.MasterJobContext.lambda$cancelExecutionInvocations$16(MasterJobContext.java:591) ~[classes/:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76) ~[classes/:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) ~[classes/:?] 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-11 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 6', execution 064b-d046-2f0e-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.lambda$onMemberRemoved$5(JobExecutionService.java:252) ~[classes/:?] at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) ~[?:?] at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) ~[?:?] at java.util.concurrent.ConcurrentHashMap$ValueSpliterator.forEachRemaining(ConcurrentHashMap.java:3605) ~[?:?] at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?] at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[?:?] at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) ~[?:?] at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) ~[?:?] at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?] at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497) ~[?:?] at com.hazelcast.jet.impl.JobExecutionService.onMemberRemoved(JobExecutionService.java:249) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.memberRemoved(JetServiceBackend.java:274) ~[classes/:?] at com.hazelcast.internal.cluster.impl.MembershipManager.lambda$sendMembershipEventNotifications$1(MembershipManager.java:830) ~[classes/:?] at com.hazelcast.internal.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:217) ~[classes/:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76) ~[classes/:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) ~[classes/:?] 22:23:18,079 DEBUG || - [MasterJobContext] ForkJoinPool.commonPool-worker-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 9', execution 064b-d046-2f13-0001 received response to StartExecutionOperation from [127.0.0.1]:5701: java.util.concurrent.CancellationException 22:23:18,079 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 1', execution 064b-d046-2f04-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Cause ...[truncated 23890 chars]... utor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76) ~[classes/:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) ~[classes/:?] 22:23:18,080 DEBUG || - [MasterJobContext] ForkJoinPool.commonPool-worker-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 2', execution 064b-d046-2f06-0001 has failures: [[127.0.0.1]:5702=com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster!, [127.0.0.1]:5701=java.util.concurrent.CancellationException] 22:23:18,080 DEBUG || - [MasterJobContext] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 8', execution 064b-d046-2f12-0001 received response to StartExecutionOperation from [127.0.0.1]:5701: java.util.concurrent.CancellationException 22:23:18,080 DEBUG || - [MasterJobContext] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 8', execution 064b-d046-2f12-0001 has failures: [[127.0.0.1]:5702=com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster!, [127.0.0.1]:5701=java.util.concurrent.CancellationException] 22:23:18,080 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobCoordinationService] hz.heuristic_montalcini.cached.thread-12 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Scheduling restart on master for job job 2 22:23:18,080 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 4', execution 064b-d046-2f0a-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution(JobExecutionService.java:619) ~[classes/:?] at com.hazelcast.jet.impl.operation.TerminateExecutionOperation.run(TerminateExecutionOperation.java:58) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:189) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:272) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:248) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:213) ~[classes/:?] at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.run(OperationExecutorImpl.java:411) ~[classes/:?] at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.runOrExecute(OperationExecutorImpl.java:438) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvokeLocal(Invocation.java:600) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:579) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) ~[classes/:?] at com.hazelcast.jet.impl.MasterContext.invokeOnParticipant(MasterContext.java:284) ~[classes/:?] at com.hazelcast.jet.impl.MasterContext.invokeOnParticipants(MasterContext.java:267) ~[classes/:?] at com.hazelcast.jet.impl.MasterJobContext.lambda$cancelExecutionInvocations$16(MasterJobContext.java:591) ~[classes/:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76) ~[classes/:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) ~[classes/:?] 22:23:18,080 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobCoordinationService] hz.heuristic_montalcini.cached.thread-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Scheduling restart on master for job job 8 22:23:18,080 DEBUG || - [MasterJobContext] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 4', execution 064b-d046-2f0a-0001 received response to StartExecutionOperation from [127.0.0.1]:5701: java.util.concurrent.CancellationException 22:23:18,080 DEBUG || - [MasterJobContext] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 4', execution 064b-d046-2f0a-0001 has failures: [[127.0.0.1]:5702=com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster!, [127.0.0.1]:5701=java.util.concurrent.CancellationException] 22:23:18,080 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobCoordinationService] hz.heuristic_montalcini.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Scheduling restart on master for job job 4 22:23:18,081 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 2', execution 064b-d046-2f06-0001 22:23:18,081 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 2', execution 064b-d046-2f06-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,084 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 3', execution 064b-d046-2f08-0001 22:23:18,084 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 3', execution 064b-d046-2f08-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,085 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 4', execution 064b-d046-2f0a-0001 22:23:18,085 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 4', execution 064b-d046-2f0a-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,086 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 5', execution 064b-d046-2f0c-0001 22:23:18,086 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 5', execution 064b-d046-2f0c-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,086 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 6', execution 064b-d046-2f0e-0001 22:23:18,086 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 6', execution 064b-d046-2f0e-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,087 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 7', execution 064b-d046-2f10-0001 22:23:18,087 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 1', execution 064b-d046-2f04-0001 22:23:18,087 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 7', execution 064b-d046-2f10-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,087 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 1', execution 064b-d046-2f04-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.lambda$run$0(TaskletExecutionService.java:373) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:373) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,088 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 8', execution 064b-d046-2f12-0001 22:23:18,088 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 8', execution 064b-d046-2f12-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,088 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 9', execution 064b-d046-2f13-0001 22:23:18,088 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 9', execution 064b-d046-2f13-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,089 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 0', execution 064b-d046-2f02-0001 22:23:18,089 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 0', execution 064b-d046-2f02-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.lambda$run$0(TaskletExecutionService.java:373) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:373) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,091 INFO |when_manyJobs_then_sortedBySubmissionTime| - [NodeExtension] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Destroying node NodeExtension. 22:23:18,091 INFO |when_manyJobs_then_sortedBySubmissionTime| - [Node] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Hazelcast Shutdown is completed in 16 ms. 22:23:18,091 INFO |when_manyJobs_then_sortedBySubmissionTime| - [LifecycleService] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5702 is SHUTDOWN 22:23:18,091 INFO |when_manyJobs_then_sortedBySubmissionTime| - [LifecycleService] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5701 is SHUTTING_DOWN 22:23:18,091 WARN |when_manyJobs_then_sortedBySubmissionTime| - [Node] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Terminating forcefully... 22:23:18,091 INFO |when_manyJobs_then_sortedBySubmissionTime| - [Node] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Shutting down connection manager... 22:23:18,091 INFO |when_manyJobs_then_sortedBySubmissionTime| - [Node] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Shutting down node engine... 22:23:18,094 INFO |when_manyJobs_then_sortedBySubmissionTime| - [NodeExtension] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Destroying node NodeExtension. 22:23:18,094 INFO |when_manyJobs_then_sortedBySubmissionTime| - [Node] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Hazelcast Shutdown is completed in 3 ms. 22:23:18,094 INFO |when_manyJobs_then_sortedBySubmissionTime| - [LifecycleService] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5701 is SHUTDOWN BuildInfo right after when_manyJobs_then_sortedBySubmissionTime(com.hazelcast.jet.impl.JobSummaryTest): BuildInfo{version='5.0-SNAPSHOT', build='20210605', buildNumber=20210605, revision=e3352af, enterprise=false, serializationVersion=1, jet=JetBuildInfo{version='5.0-SNAPSHOT', build='20210605', revision='e3352af'}} Hiccups measured while running test 'when_manyJobs_then_sortedBySubmissionTime(com.hazelcast.jet.impl.JobSummaryTest):' 22:22:55, accumulated pauses: 117 ms, max pause: 28 ms, pauses over 1000 ms: 0 22:23:00, accumulated pauses: 39 ms, max pause: 0 ms, pauses over 1000 ms: 0 22:23:05, accumulated pauses: 41 ms, max pause: 0 ms, pauses over 1000 ms: 0 22:23:10, accumulated pauses: 68 ms, max pause: 25 ms, pauses over 1000 ms: 0 22:23:15, accumulated pauses: 127 ms, max pause: 101 ms, pauses over 1000 ms: 0 ```
1.0
com.hazelcast.jet.impl.JobSummaryTest.when_manyJobs_then_sortedBySubmissionTime - _master_ (commit e3352af34221c58de14ed09dcde0edc9206098c8) Failed on Oracle JDK 11: http://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-master-OracleJDK11/271/testReport/com.hazelcast.jet.impl/JobSummaryTest/when_manyJobs_then_sortedBySubmissionTime/ Stacktrace: ``` org.junit.ComparisonFailure: expected:<job [7]> but was:<job [8]> at org.junit.Assert.assertEquals(Assert.java:117) at org.junit.Assert.assertEquals(Assert.java:146) at com.hazelcast.jet.impl.JobSummaryTest.lambda$when_manyJobs_then_sortedBySubmissionTime$4(JobSummaryTest.java:141) at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1249) at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1266) at com.hazelcast.jet.impl.JobSummaryTest.when_manyJobs_then_sortedBySubmissionTime(JobSummaryTest.java:133) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:115) at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:107) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.lang.Thread.run(Thread.java:834) ``` Standard output: ``` 22:23:02,793 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobRepository] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms 22:23:02,793 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobRepository] hz.heuristic_montalcini.cached.thread-5 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 7', execution 064b-d046-2f10-0001: not running or already running on all members 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 8', execution 064b-d046-2f12-0001: not running or already running on all members 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 5', execution 064b-d046-2f0c-0001: not running or already running on all members 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 6', execution 064b-d046-2f0e-0001: not running or already running on all members 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 3', execution 064b-d046-2f08-0001: not running or already running on all members 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 4', execution 064b-d046-2f0a-0001: not running or already running on all members 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 0', execution 064b-d046-2f02-0001: not running or already running on all members 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 1', execution 064b-d046-2f04-0001: not running or already running on all members 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 9', execution 064b-d046-2f13-0001: not running or already running on all members 22:23:07,606 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Not scaling up job 'job 2', execution 064b-d046-2f06-0001: not running or already running on all members 22:23:07,795 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobRepository] hz.heuristic_montalcini.cached.thread-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms 22:23:07,795 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobRepository] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms 22:23:12,796 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobRepository] hz.heuristic_montalcini.cached.thread-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms 22:23:12,796 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobRepository] hz.heuristic_montalcini.cached.thread-12 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms 22:23:17,798 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobRepository] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms 22:23:17,798 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobRepository] hz.heuristic_montalcini.cached.thread-8 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Job cleanup took 0ms 22:23:18,073 INFO |when_manyJobs_then_sortedBySubmissionTime| - [JetTestSupport] Thread-2673 - Terminating instanceFactory in JetTestSupport.@After 22:23:18,073 INFO |when_manyJobs_then_sortedBySubmissionTime| - [LifecycleService] Thread-2692 - hz.client_69 [dev] [5.0-SNAPSHOT] [5.0-SNAPSHOT] HazelcastClient 5.0-SNAPSHOT (20210605 - e3352af) is SHUTTING_DOWN 22:23:18,074 WARN |when_manyJobs_then_sortedBySubmissionTime| - [TestClientRegistry$MockedServerConnection] pool-210-thread-1 - Server connection closed: null 22:23:18,074 INFO |when_manyJobs_then_sortedBySubmissionTime| - [MockServer] pool-210-thread-1 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:40002, connection: MockedNodeConnection{ remoteAddress = [127.0.0.1]:40002, localAddress = [127.0.0.1]:5701, connectionId = 2} 22:23:18,074 INFO |when_manyJobs_then_sortedBySubmissionTime| - [ClientConnectionManager] Thread-2692 - hz.client_69 [dev] [5.0-SNAPSHOT] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5701:24b45c95-fd78-4e7a-bdf0-9b30bf99663a, connection: MockedClientConnection{localAddress=[127.0.0.1]:40002, super=ClientConnection{alive=false, connectionId=2, channel=null, remoteAddress=[127.0.0.1]:5701, lastReadTime=2021-06-05 22:23:17.753, lastWriteTime=2021-06-05 22:23:17.752, closedTime=2021-06-05 22:23:18.074, connected server version=5.0-SNAPSHOT}} 22:23:18,074 WARN |when_manyJobs_then_sortedBySubmissionTime| - [TestClientRegistry$MockedServerConnection] pool-200-thread-1 - Server connection closed: null 22:23:18,074 INFO |when_manyJobs_then_sortedBySubmissionTime| - [ClientConnectionManager] Thread-2692 - hz.client_69 [dev] [5.0-SNAPSHOT] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5702:ef994296-1f47-465f-833d-3d080037e1e0, connection: MockedClientConnection{localAddress=[127.0.0.1]:40001, super=ClientConnection{alive=false, connectionId=1, channel=null, remoteAddress=[127.0.0.1]:5702, lastReadTime=2021-06-05 22:23:17.725, lastWriteTime=2021-06-05 22:23:17.725, closedTime=2021-06-05 22:23:18.074, connected server version=5.0-SNAPSHOT}} 22:23:18,074 INFO |when_manyJobs_then_sortedBySubmissionTime| - [ClientEndpointManager] hz.heuristic_montalcini.event-706 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Destroying ClientEndpoint{connection=MockedNodeConnection{ remoteAddress = [127.0.0.1]:40002, localAddress = [127.0.0.1]:5701, connectionId = 2}, clientUuid='6596a088-c5f9-4cf3-a607-61d2e1c658bf, authenticated=true, clientVersion=5.0-SNAPSHOT, creationTime=1622931777720, latest clientAttributes=lastStatisticsCollectionTime=1622931792723,enterprise=false,clientType=JVM,clientVersion=5.0-SNAPSHOT,clusterConnectionTimestamp=1622931777716,clientAddress=127.0.0.1,clientName=hz.client_69,credentials.principal=null,os.committedVirtualMemorySize=31333642240,os.freePhysicalMemorySize=162795298816,os.freeSwapSpaceSize=2924552192,os.maxFileDescriptorCount=120000,os.openFileDescriptorCount=564,os.processCpuTime=400840000000,os.systemLoadAverage=4.04,os.totalPhysicalMemorySize=405449981952,os.totalSwapSpaceSize=4294963200,runtime.availableProcessors=8,runtime.freeMemory=214761992,runtime.maxMemory=2147483648,runtime.totalMemory=716177408,runtime.uptime=429391,runtime.usedMemory=501415416, labels=[]} 22:23:18,074 INFO |when_manyJobs_then_sortedBySubmissionTime| - [MockServer] pool-200-thread-1 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:40001, connection: MockedNodeConnection{ remoteAddress = [127.0.0.1]:40001, localAddress = [127.0.0.1]:5702, connectionId = 1} 22:23:18,074 INFO |when_manyJobs_then_sortedBySubmissionTime| - [LifecycleService] Thread-2692 - hz.client_69 [dev] [5.0-SNAPSHOT] [5.0-SNAPSHOT] HazelcastClient 5.0-SNAPSHOT (20210605 - e3352af) is CLIENT_DISCONNECTED 22:23:18,074 INFO |when_manyJobs_then_sortedBySubmissionTime| - [ClientEndpointManager] hz.frosty_montalcini.event-710 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Destroying ClientEndpoint{connection=MockedNodeConnection{ remoteAddress = [127.0.0.1]:40001, localAddress = [127.0.0.1]:5702, connectionId = 1}, clientUuid='6596a088-c5f9-4cf3-a607-61d2e1c658bf, authenticated=true, clientVersion=5.0-SNAPSHOT, creationTime=1622931777714, latest clientAttributes=lastStatisticsCollectionTime=1622931797723,enterprise=false,clientType=JVM,clientVersion=5.0-SNAPSHOT,clusterConnectionTimestamp=1622931777713,clientAddress=127.0.0.1,clientName=hz.client_69,credentials.principal=null,os.committedVirtualMemorySize=31333642240,os.freePhysicalMemorySize=162797019136,os.freeSwapSpaceSize=2924552192,os.maxFileDescriptorCount=120000,os.openFileDescriptorCount=564,os.processCpuTime=402050000000,os.systemLoadAverage=3.87,os.totalPhysicalMemorySize=405449981952,os.totalSwapSpaceSize=4294963200,runtime.availableProcessors=8,runtime.freeMemory=194479800,runtime.maxMemory=2147483648,runtime.totalMemory=716177408,runtime.uptime=434391,runtime.usedMemory=521697608, labels=[]} 22:23:18,074 WARN |when_manyJobs_then_sortedBySubmissionTime| - [TwoWayBlockableExecutor] pool-210-thread-1 - Dropping incoming runnable since other end closed. Server Closed EOF. MockedClientConnection{localAddress=[127.0.0.1]:40002, super=ClientConnection{alive=false, connectionId=2, channel=null, remoteAddress=[127.0.0.1]:5701, lastReadTime=2021-06-05 22:23:17.753, lastWriteTime=2021-06-05 22:23:17.752, closedTime=2021-06-05 22:23:18.074, connected server version=5.0-SNAPSHOT}} 22:23:18,074 WARN |when_manyJobs_then_sortedBySubmissionTime| - [TwoWayBlockableExecutor] pool-200-thread-1 - Dropping incoming runnable since other end closed. Server Closed EOF. MockedClientConnection{localAddress=[127.0.0.1]:40001, super=ClientConnection{alive=false, connectionId=1, channel=null, remoteAddress=[127.0.0.1]:5702, lastReadTime=2021-06-05 22:23:17.725, lastWriteTime=2021-06-05 22:23:17.725, closedTime=2021-06-05 22:23:18.074, connected server version=5.0-SNAPSHOT}} 22:23:18,075 INFO |when_manyJobs_then_sortedBySubmissionTime| - [LifecycleService] Thread-2692 - hz.client_69 [dev] [5.0-SNAPSHOT] [5.0-SNAPSHOT] HazelcastClient 5.0-SNAPSHOT (20210605 - e3352af) is SHUTDOWN 22:23:18,075 INFO |when_manyJobs_then_sortedBySubmissionTime| - [LifecycleService] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5702 is SHUTTING_DOWN 22:23:18,075 WARN |when_manyJobs_then_sortedBySubmissionTime| - [Node] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Terminating forcefully... 22:23:18,075 INFO |when_manyJobs_then_sortedBySubmissionTime| - [Node] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Shutting down connection manager... 22:23:18,075 INFO |when_manyJobs_then_sortedBySubmissionTime| - [MockServer] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5702, connection: MockConnection{localEndpoint=[127.0.0.1]:5701, remoteEndpoint=[127.0.0.1]:5702, alive=false} 22:23:18,075 INFO |when_manyJobs_then_sortedBySubmissionTime| - [MockServer] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Removed connection to endpoint: [127.0.0.1]:5701, connection: MockConnection{localEndpoint=[127.0.0.1]:5702, remoteEndpoint=[127.0.0.1]:5701, alive=false} 22:23:18,075 INFO |when_manyJobs_then_sortedBySubmissionTime| - [MembershipManager] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Removing Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 22:23:18,076 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 0', execution 064b-d046-2f02-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,077 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 2', execution 064b-d046-2f06-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,077 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 3', execution 064b-d046-2f08-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,076 INFO |when_manyJobs_then_sortedBySubmissionTime| - [ClusterService] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Members {size:1, ver:3} [ Member [127.0.0.1]:5701 - 24b45c95-fd78-4e7a-bdf0-9b30bf99663a this ] 22:23:18,077 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 1', execution 064b-d046-2f04-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,077 INFO |when_manyJobs_then_sortedBySubmissionTime| - [Node] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Shutting down node engine... 22:23:18,077 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-11 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 5', execution 064b-d046-2f0c-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,077 INFO |when_manyJobs_then_sortedBySubmissionTime| - [TransactionManagerService] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Committing/rolling-back live transactions of [127.0.0.1]:5702, UUID: ef994296-1f47-465f-833d-3d080037e1e0 22:23:18,077 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-13 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 6', execution 064b-d046-2f0e-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,077 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 8', execution 064b-d046-2f12-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,077 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 4', execution 064b-d046-2f0a-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,077 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 9', execution 064b-d046-2f13-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,077 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 7', execution 064b-d046-2f10-0001 received response to StartExecutionOperation from [127.0.0.1]:5702: com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster! 22:23:18,078 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completing job 'job 5', execution 064b-d046-2f0c-0001 locally. Reason: Member [127.0.0.1]:5702 left the cluster 22:23:18,078 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completing job 'job 6', execution 064b-d046-2f0e-0001 locally. Reason: Member [127.0.0.1]:5702 left the cluster 22:23:18,078 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completing job 'job 3', execution 064b-d046-2f08-0001 locally. Reason: Member [127.0.0.1]:5702 left the cluster 22:23:18,078 ERROR |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 0', execution 064b-d046-2f02-0001: some TerminateExecutionOperation invocations failed, execution might remain stuck: [MemberInfo{address=[127.0.0.1]:5702, uuid=ef994296-1f47-465f-833d-3d080037e1e0, liteMember=false, memberListJoinVersion=2}=com.hazelcast.spi.exception.TargetNotMemberException: Not Member! target: [127.0.0.1]:5702, partitionId: -1, operation: com.hazelcast.jet.impl.operation.TerminateExecutionOperation, service: hz:impl:jetService, MemberInfo{address=[127.0.0.1]:5701, uuid=24b45c95-fd78-4e7a-bdf0-9b30bf99663a, liteMember=false, memberListJoinVersion=1}=null] 22:23:18,078 ERROR |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-11 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 1', execution 064b-d046-2f04-0001: some TerminateExecutionOperation invocations failed, execution might remain stuck: [MemberInfo{address=[127.0.0.1]:5702, uuid=ef994296-1f47-465f-833d-3d080037e1e0, liteMember=false, memberListJoinVersion=2}=com.hazelcast.spi.exception.TargetNotMemberException: Not Member! target: [127.0.0.1]:5702, partitionId: -1, operation: com.hazelcast.jet.impl.operation.TerminateExecutionOperation, service: hz:impl:jetService, MemberInfo{address=[127.0.0.1]:5701, uuid=24b45c95-fd78-4e7a-bdf0-9b30bf99663a, liteMember=false, memberListJoinVersion=1}=null] 22:23:18,078 ERROR |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 9', execution 064b-d046-2f13-0001: some TerminateExecutionOperation invocations failed, execution might remain stuck: [MemberInfo{address=[127.0.0.1]:5702, uuid=ef994296-1f47-465f-833d-3d080037e1e0, liteMember=false, memberListJoinVersion=2}=com.hazelcast.spi.exception.TargetNotMemberException: Not Member! target: [127.0.0.1]:5702, partitionId: -1, operation: com.hazelcast.jet.impl.operation.TerminateExecutionOperation, service: hz:impl:jetService, MemberInfo{address=[127.0.0.1]:5701, uuid=24b45c95-fd78-4e7a-bdf0-9b30bf99663a, liteMember=false, memberListJoinVersion=1}=null] 22:23:18,078 ERROR |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 3', execution 064b-d046-2f08-0001: some TerminateExecutionOperation invocations failed, execution might remain stuck: [MemberInfo{address=[127.0.0.1]:5702, uuid=ef994296-1f47-465f-833d-3d080037e1e0, liteMember=false, memberListJoinVersion=2}=com.hazelcast.spi.exception.TargetNotMemberException: Not Member! target: [127.0.0.1]:5702, partitionId: -1, operation: com.hazelcast.jet.impl.operation.TerminateExecutionOperation, service: hz:impl:jetService, MemberInfo{address=[127.0.0.1]:5701, uuid=24b45c95-fd78-4e7a-bdf0-9b30bf99663a, liteMember=false, memberListJoinVersion=1}=null] 22:23:18,078 ERROR |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-11 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 7', execution 064b-d046-2f10-0001: some TerminateExecutionOperation invocations failed, execution might remain stuck: [MemberInfo{address=[127.0.0.1]:5702, uuid=ef994296-1f47-465f-833d-3d080037e1e0, liteMember=false, memberListJoinVersion=2}=com.hazelcast.spi.exception.TargetNotMemberException: Not Member! target: [127.0.0.1]:5702, partitionId: -1, operation: com.hazelcast.jet.impl.operation.TerminateExecutionOperation, service: hz:impl:jetService, MemberInfo{address=[127.0.0.1]:5701, uuid=24b45c95-fd78-4e7a-bdf0-9b30bf99663a, liteMember=false, memberListJoinVersion=1}=null] 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 0', execution 064b-d046-2f02-0001 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 5', execution 064b-d046-2f0c-0001 22:23:18,078 ERROR |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 5', execution 064b-d046-2f0c-0001: some TerminateExecutionOperation invocations failed, execution might remain stuck: [MemberInfo{address=[127.0.0.1]:5702, uuid=ef994296-1f47-465f-833d-3d080037e1e0, liteMember=false, memberListJoinVersion=2}=com.hazelcast.spi.exception.TargetNotMemberException: Not Member! target: [127.0.0.1]:5702, partitionId: -1, operation: com.hazelcast.jet.impl.operation.TerminateExecutionOperation, service: hz:impl:jetService, MemberInfo{address=[127.0.0.1]:5701, uuid=24b45c95-fd78-4e7a-bdf0-9b30bf99663a, liteMember=false, memberListJoinVersion=1}=null] 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 9', execution 064b-d046-2f13-0001 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-11 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 6', execution 064b-d046-2f0e-0001 22:23:18,078 ERROR |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 2', execution 064b-d046-2f06-0001: some TerminateExecutionOperation invocations failed, execution might remain stuck: [MemberInfo{address=[127.0.0.1]:5702, uuid=ef994296-1f47-465f-833d-3d080037e1e0, liteMember=false, memberListJoinVersion=2}=com.hazelcast.spi.exception.TargetNotMemberException: Not Member! target: [127.0.0.1]:5702, partitionId: -1, operation: com.hazelcast.jet.impl.operation.TerminateExecutionOperation, service: hz:impl:jetService, MemberInfo{address=[127.0.0.1]:5701, uuid=24b45c95-fd78-4e7a-bdf0-9b30bf99663a, liteMember=false, memberListJoinVersion=1}=null] 22:23:18,078 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completing job 'job 4', execution 064b-d046-2f0a-0001 locally. Reason: Member [127.0.0.1]:5702 left the cluster 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-13 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 3', execution 064b-d046-2f08-0001 22:23:18,078 ERROR |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 8', execution 064b-d046-2f12-0001: some TerminateExecutionOperation invocations failed, execution might remain stuck: [MemberInfo{address=[127.0.0.1]:5702, uuid=ef994296-1f47-465f-833d-3d080037e1e0, liteMember=false, memberListJoinVersion=2}=com.hazelcast.spi.exception.TargetNotMemberException: Not Member! target: [127.0.0.1]:5702, partitionId: -1, operation: com.hazelcast.jet.impl.operation.TerminateExecutionOperation, service: hz:impl:jetService, MemberInfo{address=[127.0.0.1]:5701, uuid=24b45c95-fd78-4e7a-bdf0-9b30bf99663a, liteMember=false, memberListJoinVersion=1}=null] 22:23:18,078 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completing job 'job 1', execution 064b-d046-2f04-0001 locally. Reason: Member [127.0.0.1]:5702 left the cluster 22:23:18,078 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completing job 'job 5', execution 064b-d046-2f0c-0001 locally. Reason: Node is shutting down 22:23:18,078 ERROR |when_manyJobs_then_sortedBySubmissionTime| - [MasterJobContext] ForkJoinPool.commonPool-worker-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 4', execution 064b-d046-2f0a-0001: some TerminateExecutionOperation invocations failed, execution might remain stuck: [MemberInfo{address=[127.0.0.1]:5702, uuid=ef994296-1f47-465f-833d-3d080037e1e0, liteMember=false, memberListJoinVersion=2}=com.hazelcast.spi.exception.TargetNotMemberException: Not Member! target: [127.0.0.1]:5702, partitionId: -1, operation: com.hazelcast.jet.impl.operation.TerminateExecutionOperation, service: hz:impl:jetService, MemberInfo{address=[127.0.0.1]:5701, uuid=24b45c95-fd78-4e7a-bdf0-9b30bf99663a, liteMember=false, memberListJoinVersion=1}=null] 22:23:18,078 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completing job 'job 2', execution 064b-d046-2f06-0001 locally. Reason: Member [127.0.0.1]:5702 left the cluster 22:23:18,079 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 7', execution 064b-d046-2f10-0001 22:23:18,079 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] hz.heuristic_montalcini.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completing job 'job 8', execution 064b-d046-2f12-0001 locally. Reason: Member [127.0.0.1]:5702 left the cluster 22:23:18,079 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 1', execution 064b-d046-2f04-0001 22:23:18,079 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completing job 'job 6', execution 064b-d046-2f0e-0001 locally. Reason: Node is shutting down 22:23:18,079 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completing job 'job 3', execution 064b-d046-2f08-0001 locally. Reason: Node is shutting down 22:23:18,079 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completing job 'job 4', execution 064b-d046-2f0a-0001 locally. Reason: Node is shutting down 22:23:18,079 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobExecutionService] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completing job 'job 1', execution 064b-d046-2f04-0001 locally. Reason: Node is shutting down 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 5', execution 064b-d046-2f0c-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.lambda$onMemberRemoved$5(JobExecutionService.java:252) ~[classes/:?] at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) ~[?:?] at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) ~[?:?] at java.util.concurrent.ConcurrentHashMap$ValueSpliterator.forEachRemaining(ConcurrentHashMap.java:3605) ~[?:?] at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?] at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[?:?] at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) ~[?:?] at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) ~[?:?] at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?] at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497) ~[?:?] at com.hazelcast.jet.impl.JobExecutionService.onMemberRemoved(JobExecutionService.java:249) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.memberRemoved(JetServiceBackend.java:274) ~[classes/:?] at com.hazelcast.internal.cluster.impl.MembershipManager.lambda$sendMembershipEventNotifications$1(MembershipManager.java:830) ~[classes/:?] at com.hazelcast.internal.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:217) ~[classes/:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76) ~[classes/:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) ~[classes/:?] 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 0', execution 064b-d046-2f02-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution(JobExecutionService.java:619) ~[classes/:?] at com.hazelcast.jet.impl.operation.TerminateExecutionOperation.run(TerminateExecutionOperation.java:58) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:189) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:272) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:248) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:213) ~[classes/:?] at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.run(OperationExecutorImpl.java:411) ~[classes/:?] at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.runOrExecute(OperationExecutorImpl.java:438) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvokeLocal(Invocation.java:600) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:579) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) ~[classes/:?] at com.hazelcast.jet.impl.MasterContext.invokeOnParticipant(MasterContext.java:284) ~[classes/:?] at com.hazelcast.jet.impl.MasterContext.invokeOnParticipants(MasterContext.java:267) ~[classes/:?] at com.hazelcast.jet.impl.MasterJobContext.lambda$cancelExecutionInvocations$16(MasterJobContext.java:591) ~[classes/:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76) ~[classes/:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) ~[classes/:?] 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-13 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 3', execution 064b-d046-2f08-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution(JobExecutionService.java:619) ~[classes/:?] at com.hazelcast.jet.impl.operation.TerminateExecutionOperation.run(TerminateExecutionOperation.java:58) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:189) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:272) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:248) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:213) ~[classes/:?] at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.run(OperationExecutorImpl.java:411) ~[classes/:?] at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.runOrExecute(OperationExecutorImpl.java:438) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvokeLocal(Invocation.java:600) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:579) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) ~[classes/:?] at com.hazelcast.jet.impl.MasterContext.invokeOnParticipant(MasterContext.java:284) ~[classes/:?] at com.hazelcast.jet.impl.MasterContext.invokeOnParticipants(MasterContext.java:267) ~[classes/:?] at com.hazelcast.jet.impl.MasterJobContext.lambda$cancelExecutionInvocations$16(MasterJobContext.java:591) ~[classes/:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76) ~[classes/:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) ~[classes/:?] 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 9', execution 064b-d046-2f13-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution(JobExecutionService.java:619) ~[classes/:?] at com.hazelcast.jet.impl.operation.TerminateExecutionOperation.run(TerminateExecutionOperation.java:58) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:189) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:272) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:248) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:213) ~[classes/:?] at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.run(OperationExecutorImpl.java:411) ~[classes/:?] at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.runOrExecute(OperationExecutorImpl.java:438) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvokeLocal(Invocation.java:600) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:579) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) ~[classes/:?] at com.hazelcast.jet.impl.MasterContext.invokeOnParticipant(MasterContext.java:284) ~[classes/:?] at com.hazelcast.jet.impl.MasterContext.invokeOnParticipants(MasterContext.java:267) ~[classes/:?] at com.hazelcast.jet.impl.MasterJobContext.lambda$cancelExecutionInvocations$16(MasterJobContext.java:591) ~[classes/:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76) ~[classes/:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) ~[classes/:?] 22:23:18,078 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-11 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 6', execution 064b-d046-2f0e-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.lambda$onMemberRemoved$5(JobExecutionService.java:252) ~[classes/:?] at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) ~[?:?] at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) ~[?:?] at java.util.concurrent.ConcurrentHashMap$ValueSpliterator.forEachRemaining(ConcurrentHashMap.java:3605) ~[?:?] at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?] at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) ~[?:?] at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) ~[?:?] at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) ~[?:?] at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?] at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497) ~[?:?] at com.hazelcast.jet.impl.JobExecutionService.onMemberRemoved(JobExecutionService.java:249) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.memberRemoved(JetServiceBackend.java:274) ~[classes/:?] at com.hazelcast.internal.cluster.impl.MembershipManager.lambda$sendMembershipEventNotifications$1(MembershipManager.java:830) ~[classes/:?] at com.hazelcast.internal.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:217) ~[classes/:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76) ~[classes/:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) ~[classes/:?] 22:23:18,079 DEBUG || - [MasterJobContext] ForkJoinPool.commonPool-worker-7 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 9', execution 064b-d046-2f13-0001 received response to StartExecutionOperation from [127.0.0.1]:5701: java.util.concurrent.CancellationException 22:23:18,079 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 1', execution 064b-d046-2f04-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Cause ...[truncated 23890 chars]... utor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76) ~[classes/:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) ~[classes/:?] 22:23:18,080 DEBUG || - [MasterJobContext] ForkJoinPool.commonPool-worker-15 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 2', execution 064b-d046-2f06-0001 has failures: [[127.0.0.1]:5702=com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster!, [127.0.0.1]:5701=java.util.concurrent.CancellationException] 22:23:18,080 DEBUG || - [MasterJobContext] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 8', execution 064b-d046-2f12-0001 received response to StartExecutionOperation from [127.0.0.1]:5701: java.util.concurrent.CancellationException 22:23:18,080 DEBUG || - [MasterJobContext] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 8', execution 064b-d046-2f12-0001 has failures: [[127.0.0.1]:5702=com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster!, [127.0.0.1]:5701=java.util.concurrent.CancellationException] 22:23:18,080 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobCoordinationService] hz.heuristic_montalcini.cached.thread-12 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Scheduling restart on master for job job 2 22:23:18,080 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 4', execution 064b-d046-2f0a-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution(JobExecutionService.java:619) ~[classes/:?] at com.hazelcast.jet.impl.operation.TerminateExecutionOperation.run(TerminateExecutionOperation.java:58) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:189) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:272) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:248) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:213) ~[classes/:?] at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.run(OperationExecutorImpl.java:411) ~[classes/:?] at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.runOrExecute(OperationExecutorImpl.java:438) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvokeLocal(Invocation.java:600) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:579) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:540) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:240) ~[classes/:?] at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:59) ~[classes/:?] at com.hazelcast.jet.impl.MasterContext.invokeOnParticipant(MasterContext.java:284) ~[classes/:?] at com.hazelcast.jet.impl.MasterContext.invokeOnParticipants(MasterContext.java:267) ~[classes/:?] at com.hazelcast.jet.impl.MasterJobContext.lambda$cancelExecutionInvocations$16(MasterJobContext.java:591) ~[classes/:?] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76) ~[classes/:?] at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) ~[classes/:?] 22:23:18,080 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobCoordinationService] hz.heuristic_montalcini.cached.thread-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Scheduling restart on master for job job 8 22:23:18,080 DEBUG || - [MasterJobContext] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] job 'job 4', execution 064b-d046-2f0a-0001 received response to StartExecutionOperation from [127.0.0.1]:5701: java.util.concurrent.CancellationException 22:23:18,080 DEBUG || - [MasterJobContext] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Execution of job 'job 4', execution 064b-d046-2f0a-0001 has failures: [[127.0.0.1]:5702=com.hazelcast.core.MemberLeftException: Member [127.0.0.1]:5702 - ef994296-1f47-465f-833d-3d080037e1e0 has left cluster!, [127.0.0.1]:5701=java.util.concurrent.CancellationException] 22:23:18,080 DEBUG |when_manyJobs_then_sortedBySubmissionTime| - [JobCoordinationService] hz.heuristic_montalcini.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Scheduling restart on master for job job 4 22:23:18,081 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 2', execution 064b-d046-2f06-0001 22:23:18,081 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 2', execution 064b-d046-2f06-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,084 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 3', execution 064b-d046-2f08-0001 22:23:18,084 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 3', execution 064b-d046-2f08-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,085 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 4', execution 064b-d046-2f0a-0001 22:23:18,085 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 4', execution 064b-d046-2f0a-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,086 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 5', execution 064b-d046-2f0c-0001 22:23:18,086 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 5', execution 064b-d046-2f0c-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,086 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 6', execution 064b-d046-2f0e-0001 22:23:18,086 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 6', execution 064b-d046-2f0e-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,087 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 7', execution 064b-d046-2f10-0001 22:23:18,087 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 1', execution 064b-d046-2f04-0001 22:23:18,087 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-3 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 7', execution 064b-d046-2f10-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,087 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 1', execution 064b-d046-2f04-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.lambda$run$0(TaskletExecutionService.java:373) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:373) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,088 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 8', execution 064b-d046-2f12-0001 22:23:18,088 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 8', execution 064b-d046-2f12-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,088 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 9', execution 064b-d046-2f13-0001 22:23:18,088 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 9', execution 064b-d046-2f13-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:417) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:403) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:353) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,089 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Completed execution of job 'job 0', execution 064b-d046-2f02-0001 22:23:18,089 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-5 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Execution of job 'job 0', execution 064b-d046-2f02-0001 completed with failure java.util.concurrent.CompletionException: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[?:?] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) ~[?:?] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?] at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:486) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.lambda$run$0(TaskletExecutionService.java:373) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:803) ~[?:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:373) ~[classes/:?] at java.lang.Thread.run(Thread.java:834) ~[?:?] Caused by: java.util.concurrent.CancellationException at java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396) ~[?:?] at com.hazelcast.jet.impl.execution.ExecutionContext.terminateExecution(ExecutionContext.java:281) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.terminateExecution0(JobExecutionService.java:623) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.cancelAllExecutions(JobExecutionService.java:236) ~[classes/:?] at com.hazelcast.jet.impl.JobExecutionService.shutdown(JobExecutionService.java:221) ~[classes/:?] at com.hazelcast.jet.impl.JetServiceBackend.shutdown(JetServiceBackend.java:188) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdownService(ServiceManagerImpl.java:307) ~[classes/:?] at com.hazelcast.spi.impl.servicemanager.impl.ServiceManagerImpl.shutdown(ServiceManagerImpl.java:298) ~[classes/:?] at com.hazelcast.spi.impl.NodeEngineImpl.shutdown(NodeEngineImpl.java:515) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdownServices(Node.java:594) ~[classes/:?] at com.hazelcast.instance.impl.Node.shutdown(Node.java:533) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.shutdown(LifecycleServiceImpl.java:101) ~[classes/:?] at com.hazelcast.instance.impl.LifecycleServiceImpl.terminate(LifecycleServiceImpl.java:89) ~[classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.shutdown(TestNodeRegistry.java:123) ~[test-classes/:?] at com.hazelcast.test.mocknetwork.TestNodeRegistry.terminate(TestNodeRegistry.java:113) ~[test-classes/:?] at com.hazelcast.test.TestHazelcastInstanceFactory.terminateAll(TestHazelcastInstanceFactory.java:331) ~[test-classes/:?] at com.hazelcast.client.test.TestHazelcastFactory.terminateAll(TestHazelcastFactory.java:183) ~[test-classes/:?] at com.hazelcast.jet.JetTestInstanceFactory.terminateAll(JetTestInstanceFactory.java:129) ~[test-classes/:?] at com.hazelcast.jet.core.JetTestSupport.lambda$shutdownFactory$0(JetTestSupport.java:85) ~[test-classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] ... 1 more 22:23:18,091 INFO |when_manyJobs_then_sortedBySubmissionTime| - [NodeExtension] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Destroying node NodeExtension. 22:23:18,091 INFO |when_manyJobs_then_sortedBySubmissionTime| - [Node] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] Hazelcast Shutdown is completed in 16 ms. 22:23:18,091 INFO |when_manyJobs_then_sortedBySubmissionTime| - [LifecycleService] Thread-2692 - [127.0.0.1]:5702 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5702 is SHUTDOWN 22:23:18,091 INFO |when_manyJobs_then_sortedBySubmissionTime| - [LifecycleService] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5701 is SHUTTING_DOWN 22:23:18,091 WARN |when_manyJobs_then_sortedBySubmissionTime| - [Node] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Terminating forcefully... 22:23:18,091 INFO |when_manyJobs_then_sortedBySubmissionTime| - [Node] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Shutting down connection manager... 22:23:18,091 INFO |when_manyJobs_then_sortedBySubmissionTime| - [Node] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Shutting down node engine... 22:23:18,094 INFO |when_manyJobs_then_sortedBySubmissionTime| - [NodeExtension] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Destroying node NodeExtension. 22:23:18,094 INFO |when_manyJobs_then_sortedBySubmissionTime| - [Node] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] Hazelcast Shutdown is completed in 3 ms. 22:23:18,094 INFO |when_manyJobs_then_sortedBySubmissionTime| - [LifecycleService] Thread-2692 - [127.0.0.1]:5701 [dev] [5.0-SNAPSHOT] [127.0.0.1]:5701 is SHUTDOWN BuildInfo right after when_manyJobs_then_sortedBySubmissionTime(com.hazelcast.jet.impl.JobSummaryTest): BuildInfo{version='5.0-SNAPSHOT', build='20210605', buildNumber=20210605, revision=e3352af, enterprise=false, serializationVersion=1, jet=JetBuildInfo{version='5.0-SNAPSHOT', build='20210605', revision='e3352af'}} Hiccups measured while running test 'when_manyJobs_then_sortedBySubmissionTime(com.hazelcast.jet.impl.JobSummaryTest):' 22:22:55, accumulated pauses: 117 ms, max pause: 28 ms, pauses over 1000 ms: 0 22:23:00, accumulated pauses: 39 ms, max pause: 0 ms, pauses over 1000 ms: 0 22:23:05, accumulated pauses: 41 ms, max pause: 0 ms, pauses over 1000 ms: 0 22:23:10, accumulated pauses: 68 ms, max pause: 25 ms, pauses over 1000 ms: 0 22:23:15, accumulated pauses: 127 ms, max pause: 101 ms, pauses over 1000 ms: 0 ```
non_process
com hazelcast jet impl jobsummarytest when manyjobs then sortedbysubmissiontime master commit failed on oracle jdk stacktrace org junit comparisonfailure expected but was at org junit assert assertequals assert java at org junit assert assertequals assert java at com hazelcast jet impl jobsummarytest lambda when manyjobs then sortedbysubmissiontime jobsummarytest java at com hazelcast test hazelcasttestsupport asserttrueeventually hazelcasttestsupport java at com hazelcast test hazelcasttestsupport asserttrueeventually hazelcasttestsupport java at com hazelcast jet impl jobsummarytest when manyjobs then sortedbysubmissiontime jobsummarytest java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at java base java util concurrent futuretask run futuretask java at java base java lang thread run thread java standard output debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread job cleanup took debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread job cleanup took debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread not scaling up job job execution not running or already running on all members debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread not scaling up job job execution not running or already running on all members debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread not scaling up job job execution not running or already running on all members debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread not scaling up job job execution not running or already running on all members debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread not scaling up job job execution not running or already running on all members debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread not scaling up job job execution not running or already running on all members debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread not scaling up job job execution not running or already running on all members debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread not scaling up job job execution not running or already running on all members debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread not scaling up job job execution not running or already running on all members debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread not scaling up job job execution not running or already running on all members debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread job cleanup took debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread job cleanup took debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread job cleanup took debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread job cleanup took debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread job cleanup took debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread job cleanup took info when manyjobs then sortedbysubmissiontime thread terminating instancefactory in jettestsupport after info when manyjobs then sortedbysubmissiontime thread hz client hazelcastclient snapshot is shutting down warn when manyjobs then sortedbysubmissiontime pool thread server connection closed null info when manyjobs then sortedbysubmissiontime pool thread removed connection to endpoint connection mockednodeconnection remoteaddress localaddress connectionid info when manyjobs then sortedbysubmissiontime thread hz client removed connection to endpoint connection mockedclientconnection localaddress super clientconnection alive false connectionid channel null remoteaddress lastreadtime lastwritetime closedtime connected server version snapshot warn when manyjobs then sortedbysubmissiontime pool thread server connection closed null info when manyjobs then sortedbysubmissiontime thread hz client removed connection to endpoint connection mockedclientconnection localaddress super clientconnection alive false connectionid channel null remoteaddress lastreadtime lastwritetime closedtime connected server version snapshot info when manyjobs then sortedbysubmissiontime hz heuristic montalcini event destroying clientendpoint connection mockednodeconnection remoteaddress localaddress connectionid clientuuid authenticated true clientversion snapshot creationtime latest clientattributes laststatisticscollectiontime enterprise false clienttype jvm clientversion snapshot clusterconnectiontimestamp clientaddress clientname hz client credentials principal null os committedvirtualmemorysize os freephysicalmemorysize os freeswapspacesize os maxfiledescriptorcount os openfiledescriptorcount os processcputime os systemloadaverage os totalphysicalmemorysize os totalswapspacesize runtime availableprocessors runtime freememory runtime maxmemory runtime totalmemory runtime uptime runtime usedmemory labels info when manyjobs then sortedbysubmissiontime pool thread removed connection to endpoint connection mockednodeconnection remoteaddress localaddress connectionid info when manyjobs then sortedbysubmissiontime thread hz client hazelcastclient snapshot is client disconnected info when manyjobs then sortedbysubmissiontime hz frosty montalcini event destroying clientendpoint connection mockednodeconnection remoteaddress localaddress connectionid clientuuid authenticated true clientversion snapshot creationtime latest clientattributes laststatisticscollectiontime enterprise false clienttype jvm clientversion snapshot clusterconnectiontimestamp clientaddress clientname hz client credentials principal null os committedvirtualmemorysize os freephysicalmemorysize os freeswapspacesize os maxfiledescriptorcount os openfiledescriptorcount os processcputime os systemloadaverage os totalphysicalmemorysize os totalswapspacesize runtime availableprocessors runtime freememory runtime maxmemory runtime totalmemory runtime uptime runtime usedmemory labels warn when manyjobs then sortedbysubmissiontime pool thread dropping incoming runnable since other end closed server closed eof mockedclientconnection localaddress super clientconnection alive false connectionid channel null remoteaddress lastreadtime lastwritetime closedtime connected server version snapshot warn when manyjobs then sortedbysubmissiontime pool thread dropping incoming runnable since other end closed server closed eof mockedclientconnection localaddress super clientconnection alive false connectionid channel null remoteaddress lastreadtime lastwritetime closedtime connected server version snapshot info when manyjobs then sortedbysubmissiontime thread hz client hazelcastclient snapshot is shutdown info when manyjobs then sortedbysubmissiontime thread is shutting down warn when manyjobs then sortedbysubmissiontime thread terminating forcefully info when manyjobs then sortedbysubmissiontime thread shutting down connection manager info when manyjobs then sortedbysubmissiontime thread removed connection to endpoint connection mockconnection localendpoint remoteendpoint alive false info when manyjobs then sortedbysubmissiontime thread removed connection to endpoint connection mockconnection localendpoint remoteendpoint alive false info when manyjobs then sortedbysubmissiontime thread removing member debug when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution received response to startexecutionoperation from com hazelcast core memberleftexception member has left cluster debug when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution received response to startexecutionoperation from com hazelcast core memberleftexception member has left cluster debug when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution received response to startexecutionoperation from com hazelcast core memberleftexception member has left cluster info when manyjobs then sortedbysubmissiontime thread members size ver member this debug when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution received response to startexecutionoperation from com hazelcast core memberleftexception member has left cluster info when manyjobs then sortedbysubmissiontime thread shutting down node engine debug when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution received response to startexecutionoperation from com hazelcast core memberleftexception member has left cluster info when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread committing rolling back live transactions of uuid debug when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution received response to startexecutionoperation from com hazelcast core memberleftexception member has left cluster debug when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution received response to startexecutionoperation from com hazelcast core memberleftexception member has left cluster debug when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution received response to startexecutionoperation from com hazelcast core memberleftexception member has left cluster debug when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution received response to startexecutionoperation from com hazelcast core memberleftexception member has left cluster debug when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution received response to startexecutionoperation from com hazelcast core memberleftexception member has left cluster debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread completing job job execution locally reason member left the cluster debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread completing job job execution locally reason member left the cluster debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread completing job job execution locally reason member left the cluster error when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution some terminateexecutionoperation invocations failed execution might remain stuck uuid litemember false memberlistjoinversion com hazelcast spi exception targetnotmemberexception not member target partitionid operation com hazelcast jet impl operation terminateexecutionoperation service hz impl jetservice memberinfo address uuid litemember false memberlistjoinversion null error when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution some terminateexecutionoperation invocations failed execution might remain stuck uuid litemember false memberlistjoinversion com hazelcast spi exception targetnotmemberexception not member target partitionid operation com hazelcast jet impl operation terminateexecutionoperation service hz impl jetservice memberinfo address uuid litemember false memberlistjoinversion null error when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution some terminateexecutionoperation invocations failed execution might remain stuck uuid litemember false memberlistjoinversion com hazelcast spi exception targetnotmemberexception not member target partitionid operation com hazelcast jet impl operation terminateexecutionoperation service hz impl jetservice memberinfo address uuid litemember false memberlistjoinversion null error when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution some terminateexecutionoperation invocations failed execution might remain stuck uuid litemember false memberlistjoinversion com hazelcast spi exception targetnotmemberexception not member target partitionid operation com hazelcast jet impl operation terminateexecutionoperation service hz impl jetservice memberinfo address uuid litemember false memberlistjoinversion null error when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution some terminateexecutionoperation invocations failed execution might remain stuck uuid litemember false memberlistjoinversion com hazelcast spi exception targetnotmemberexception not member target partitionid operation com hazelcast jet impl operation terminateexecutionoperation service hz impl jetservice memberinfo address uuid litemember false memberlistjoinversion null debug forkjoinpool commonpool worker completed execution of job job execution debug forkjoinpool commonpool worker completed execution of job job execution error when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution some terminateexecutionoperation invocations failed execution might remain stuck uuid litemember false memberlistjoinversion com hazelcast spi exception targetnotmemberexception not member target partitionid operation com hazelcast jet impl operation terminateexecutionoperation service hz impl jetservice memberinfo address uuid litemember false memberlistjoinversion null debug forkjoinpool commonpool worker completed execution of job job execution debug forkjoinpool commonpool worker completed execution of job job execution error when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution some terminateexecutionoperation invocations failed execution might remain stuck uuid litemember false memberlistjoinversion com hazelcast spi exception targetnotmemberexception not member target partitionid operation com hazelcast jet impl operation terminateexecutionoperation service hz impl jetservice memberinfo address uuid litemember false memberlistjoinversion null debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread completing job job execution locally reason member left the cluster debug forkjoinpool commonpool worker completed execution of job job execution error when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution some terminateexecutionoperation invocations failed execution might remain stuck uuid litemember false memberlistjoinversion com hazelcast spi exception targetnotmemberexception not member target partitionid operation com hazelcast jet impl operation terminateexecutionoperation service hz impl jetservice memberinfo address uuid litemember false memberlistjoinversion null debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread completing job job execution locally reason member left the cluster debug when manyjobs then sortedbysubmissiontime thread completing job job execution locally reason node is shutting down error when manyjobs then sortedbysubmissiontime forkjoinpool commonpool worker job job execution some terminateexecutionoperation invocations failed execution might remain stuck uuid litemember false memberlistjoinversion com hazelcast spi exception targetnotmemberexception not member target partitionid operation com hazelcast jet impl operation terminateexecutionoperation service hz impl jetservice memberinfo address uuid litemember false memberlistjoinversion null debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread completing job job execution locally reason member left the cluster debug forkjoinpool commonpool worker completed execution of job job execution debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread completing job job execution locally reason member left the cluster debug forkjoinpool commonpool worker completed execution of job job execution debug when manyjobs then sortedbysubmissiontime thread completing job job execution locally reason node is shutting down debug when manyjobs then sortedbysubmissiontime thread completing job job execution locally reason node is shutting down debug when manyjobs then sortedbysubmissiontime thread completing job job execution locally reason node is shutting down debug when manyjobs then sortedbysubmissiontime thread completing job job execution locally reason node is shutting down debug forkjoinpool commonpool worker execution of job job execution completed with failure java util concurrent completionexception java util concurrent cancellationexception at java util concurrent completablefuture encodethrowable completablefuture java at java util concurrent completablefuture completethrowable completablefuture java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at com hazelcast jet impl util noncompletablefuture internalcompleteexceptionally noncompletablefuture java at com hazelcast jet impl execution taskletexecutionservice executiontracker taskletdone taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker dismisstasklet taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker runtasklet taskletexecutionservice java at java util concurrent copyonwritearraylist foreach copyonwritearraylist java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker run taskletexecutionservice java at java lang thread run thread java caused by java util concurrent cancellationexception at java util concurrent completablefuture cancel completablefuture java at com hazelcast jet impl execution executioncontext terminateexecution executioncontext java at com hazelcast jet impl jobexecutionservice jobexecutionservice java at com hazelcast jet impl jobexecutionservice lambda onmemberremoved jobexecutionservice java at java util stream foreachops foreachop ofref accept foreachops java at java util stream referencepipeline accept referencepipeline java at java util concurrent concurrenthashmap valuespliterator foreachremaining concurrenthashmap java at java util stream abstractpipeline copyinto abstractpipeline java at java util stream abstractpipeline wrapandcopyinto abstractpipeline java at java util stream foreachops foreachop evaluatesequential foreachops java at java util stream foreachops foreachop ofref evaluatesequential foreachops java at java util stream abstractpipeline evaluate abstractpipeline java at java util stream referencepipeline foreach referencepipeline java at com hazelcast jet impl jobexecutionservice onmemberremoved jobexecutionservice java at com hazelcast jet impl jetservicebackend memberremoved jetservicebackend java at com hazelcast internal cluster impl membershipmanager lambda sendmembershipeventnotifications membershipmanager java at com hazelcast internal util executor cachedexecutorservicedelegate worker run cachedexecutorservicedelegate java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java at com hazelcast internal util executor hazelcastmanagedthread executerun hazelcastmanagedthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java debug forkjoinpool commonpool worker execution of job job execution completed with failure java util concurrent completionexception java util concurrent cancellationexception at java util concurrent completablefuture encodethrowable completablefuture java at java util concurrent completablefuture completethrowable completablefuture java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at com hazelcast jet impl util noncompletablefuture internalcompleteexceptionally noncompletablefuture java at com hazelcast jet impl execution taskletexecutionservice executiontracker taskletdone taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker dismisstasklet taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker runtasklet taskletexecutionservice java at java util concurrent copyonwritearraylist foreach copyonwritearraylist java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker run taskletexecutionservice java at java lang thread run thread java caused by java util concurrent cancellationexception at java util concurrent completablefuture cancel completablefuture java at com hazelcast jet impl execution executioncontext terminateexecution executioncontext java at com hazelcast jet impl jobexecutionservice jobexecutionservice java at com hazelcast jet impl jobexecutionservice terminateexecution jobexecutionservice java at com hazelcast jet impl operation terminateexecutionoperation run terminateexecutionoperation java at com hazelcast spi impl operationservice operation call operation java at com hazelcast spi impl operationservice impl operationrunnerimpl call operationrunnerimpl java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationexecutor impl operationexecutorimpl run operationexecutorimpl java at com hazelcast spi impl operationexecutor impl operationexecutorimpl runorexecute operationexecutorimpl java at com hazelcast spi impl operationservice impl invocation doinvokelocal invocation java at com hazelcast spi impl operationservice impl invocation doinvoke invocation java at com hazelcast spi impl operationservice impl invocation invocation java at com hazelcast spi impl operationservice impl invocation invoke invocation java at com hazelcast spi impl operationservice impl invocationbuilderimpl invoke invocationbuilderimpl java at com hazelcast jet impl mastercontext invokeonparticipant mastercontext java at com hazelcast jet impl mastercontext invokeonparticipants mastercontext java at com hazelcast jet impl masterjobcontext lambda cancelexecutioninvocations masterjobcontext java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java at com hazelcast internal util executor hazelcastmanagedthread executerun hazelcastmanagedthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java debug forkjoinpool commonpool worker execution of job job execution completed with failure java util concurrent completionexception java util concurrent cancellationexception at java util concurrent completablefuture encodethrowable completablefuture java at java util concurrent completablefuture completethrowable completablefuture java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at com hazelcast jet impl util noncompletablefuture internalcompleteexceptionally noncompletablefuture java at com hazelcast jet impl execution taskletexecutionservice executiontracker taskletdone taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker dismisstasklet taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker runtasklet taskletexecutionservice java at java util concurrent copyonwritearraylist foreach copyonwritearraylist java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker run taskletexecutionservice java at java lang thread run thread java caused by java util concurrent cancellationexception at java util concurrent completablefuture cancel completablefuture java at com hazelcast jet impl execution executioncontext terminateexecution executioncontext java at com hazelcast jet impl jobexecutionservice jobexecutionservice java at com hazelcast jet impl jobexecutionservice terminateexecution jobexecutionservice java at com hazelcast jet impl operation terminateexecutionoperation run terminateexecutionoperation java at com hazelcast spi impl operationservice operation call operation java at com hazelcast spi impl operationservice impl operationrunnerimpl call operationrunnerimpl java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationexecutor impl operationexecutorimpl run operationexecutorimpl java at com hazelcast spi impl operationexecutor impl operationexecutorimpl runorexecute operationexecutorimpl java at com hazelcast spi impl operationservice impl invocation doinvokelocal invocation java at com hazelcast spi impl operationservice impl invocation doinvoke invocation java at com hazelcast spi impl operationservice impl invocation invocation java at com hazelcast spi impl operationservice impl invocation invoke invocation java at com hazelcast spi impl operationservice impl invocationbuilderimpl invoke invocationbuilderimpl java at com hazelcast jet impl mastercontext invokeonparticipant mastercontext java at com hazelcast jet impl mastercontext invokeonparticipants mastercontext java at com hazelcast jet impl masterjobcontext lambda cancelexecutioninvocations masterjobcontext java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java at com hazelcast internal util executor hazelcastmanagedthread executerun hazelcastmanagedthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java debug forkjoinpool commonpool worker execution of job job execution completed with failure java util concurrent completionexception java util concurrent cancellationexception at java util concurrent completablefuture encodethrowable completablefuture java at java util concurrent completablefuture completethrowable completablefuture java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at com hazelcast jet impl util noncompletablefuture internalcompleteexceptionally noncompletablefuture java at com hazelcast jet impl execution taskletexecutionservice executiontracker taskletdone taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker dismisstasklet taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker runtasklet taskletexecutionservice java at java util concurrent copyonwritearraylist foreach copyonwritearraylist java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker run taskletexecutionservice java at java lang thread run thread java caused by java util concurrent cancellationexception at java util concurrent completablefuture cancel completablefuture java at com hazelcast jet impl execution executioncontext terminateexecution executioncontext java at com hazelcast jet impl jobexecutionservice jobexecutionservice java at com hazelcast jet impl jobexecutionservice terminateexecution jobexecutionservice java at com hazelcast jet impl operation terminateexecutionoperation run terminateexecutionoperation java at com hazelcast spi impl operationservice operation call operation java at com hazelcast spi impl operationservice impl operationrunnerimpl call operationrunnerimpl java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationexecutor impl operationexecutorimpl run operationexecutorimpl java at com hazelcast spi impl operationexecutor impl operationexecutorimpl runorexecute operationexecutorimpl java at com hazelcast spi impl operationservice impl invocation doinvokelocal invocation java at com hazelcast spi impl operationservice impl invocation doinvoke invocation java at com hazelcast spi impl operationservice impl invocation invocation java at com hazelcast spi impl operationservice impl invocation invoke invocation java at com hazelcast spi impl operationservice impl invocationbuilderimpl invoke invocationbuilderimpl java at com hazelcast jet impl mastercontext invokeonparticipant mastercontext java at com hazelcast jet impl mastercontext invokeonparticipants mastercontext java at com hazelcast jet impl masterjobcontext lambda cancelexecutioninvocations masterjobcontext java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java at com hazelcast internal util executor hazelcastmanagedthread executerun hazelcastmanagedthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java debug forkjoinpool commonpool worker execution of job job execution completed with failure java util concurrent completionexception java util concurrent cancellationexception at java util concurrent completablefuture encodethrowable completablefuture java at java util concurrent completablefuture completethrowable completablefuture java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at com hazelcast jet impl util noncompletablefuture internalcompleteexceptionally noncompletablefuture java at com hazelcast jet impl execution taskletexecutionservice executiontracker taskletdone taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker dismisstasklet taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker runtasklet taskletexecutionservice java at java util concurrent copyonwritearraylist foreach copyonwritearraylist java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker run taskletexecutionservice java at java lang thread run thread java caused by java util concurrent cancellationexception at java util concurrent completablefuture cancel completablefuture java at com hazelcast jet impl execution executioncontext terminateexecution executioncontext java at com hazelcast jet impl jobexecutionservice jobexecutionservice java at com hazelcast jet impl jobexecutionservice lambda onmemberremoved jobexecutionservice java at java util stream foreachops foreachop ofref accept foreachops java at java util stream referencepipeline accept referencepipeline java at java util concurrent concurrenthashmap valuespliterator foreachremaining concurrenthashmap java at java util stream abstractpipeline copyinto abstractpipeline java at java util stream abstractpipeline wrapandcopyinto abstractpipeline java at java util stream foreachops foreachop evaluatesequential foreachops java at java util stream foreachops foreachop ofref evaluatesequential foreachops java at java util stream abstractpipeline evaluate abstractpipeline java at java util stream referencepipeline foreach referencepipeline java at com hazelcast jet impl jobexecutionservice onmemberremoved jobexecutionservice java at com hazelcast jet impl jetservicebackend memberremoved jetservicebackend java at com hazelcast internal cluster impl membershipmanager lambda sendmembershipeventnotifications membershipmanager java at com hazelcast internal util executor cachedexecutorservicedelegate worker run cachedexecutorservicedelegate java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java at com hazelcast internal util executor hazelcastmanagedthread executerun hazelcastmanagedthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java debug forkjoinpool commonpool worker job job execution received response to startexecutionoperation from java util concurrent cancellationexception debug forkjoinpool commonpool worker execution of job job execution completed with failure java util concurrent completionexception java util concurrent cancellationexception at java util concurrent completablefuture encodethrowable completablefuture java at java util concurrent completablefuture completethrowable completablefuture java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at com hazelcast jet impl util noncompletablefuture internalcompleteexceptionally noncompletablefuture java at com hazelcast jet impl execution taskletexecutionservice executiontracker taskletdone taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker dismisstasklet taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker runtasklet taskletexecutionservice java at java util concurrent copyonwritearraylist foreach copyonwritearraylist java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker run taskletexecutionservice java at java lang thread run thread java cause utor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java at com hazelcast internal util executor hazelcastmanagedthread executerun hazelcastmanagedthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java debug forkjoinpool commonpool worker execution of job job execution has failures com hazelcast core memberleftexception member has left cluster java util concurrent cancellationexception debug forkjoinpool commonpool worker job job execution received response to startexecutionoperation from java util concurrent cancellationexception debug forkjoinpool commonpool worker execution of job job execution has failures com hazelcast core memberleftexception member has left cluster java util concurrent cancellationexception debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread scheduling restart on master for job job debug forkjoinpool commonpool worker execution of job job execution completed with failure java util concurrent completionexception java util concurrent cancellationexception at java util concurrent completablefuture encodethrowable completablefuture java at java util concurrent completablefuture completethrowable completablefuture java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at com hazelcast jet impl util noncompletablefuture internalcompleteexceptionally noncompletablefuture java at com hazelcast jet impl execution taskletexecutionservice executiontracker taskletdone taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker dismisstasklet taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker runtasklet taskletexecutionservice java at java util concurrent copyonwritearraylist foreach copyonwritearraylist java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker run taskletexecutionservice java at java lang thread run thread java caused by java util concurrent cancellationexception at java util concurrent completablefuture cancel completablefuture java at com hazelcast jet impl execution executioncontext terminateexecution executioncontext java at com hazelcast jet impl jobexecutionservice jobexecutionservice java at com hazelcast jet impl jobexecutionservice terminateexecution jobexecutionservice java at com hazelcast jet impl operation terminateexecutionoperation run terminateexecutionoperation java at com hazelcast spi impl operationservice operation call operation java at com hazelcast spi impl operationservice impl operationrunnerimpl call operationrunnerimpl java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationexecutor impl operationexecutorimpl run operationexecutorimpl java at com hazelcast spi impl operationexecutor impl operationexecutorimpl runorexecute operationexecutorimpl java at com hazelcast spi impl operationservice impl invocation doinvokelocal invocation java at com hazelcast spi impl operationservice impl invocation doinvoke invocation java at com hazelcast spi impl operationservice impl invocation invocation java at com hazelcast spi impl operationservice impl invocation invoke invocation java at com hazelcast spi impl operationservice impl invocationbuilderimpl invoke invocationbuilderimpl java at com hazelcast jet impl mastercontext invokeonparticipant mastercontext java at com hazelcast jet impl mastercontext invokeonparticipants mastercontext java at com hazelcast jet impl masterjobcontext lambda cancelexecutioninvocations masterjobcontext java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java at com hazelcast internal util executor hazelcastmanagedthread executerun hazelcastmanagedthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread scheduling restart on master for job job debug forkjoinpool commonpool worker job job execution received response to startexecutionoperation from java util concurrent cancellationexception debug forkjoinpool commonpool worker execution of job job execution has failures com hazelcast core memberleftexception member has left cluster java util concurrent cancellationexception debug when manyjobs then sortedbysubmissiontime hz heuristic montalcini cached thread scheduling restart on master for job job debug forkjoinpool commonpool worker completed execution of job job execution debug forkjoinpool commonpool worker execution of job job execution completed with failure java util concurrent completionexception java util concurrent cancellationexception at java util concurrent completablefuture encodethrowable completablefuture java at java util concurrent completablefuture completethrowable completablefuture java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at com hazelcast jet impl util noncompletablefuture internalcompleteexceptionally noncompletablefuture java at com hazelcast jet impl execution taskletexecutionservice executiontracker taskletdone taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker dismisstasklet taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker runtasklet taskletexecutionservice java at java util concurrent copyonwritearraylist foreach copyonwritearraylist java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker run taskletexecutionservice java at java lang thread run thread java caused by java util concurrent cancellationexception at java util concurrent completablefuture cancel completablefuture java at com hazelcast jet impl execution executioncontext terminateexecution executioncontext java at com hazelcast jet impl jobexecutionservice jobexecutionservice java at com hazelcast jet impl jobexecutionservice cancelallexecutions jobexecutionservice java at com hazelcast jet impl jobexecutionservice shutdown jobexecutionservice java at com hazelcast jet impl jetservicebackend shutdown jetservicebackend java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdownservice servicemanagerimpl java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdown servicemanagerimpl java at com hazelcast spi impl nodeengineimpl shutdown nodeengineimpl java at com hazelcast instance impl node shutdownservices node java at com hazelcast instance impl node shutdown node java at com hazelcast instance impl lifecycleserviceimpl shutdown lifecycleserviceimpl java at com hazelcast instance impl lifecycleserviceimpl terminate lifecycleserviceimpl java at com hazelcast test mocknetwork testnoderegistry shutdown testnoderegistry java at com hazelcast test mocknetwork testnoderegistry terminate testnoderegistry java at com hazelcast test testhazelcastinstancefactory terminateall testhazelcastinstancefactory java at com hazelcast client test testhazelcastfactory terminateall testhazelcastfactory java at com hazelcast jet jettestinstancefactory terminateall jettestinstancefactory java at com hazelcast jet core jettestsupport lambda shutdownfactory jettestsupport java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java more debug forkjoinpool commonpool worker completed execution of job job execution debug forkjoinpool commonpool worker execution of job job execution completed with failure java util concurrent completionexception java util concurrent cancellationexception at java util concurrent completablefuture encodethrowable completablefuture java at java util concurrent completablefuture completethrowable completablefuture java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at com hazelcast jet impl util noncompletablefuture internalcompleteexceptionally noncompletablefuture java at com hazelcast jet impl execution taskletexecutionservice executiontracker taskletdone taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker dismisstasklet taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker runtasklet taskletexecutionservice java at java util concurrent copyonwritearraylist foreach copyonwritearraylist java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker run taskletexecutionservice java at java lang thread run thread java caused by java util concurrent cancellationexception at java util concurrent completablefuture cancel completablefuture java at com hazelcast jet impl execution executioncontext terminateexecution executioncontext java at com hazelcast jet impl jobexecutionservice jobexecutionservice java at com hazelcast jet impl jobexecutionservice cancelallexecutions jobexecutionservice java at com hazelcast jet impl jobexecutionservice shutdown jobexecutionservice java at com hazelcast jet impl jetservicebackend shutdown jetservicebackend java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdownservice servicemanagerimpl java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdown servicemanagerimpl java at com hazelcast spi impl nodeengineimpl shutdown nodeengineimpl java at com hazelcast instance impl node shutdownservices node java at com hazelcast instance impl node shutdown node java at com hazelcast instance impl lifecycleserviceimpl shutdown lifecycleserviceimpl java at com hazelcast instance impl lifecycleserviceimpl terminate lifecycleserviceimpl java at com hazelcast test mocknetwork testnoderegistry shutdown testnoderegistry java at com hazelcast test mocknetwork testnoderegistry terminate testnoderegistry java at com hazelcast test testhazelcastinstancefactory terminateall testhazelcastinstancefactory java at com hazelcast client test testhazelcastfactory terminateall testhazelcastfactory java at com hazelcast jet jettestinstancefactory terminateall jettestinstancefactory java at com hazelcast jet core jettestsupport lambda shutdownfactory jettestsupport java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java more debug forkjoinpool commonpool worker completed execution of job job execution debug forkjoinpool commonpool worker execution of job job execution completed with failure java util concurrent completionexception java util concurrent cancellationexception at java util concurrent completablefuture encodethrowable completablefuture java at java util concurrent completablefuture completethrowable completablefuture java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at com hazelcast jet impl util noncompletablefuture internalcompleteexceptionally noncompletablefuture java at com hazelcast jet impl execution taskletexecutionservice executiontracker taskletdone taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker dismisstasklet taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker runtasklet taskletexecutionservice java at java util concurrent copyonwritearraylist foreach copyonwritearraylist java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker run taskletexecutionservice java at java lang thread run thread java caused by java util concurrent cancellationexception at java util concurrent completablefuture cancel completablefuture java at com hazelcast jet impl execution executioncontext terminateexecution executioncontext java at com hazelcast jet impl jobexecutionservice jobexecutionservice java at com hazelcast jet impl jobexecutionservice cancelallexecutions jobexecutionservice java at com hazelcast jet impl jobexecutionservice shutdown jobexecutionservice java at com hazelcast jet impl jetservicebackend shutdown jetservicebackend java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdownservice servicemanagerimpl java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdown servicemanagerimpl java at com hazelcast spi impl nodeengineimpl shutdown nodeengineimpl java at com hazelcast instance impl node shutdownservices node java at com hazelcast instance impl node shutdown node java at com hazelcast instance impl lifecycleserviceimpl shutdown lifecycleserviceimpl java at com hazelcast instance impl lifecycleserviceimpl terminate lifecycleserviceimpl java at com hazelcast test mocknetwork testnoderegistry shutdown testnoderegistry java at com hazelcast test mocknetwork testnoderegistry terminate testnoderegistry java at com hazelcast test testhazelcastinstancefactory terminateall testhazelcastinstancefactory java at com hazelcast client test testhazelcastfactory terminateall testhazelcastfactory java at com hazelcast jet jettestinstancefactory terminateall jettestinstancefactory java at com hazelcast jet core jettestsupport lambda shutdownfactory jettestsupport java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java more debug forkjoinpool commonpool worker completed execution of job job execution debug forkjoinpool commonpool worker execution of job job execution completed with failure java util concurrent completionexception java util concurrent cancellationexception at java util concurrent completablefuture encodethrowable completablefuture java at java util concurrent completablefuture completethrowable completablefuture java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at com hazelcast jet impl util noncompletablefuture internalcompleteexceptionally noncompletablefuture java at com hazelcast jet impl execution taskletexecutionservice executiontracker taskletdone taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker dismisstasklet taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker runtasklet taskletexecutionservice java at java util concurrent copyonwritearraylist foreach copyonwritearraylist java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker run taskletexecutionservice java at java lang thread run thread java caused by java util concurrent cancellationexception at java util concurrent completablefuture cancel completablefuture java at com hazelcast jet impl execution executioncontext terminateexecution executioncontext java at com hazelcast jet impl jobexecutionservice jobexecutionservice java at com hazelcast jet impl jobexecutionservice cancelallexecutions jobexecutionservice java at com hazelcast jet impl jobexecutionservice shutdown jobexecutionservice java at com hazelcast jet impl jetservicebackend shutdown jetservicebackend java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdownservice servicemanagerimpl java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdown servicemanagerimpl java at com hazelcast spi impl nodeengineimpl shutdown nodeengineimpl java at com hazelcast instance impl node shutdownservices node java at com hazelcast instance impl node shutdown node java at com hazelcast instance impl lifecycleserviceimpl shutdown lifecycleserviceimpl java at com hazelcast instance impl lifecycleserviceimpl terminate lifecycleserviceimpl java at com hazelcast test mocknetwork testnoderegistry shutdown testnoderegistry java at com hazelcast test mocknetwork testnoderegistry terminate testnoderegistry java at com hazelcast test testhazelcastinstancefactory terminateall testhazelcastinstancefactory java at com hazelcast client test testhazelcastfactory terminateall testhazelcastfactory java at com hazelcast jet jettestinstancefactory terminateall jettestinstancefactory java at com hazelcast jet core jettestsupport lambda shutdownfactory jettestsupport java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java more debug forkjoinpool commonpool worker completed execution of job job execution debug forkjoinpool commonpool worker execution of job job execution completed with failure java util concurrent completionexception java util concurrent cancellationexception at java util concurrent completablefuture encodethrowable completablefuture java at java util concurrent completablefuture completethrowable completablefuture java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at com hazelcast jet impl util noncompletablefuture internalcompleteexceptionally noncompletablefuture java at com hazelcast jet impl execution taskletexecutionservice executiontracker taskletdone taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker dismisstasklet taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker runtasklet taskletexecutionservice java at java util concurrent copyonwritearraylist foreach copyonwritearraylist java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker run taskletexecutionservice java at java lang thread run thread java caused by java util concurrent cancellationexception at java util concurrent completablefuture cancel completablefuture java at com hazelcast jet impl execution executioncontext terminateexecution executioncontext java at com hazelcast jet impl jobexecutionservice jobexecutionservice java at com hazelcast jet impl jobexecutionservice cancelallexecutions jobexecutionservice java at com hazelcast jet impl jobexecutionservice shutdown jobexecutionservice java at com hazelcast jet impl jetservicebackend shutdown jetservicebackend java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdownservice servicemanagerimpl java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdown servicemanagerimpl java at com hazelcast spi impl nodeengineimpl shutdown nodeengineimpl java at com hazelcast instance impl node shutdownservices node java at com hazelcast instance impl node shutdown node java at com hazelcast instance impl lifecycleserviceimpl shutdown lifecycleserviceimpl java at com hazelcast instance impl lifecycleserviceimpl terminate lifecycleserviceimpl java at com hazelcast test mocknetwork testnoderegistry shutdown testnoderegistry java at com hazelcast test mocknetwork testnoderegistry terminate testnoderegistry java at com hazelcast test testhazelcastinstancefactory terminateall testhazelcastinstancefactory java at com hazelcast client test testhazelcastfactory terminateall testhazelcastfactory java at com hazelcast jet jettestinstancefactory terminateall jettestinstancefactory java at com hazelcast jet core jettestsupport lambda shutdownfactory jettestsupport java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java more debug forkjoinpool commonpool worker completed execution of job job execution debug forkjoinpool commonpool worker completed execution of job job execution debug forkjoinpool commonpool worker execution of job job execution completed with failure java util concurrent completionexception java util concurrent cancellationexception at java util concurrent completablefuture encodethrowable completablefuture java at java util concurrent completablefuture completethrowable completablefuture java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at com hazelcast jet impl util noncompletablefuture internalcompleteexceptionally noncompletablefuture java at com hazelcast jet impl execution taskletexecutionservice executiontracker taskletdone taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker dismisstasklet taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker runtasklet taskletexecutionservice java at java util concurrent copyonwritearraylist foreach copyonwritearraylist java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker run taskletexecutionservice java at java lang thread run thread java caused by java util concurrent cancellationexception at java util concurrent completablefuture cancel completablefuture java at com hazelcast jet impl execution executioncontext terminateexecution executioncontext java at com hazelcast jet impl jobexecutionservice jobexecutionservice java at com hazelcast jet impl jobexecutionservice cancelallexecutions jobexecutionservice java at com hazelcast jet impl jobexecutionservice shutdown jobexecutionservice java at com hazelcast jet impl jetservicebackend shutdown jetservicebackend java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdownservice servicemanagerimpl java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdown servicemanagerimpl java at com hazelcast spi impl nodeengineimpl shutdown nodeengineimpl java at com hazelcast instance impl node shutdownservices node java at com hazelcast instance impl node shutdown node java at com hazelcast instance impl lifecycleserviceimpl shutdown lifecycleserviceimpl java at com hazelcast instance impl lifecycleserviceimpl terminate lifecycleserviceimpl java at com hazelcast test mocknetwork testnoderegistry shutdown testnoderegistry java at com hazelcast test mocknetwork testnoderegistry terminate testnoderegistry java at com hazelcast test testhazelcastinstancefactory terminateall testhazelcastinstancefactory java at com hazelcast client test testhazelcastfactory terminateall testhazelcastfactory java at com hazelcast jet jettestinstancefactory terminateall jettestinstancefactory java at com hazelcast jet core jettestsupport lambda shutdownfactory jettestsupport java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java more debug forkjoinpool commonpool worker execution of job job execution completed with failure java util concurrent completionexception java util concurrent cancellationexception at java util concurrent completablefuture encodethrowable completablefuture java at java util concurrent completablefuture completethrowable completablefuture java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at com hazelcast jet impl util noncompletablefuture internalcompleteexceptionally noncompletablefuture java at com hazelcast jet impl execution taskletexecutionservice executiontracker taskletdone taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker lambda run taskletexecutionservice java at java util concurrent copyonwritearraylist foreach copyonwritearraylist java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker run taskletexecutionservice java at java lang thread run thread java caused by java util concurrent cancellationexception at java util concurrent completablefuture cancel completablefuture java at com hazelcast jet impl execution executioncontext terminateexecution executioncontext java at com hazelcast jet impl jobexecutionservice jobexecutionservice java at com hazelcast jet impl jobexecutionservice cancelallexecutions jobexecutionservice java at com hazelcast jet impl jobexecutionservice shutdown jobexecutionservice java at com hazelcast jet impl jetservicebackend shutdown jetservicebackend java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdownservice servicemanagerimpl java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdown servicemanagerimpl java at com hazelcast spi impl nodeengineimpl shutdown nodeengineimpl java at com hazelcast instance impl node shutdownservices node java at com hazelcast instance impl node shutdown node java at com hazelcast instance impl lifecycleserviceimpl shutdown lifecycleserviceimpl java at com hazelcast instance impl lifecycleserviceimpl terminate lifecycleserviceimpl java at com hazelcast test mocknetwork testnoderegistry shutdown testnoderegistry java at com hazelcast test mocknetwork testnoderegistry terminate testnoderegistry java at com hazelcast test testhazelcastinstancefactory terminateall testhazelcastinstancefactory java at com hazelcast client test testhazelcastfactory terminateall testhazelcastfactory java at com hazelcast jet jettestinstancefactory terminateall jettestinstancefactory java at com hazelcast jet core jettestsupport lambda shutdownfactory jettestsupport java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java more debug forkjoinpool commonpool worker completed execution of job job execution debug forkjoinpool commonpool worker execution of job job execution completed with failure java util concurrent completionexception java util concurrent cancellationexception at java util concurrent completablefuture encodethrowable completablefuture java at java util concurrent completablefuture completethrowable completablefuture java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at com hazelcast jet impl util noncompletablefuture internalcompleteexceptionally noncompletablefuture java at com hazelcast jet impl execution taskletexecutionservice executiontracker taskletdone taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker dismisstasklet taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker runtasklet taskletexecutionservice java at java util concurrent copyonwritearraylist foreach copyonwritearraylist java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker run taskletexecutionservice java at java lang thread run thread java caused by java util concurrent cancellationexception at java util concurrent completablefuture cancel completablefuture java at com hazelcast jet impl execution executioncontext terminateexecution executioncontext java at com hazelcast jet impl jobexecutionservice jobexecutionservice java at com hazelcast jet impl jobexecutionservice cancelallexecutions jobexecutionservice java at com hazelcast jet impl jobexecutionservice shutdown jobexecutionservice java at com hazelcast jet impl jetservicebackend shutdown jetservicebackend java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdownservice servicemanagerimpl java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdown servicemanagerimpl java at com hazelcast spi impl nodeengineimpl shutdown nodeengineimpl java at com hazelcast instance impl node shutdownservices node java at com hazelcast instance impl node shutdown node java at com hazelcast instance impl lifecycleserviceimpl shutdown lifecycleserviceimpl java at com hazelcast instance impl lifecycleserviceimpl terminate lifecycleserviceimpl java at com hazelcast test mocknetwork testnoderegistry shutdown testnoderegistry java at com hazelcast test mocknetwork testnoderegistry terminate testnoderegistry java at com hazelcast test testhazelcastinstancefactory terminateall testhazelcastinstancefactory java at com hazelcast client test testhazelcastfactory terminateall testhazelcastfactory java at com hazelcast jet jettestinstancefactory terminateall jettestinstancefactory java at com hazelcast jet core jettestsupport lambda shutdownfactory jettestsupport java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java more debug forkjoinpool commonpool worker completed execution of job job execution debug forkjoinpool commonpool worker execution of job job execution completed with failure java util concurrent completionexception java util concurrent cancellationexception at java util concurrent completablefuture encodethrowable completablefuture java at java util concurrent completablefuture completethrowable completablefuture java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at com hazelcast jet impl util noncompletablefuture internalcompleteexceptionally noncompletablefuture java at com hazelcast jet impl execution taskletexecutionservice executiontracker taskletdone taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker dismisstasklet taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker runtasklet taskletexecutionservice java at java util concurrent copyonwritearraylist foreach copyonwritearraylist java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker run taskletexecutionservice java at java lang thread run thread java caused by java util concurrent cancellationexception at java util concurrent completablefuture cancel completablefuture java at com hazelcast jet impl execution executioncontext terminateexecution executioncontext java at com hazelcast jet impl jobexecutionservice jobexecutionservice java at com hazelcast jet impl jobexecutionservice cancelallexecutions jobexecutionservice java at com hazelcast jet impl jobexecutionservice shutdown jobexecutionservice java at com hazelcast jet impl jetservicebackend shutdown jetservicebackend java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdownservice servicemanagerimpl java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdown servicemanagerimpl java at com hazelcast spi impl nodeengineimpl shutdown nodeengineimpl java at com hazelcast instance impl node shutdownservices node java at com hazelcast instance impl node shutdown node java at com hazelcast instance impl lifecycleserviceimpl shutdown lifecycleserviceimpl java at com hazelcast instance impl lifecycleserviceimpl terminate lifecycleserviceimpl java at com hazelcast test mocknetwork testnoderegistry shutdown testnoderegistry java at com hazelcast test mocknetwork testnoderegistry terminate testnoderegistry java at com hazelcast test testhazelcastinstancefactory terminateall testhazelcastinstancefactory java at com hazelcast client test testhazelcastfactory terminateall testhazelcastfactory java at com hazelcast jet jettestinstancefactory terminateall jettestinstancefactory java at com hazelcast jet core jettestsupport lambda shutdownfactory jettestsupport java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java more debug forkjoinpool commonpool worker completed execution of job job execution debug forkjoinpool commonpool worker execution of job job execution completed with failure java util concurrent completionexception java util concurrent cancellationexception at java util concurrent completablefuture encodethrowable completablefuture java at java util concurrent completablefuture completethrowable completablefuture java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture completeexceptionally completablefuture java at com hazelcast jet impl util noncompletablefuture internalcompleteexceptionally noncompletablefuture java at com hazelcast jet impl execution taskletexecutionservice executiontracker taskletdone taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker lambda run taskletexecutionservice java at java util concurrent copyonwritearraylist foreach copyonwritearraylist java at com hazelcast jet impl execution taskletexecutionservice cooperativeworker run taskletexecutionservice java at java lang thread run thread java caused by java util concurrent cancellationexception at java util concurrent completablefuture cancel completablefuture java at com hazelcast jet impl execution executioncontext terminateexecution executioncontext java at com hazelcast jet impl jobexecutionservice jobexecutionservice java at com hazelcast jet impl jobexecutionservice cancelallexecutions jobexecutionservice java at com hazelcast jet impl jobexecutionservice shutdown jobexecutionservice java at com hazelcast jet impl jetservicebackend shutdown jetservicebackend java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdownservice servicemanagerimpl java at com hazelcast spi impl servicemanager impl servicemanagerimpl shutdown servicemanagerimpl java at com hazelcast spi impl nodeengineimpl shutdown nodeengineimpl java at com hazelcast instance impl node shutdownservices node java at com hazelcast instance impl node shutdown node java at com hazelcast instance impl lifecycleserviceimpl shutdown lifecycleserviceimpl java at com hazelcast instance impl lifecycleserviceimpl terminate lifecycleserviceimpl java at com hazelcast test mocknetwork testnoderegistry shutdown testnoderegistry java at com hazelcast test mocknetwork testnoderegistry terminate testnoderegistry java at com hazelcast test testhazelcastinstancefactory terminateall testhazelcastinstancefactory java at com hazelcast client test testhazelcastfactory terminateall testhazelcastfactory java at com hazelcast jet jettestinstancefactory terminateall jettestinstancefactory java at com hazelcast jet core jettestsupport lambda shutdownfactory jettestsupport java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java more info when manyjobs then sortedbysubmissiontime thread destroying node nodeextension info when manyjobs then sortedbysubmissiontime thread hazelcast shutdown is completed in ms info when manyjobs then sortedbysubmissiontime thread is shutdown info when manyjobs then sortedbysubmissiontime thread is shutting down warn when manyjobs then sortedbysubmissiontime thread terminating forcefully info when manyjobs then sortedbysubmissiontime thread shutting down connection manager info when manyjobs then sortedbysubmissiontime thread shutting down node engine info when manyjobs then sortedbysubmissiontime thread destroying node nodeextension info when manyjobs then sortedbysubmissiontime thread hazelcast shutdown is completed in ms info when manyjobs then sortedbysubmissiontime thread is shutdown buildinfo right after when manyjobs then sortedbysubmissiontime com hazelcast jet impl jobsummarytest buildinfo version snapshot build buildnumber revision enterprise false serializationversion jet jetbuildinfo version snapshot build revision hiccups measured while running test when manyjobs then sortedbysubmissiontime com hazelcast jet impl jobsummarytest accumulated pauses ms max pause ms pauses over ms accumulated pauses ms max pause ms pauses over ms accumulated pauses ms max pause ms pauses over ms accumulated pauses ms max pause ms pauses over ms accumulated pauses ms max pause ms pauses over ms
0
219,089
16,817,081,341
IssuesEvent
2021-06-17 08:40:10
nilisha-jais/Musicophilia
https://api.github.com/repos/nilisha-jais/Musicophilia
opened
Modification of Navbar.
documentation
I want to modify the navbar , to be more precise the CSS of the navbar to make the site look much better than before, @nilisha-jais please assign this task to me.
1.0
Modification of Navbar. - I want to modify the navbar , to be more precise the CSS of the navbar to make the site look much better than before, @nilisha-jais please assign this task to me.
non_process
modification of navbar i want to modify the navbar to be more precise the css of the navbar to make the site look much better than before nilisha jais please assign this task to me
0
82,044
10,267,756,636
IssuesEvent
2019-08-23 03:07:04
OpenPHDGuiding/phd2
https://api.github.com/repos/OpenPHDGuiding/phd2
closed
documentation: need more info about camera gain setting
Type-Documentation
camera gain description should describe how phd2's 0-100 scale maps to camera native gain value
1.0
documentation: need more info about camera gain setting - camera gain description should describe how phd2's 0-100 scale maps to camera native gain value
non_process
documentation need more info about camera gain setting camera gain description should describe how s scale maps to camera native gain value
0
241,372
26,256,762,171
IssuesEvent
2023-01-06 01:55:31
dkushwah/WhiteSourceTs
https://api.github.com/repos/dkushwah/WhiteSourceTs
opened
CVE-2021-3803 (High) detected in nth-check-1.0.1.tgz
security vulnerability
## CVE-2021-3803 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>nth-check-1.0.1.tgz</b></p></summary> <p>performant nth-check parser & compiler</p> <p>Library home page: <a href="https://registry.npmjs.org/nth-check/-/nth-check-1.0.1.tgz">https://registry.npmjs.org/nth-check/-/nth-check-1.0.1.tgz</a></p> <p> Dependency Hierarchy: - html-webpack-plugin-2.29.0.tgz (Root Library) - pretty-error-2.1.1.tgz - renderkid-2.0.1.tgz - css-select-1.2.0.tgz - :x: **nth-check-1.0.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/dkushwah/WhiteSourceTs/commit/c3e484161c80b5ca0982f78e6cf89d9970ba88cd">c3e484161c80b5ca0982f78e6cf89d9970ba88cd</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> nth-check is vulnerable to Inefficient Regular Expression Complexity <p>Publish Date: 2021-09-17 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3803>CVE-2021-3803</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2021-09-17</p> <p>Fix Resolution (nth-check): 2.0.1</p> <p>Direct dependency fix Resolution (html-webpack-plugin): 2.30.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-3803 (High) detected in nth-check-1.0.1.tgz - ## CVE-2021-3803 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>nth-check-1.0.1.tgz</b></p></summary> <p>performant nth-check parser & compiler</p> <p>Library home page: <a href="https://registry.npmjs.org/nth-check/-/nth-check-1.0.1.tgz">https://registry.npmjs.org/nth-check/-/nth-check-1.0.1.tgz</a></p> <p> Dependency Hierarchy: - html-webpack-plugin-2.29.0.tgz (Root Library) - pretty-error-2.1.1.tgz - renderkid-2.0.1.tgz - css-select-1.2.0.tgz - :x: **nth-check-1.0.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/dkushwah/WhiteSourceTs/commit/c3e484161c80b5ca0982f78e6cf89d9970ba88cd">c3e484161c80b5ca0982f78e6cf89d9970ba88cd</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> nth-check is vulnerable to Inefficient Regular Expression Complexity <p>Publish Date: 2021-09-17 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3803>CVE-2021-3803</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2021-09-17</p> <p>Fix Resolution (nth-check): 2.0.1</p> <p>Direct dependency fix Resolution (html-webpack-plugin): 2.30.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in nth check tgz cve high severity vulnerability vulnerable library nth check tgz performant nth check parser compiler library home page a href dependency hierarchy html webpack plugin tgz root library pretty error tgz renderkid tgz css select tgz x nth check tgz vulnerable library found in head commit a href vulnerability details nth check is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution nth check direct dependency fix resolution html webpack plugin step up your open source security game with mend
0
15,670
19,847,318,363
IssuesEvent
2022-01-21 08:19:47
ooi-data/RS03AXPS-SF03A-2A-CTDPFA302-streamed-ctdpf_sbe43_sample
https://api.github.com/repos/ooi-data/RS03AXPS-SF03A-2A-CTDPFA302-streamed-ctdpf_sbe43_sample
opened
🛑 Processing failed: ValueError
process
## Overview `ValueError` found in `processing_task` task during run ended on 2022-01-21T08:19:46.430177. ## Details Flow name: `RS03AXPS-SF03A-2A-CTDPFA302-streamed-ctdpf_sbe43_sample` Task name: `processing_task` Error type: `ValueError` Error message: cannot reshape array of size 1209600 into shape (12500000,) <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream append_to_zarr(mod_ds, final_store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr _append_zarr(store, mod_ds) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr existing_arr.append(var_data.values) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2305, in append return self._write_op(self._append_nosync, data, axis=axis) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2211, in _write_op return self._synchronized_op(f, *args, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2201, in _synchronized_op result = f(*args, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2341, in _append_nosync self[append_selection] = data File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1224, in __setitem__ self.set_basic_selection(selection, value, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1319, in set_basic_selection return self._set_basic_selection_nd(selection, value, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1610, in _set_basic_selection_nd self._set_selection(indexer, value, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1682, in _set_selection self._chunk_setitems(lchunk_coords, lchunk_selection, chunk_values, File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in _chunk_setitems cdatas = [self._process_for_setitem(key, sel, val, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in <listcomp> cdatas = [self._process_for_setitem(key, sel, val, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1950, in _process_for_setitem chunk = self._decode_chunk(cdata) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2003, in _decode_chunk chunk = chunk.reshape(expected_shape or self._chunks, order=self._order) ValueError: cannot reshape array of size 1209600 into shape (12500000,) ``` </details>
1.0
🛑 Processing failed: ValueError - ## Overview `ValueError` found in `processing_task` task during run ended on 2022-01-21T08:19:46.430177. ## Details Flow name: `RS03AXPS-SF03A-2A-CTDPFA302-streamed-ctdpf_sbe43_sample` Task name: `processing_task` Error type: `ValueError` Error message: cannot reshape array of size 1209600 into shape (12500000,) <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream append_to_zarr(mod_ds, final_store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr _append_zarr(store, mod_ds) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr existing_arr.append(var_data.values) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2305, in append return self._write_op(self._append_nosync, data, axis=axis) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2211, in _write_op return self._synchronized_op(f, *args, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2201, in _synchronized_op result = f(*args, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2341, in _append_nosync self[append_selection] = data File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1224, in __setitem__ self.set_basic_selection(selection, value, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1319, in set_basic_selection return self._set_basic_selection_nd(selection, value, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1610, in _set_basic_selection_nd self._set_selection(indexer, value, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1682, in _set_selection self._chunk_setitems(lchunk_coords, lchunk_selection, chunk_values, File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in _chunk_setitems cdatas = [self._process_for_setitem(key, sel, val, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in <listcomp> cdatas = [self._process_for_setitem(key, sel, val, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1950, in _process_for_setitem chunk = self._decode_chunk(cdata) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2003, in _decode_chunk chunk = chunk.reshape(expected_shape or self._chunks, order=self._order) ValueError: cannot reshape array of size 1209600 into shape (12500000,) ``` </details>
process
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name streamed ctdpf sample task name processing task error type valueerror error message cannot reshape array of size into shape traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages zarr core py line in append return self write op self append nosync data axis axis file srv conda envs notebook lib site packages zarr core py line in write op return self synchronized op f args kwargs file srv conda envs notebook lib site packages zarr core py line in synchronized op result f args kwargs file srv conda envs notebook lib site packages zarr core py line in append nosync self data file srv conda envs notebook lib site packages zarr core py line in setitem self set basic selection selection value fields fields file srv conda envs notebook lib site packages zarr core py line in set basic selection return self set basic selection nd selection value fields fields file srv conda envs notebook lib site packages zarr core py line in set basic selection nd self set selection indexer value fields fields file srv conda envs notebook lib site packages zarr core py line in set selection self chunk setitems lchunk coords lchunk selection chunk values file srv conda envs notebook lib site packages zarr core py line in chunk setitems cdatas self process for setitem key sel val fields fields file srv conda envs notebook lib site packages zarr core py line in cdatas self process for setitem key sel val fields fields file srv conda envs notebook lib site packages zarr core py line in process for setitem chunk self decode chunk cdata file srv conda envs notebook lib site packages zarr core py line in decode chunk chunk chunk reshape expected shape or self chunks order self order valueerror cannot reshape array of size into shape
1
42,329
9,203,344,164
IssuesEvent
2019-03-08 02:02:22
GSA/datagov-deploy
https://api.github.com/repos/GSA/datagov-deploy
closed
Document: Detail changes
application codeigniter component/dashboard php
Task: Document how repo should be used and how it integrates with its parent repo. Repo: datagov-deploy-dashboard
1.0
Document: Detail changes - Task: Document how repo should be used and how it integrates with its parent repo. Repo: datagov-deploy-dashboard
non_process
document detail changes task document how repo should be used and how it integrates with its parent repo repo datagov deploy dashboard
0
18,608
24,579,075,351
IssuesEvent
2022-10-13 14:23:31
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[Auth] Signup email > Support email address is not updated in resend verification code flow
Bug P1 Process: Fixed Process: Tested dev Auth server
**Steps:** 1. Install mobile app 2. Signup. Support email is updated in email body 3. Click on 'Resend verification code' 4. Observe the email body **Actual:** Support email address is displayed as `{{supportEMail}}` in resend verification code flow **Expected:** Support email added in SB manage apps section should be displayed ![Screenshot_3](https://user-images.githubusercontent.com/60386291/188565973-bd5c3697-301a-4af6-9408-6a30e0ecff50.png)
2.0
[Auth] Signup email > Support email address is not updated in resend verification code flow - **Steps:** 1. Install mobile app 2. Signup. Support email is updated in email body 3. Click on 'Resend verification code' 4. Observe the email body **Actual:** Support email address is displayed as `{{supportEMail}}` in resend verification code flow **Expected:** Support email added in SB manage apps section should be displayed ![Screenshot_3](https://user-images.githubusercontent.com/60386291/188565973-bd5c3697-301a-4af6-9408-6a30e0ecff50.png)
process
signup email support email address is not updated in resend verification code flow steps install mobile app signup support email is updated in email body click on resend verification code observe the email body actual support email address is displayed as supportemail in resend verification code flow expected support email added in sb manage apps section should be displayed
1
255,126
19,293,606,281
IssuesEvent
2021-12-12 07:43:47
typedorm/typedorm
https://api.github.com/repos/typedorm/typedorm
closed
Could not resolve primary key on find.
documentation enhancement
Hello, I'm trying to do a simple find on a GSI but I have the error `"id" was referenced in ITEM#{{id}} but it's value could not be resolved.` even if it's not used in the request Here is my code ```typescript import { Attribute, AUTO_GENERATE_ATTRIBUTE_STRATEGY, AutoGenerateAttribute, Entity, INDEX_TYPE } from '@typedorm/common'; @Entity({ name: 'item', primaryKey: { partitionKey: 'ITEM#{{id}}', sortKey: 'ITEM#{{id}}', }, indexes: { GSI1: { type: INDEX_TYPE.GSI, partitionKey: 'PARENT#{{parentId}}', sortKey: 'CREATED_AT#{{createdAt}}', }, }, }) export class Item{ @AutoGenerateAttribute({ strategy: AUTO_GENERATE_ATTRIBUTE_STRATEGY.UUID4, }) id!: string; @Attribute() parentId!: string; @AutoGenerateAttribute({ strategy: AUTO_GENERATE_ATTRIBUTE_STRATEGY.EPOCH_DATE, }) createdAt: string; @AutoGenerateAttribute({ strategy: AUTO_GENERATE_ATTRIBUTE_STRATEGY.EPOCH_DATE, autoUpdate: true, }) updatedAt: string; } // Dynamodb request const items = await entityManager.find(Item, { parentId: "parent-1", queryIndex: "GSI1" }) // ERROR [ExceptionsHandler] "id" was referenced in ITEM#{{id}} but it's value could not be resolved. ```
1.0
Could not resolve primary key on find. - Hello, I'm trying to do a simple find on a GSI but I have the error `"id" was referenced in ITEM#{{id}} but it's value could not be resolved.` even if it's not used in the request Here is my code ```typescript import { Attribute, AUTO_GENERATE_ATTRIBUTE_STRATEGY, AutoGenerateAttribute, Entity, INDEX_TYPE } from '@typedorm/common'; @Entity({ name: 'item', primaryKey: { partitionKey: 'ITEM#{{id}}', sortKey: 'ITEM#{{id}}', }, indexes: { GSI1: { type: INDEX_TYPE.GSI, partitionKey: 'PARENT#{{parentId}}', sortKey: 'CREATED_AT#{{createdAt}}', }, }, }) export class Item{ @AutoGenerateAttribute({ strategy: AUTO_GENERATE_ATTRIBUTE_STRATEGY.UUID4, }) id!: string; @Attribute() parentId!: string; @AutoGenerateAttribute({ strategy: AUTO_GENERATE_ATTRIBUTE_STRATEGY.EPOCH_DATE, }) createdAt: string; @AutoGenerateAttribute({ strategy: AUTO_GENERATE_ATTRIBUTE_STRATEGY.EPOCH_DATE, autoUpdate: true, }) updatedAt: string; } // Dynamodb request const items = await entityManager.find(Item, { parentId: "parent-1", queryIndex: "GSI1" }) // ERROR [ExceptionsHandler] "id" was referenced in ITEM#{{id}} but it's value could not be resolved. ```
non_process
could not resolve primary key on find hello i m trying to do a simple find on a gsi but i have the error id was referenced in item id but it s value could not be resolved even if it s not used in the request here is my code typescript import attribute auto generate attribute strategy autogenerateattribute entity index type from typedorm common entity name item primarykey partitionkey item id sortkey item id indexes type index type gsi partitionkey parent parentid sortkey created at createdat export class item autogenerateattribute strategy auto generate attribute strategy id string attribute parentid string autogenerateattribute strategy auto generate attribute strategy epoch date createdat string autogenerateattribute strategy auto generate attribute strategy epoch date autoupdate true updatedat string dynamodb request const items await entitymanager find item parentid parent queryindex error id was referenced in item id but it s value could not be resolved
0
8,679
11,810,650,141
IssuesEvent
2020-03-19 16:49:50
MHRA/products
https://api.github.com/repos/MHRA/products
opened
Move Environment Variable Checks to Bootstrap
EPIC - Auto Batch Process :oncoming_automobile: STORY :book:
## User want As an _operator_ I want _the system to fail immediately if environment is configured wrong_ So that _I don't have nasty surprises during runtime_ ## Acceptance Criteria - [ ] All env var checks should happen before we start routing. ## Data - Potential impact **Size** **Value** **Effort** ### Exit Criteria met - [ ] Backlog - [ ] Discovery - [ ] DUXD - [ ] Development - [ ] Quality Assurance - [ ] Release and Validate
1.0
Move Environment Variable Checks to Bootstrap - ## User want As an _operator_ I want _the system to fail immediately if environment is configured wrong_ So that _I don't have nasty surprises during runtime_ ## Acceptance Criteria - [ ] All env var checks should happen before we start routing. ## Data - Potential impact **Size** **Value** **Effort** ### Exit Criteria met - [ ] Backlog - [ ] Discovery - [ ] DUXD - [ ] Development - [ ] Quality Assurance - [ ] Release and Validate
process
move environment variable checks to bootstrap user want as an operator i want the system to fail immediately if environment is configured wrong so that i don t have nasty surprises during runtime acceptance criteria all env var checks should happen before we start routing data potential impact size value effort exit criteria met backlog discovery duxd development quality assurance release and validate
1
539,130
15,783,918,953
IssuesEvent
2021-04-01 14:32:55
wso2/product-apim
https://api.github.com/repos/wso2/product-apim
closed
API Product subscriptions are not displayed under the API subscription page.
API-M 4.0.0 Feature/APIProducts Priority/High Type/Bug
### Description: <!-- Describe the issue --> ![image](https://user-images.githubusercontent.com/32265029/112598936-7cb4ca80-8dcc-11eb-9215-9e177ed18ea2.png) visible in application's subscriptions section ![image](https://user-images.githubusercontent.com/32265029/112598984-8c341380-8dcc-11eb-9887-628a81db49a3.png) ### Steps to reproduce: - Create an API Product - subscribe an application - Go to the API product in DevPortal - Click subscriptions ### Affected Product Version: API-M 4.0.0 ### Environment details (with versions): - OS: Windows 10 - DB: MSSQL 2019 #### Suggested Labels: API-M 4.0.0, Priority/High
1.0
API Product subscriptions are not displayed under the API subscription page. - ### Description: <!-- Describe the issue --> ![image](https://user-images.githubusercontent.com/32265029/112598936-7cb4ca80-8dcc-11eb-9215-9e177ed18ea2.png) visible in application's subscriptions section ![image](https://user-images.githubusercontent.com/32265029/112598984-8c341380-8dcc-11eb-9887-628a81db49a3.png) ### Steps to reproduce: - Create an API Product - subscribe an application - Go to the API product in DevPortal - Click subscriptions ### Affected Product Version: API-M 4.0.0 ### Environment details (with versions): - OS: Windows 10 - DB: MSSQL 2019 #### Suggested Labels: API-M 4.0.0, Priority/High
non_process
api product subscriptions are not displayed under the api subscription page description visible in application s subscriptions section steps to reproduce create an api product subscribe an application go to the api product in devportal click subscriptions affected product version api m environment details with versions os windows db mssql suggested labels api m priority high
0
3,221
6,279,534,869
IssuesEvent
2017-07-18 16:25:53
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
S.D.Process tests are failing on uap
area-System.Diagnostics.Process
Currently all S.D.Process tests are disabled (https://github.com/dotnet/corefx/issues/20948). I'm planning to reenable not failing tests and substitute that issue so that only relevant tests are disabled. <details> <summary>List of failing tests on uap</summary> ``` ERROR: System.Diagnostics.Tests.ProcessCollectionTests.TestThreadCollectionBehavior [FAIL] ERROR: System.Diagnostics.Tests.ProcessModuleTests.Modules_Get_ContainsHostFileName [FAIL] ERROR: System.Diagnostics.Tests.ProcessStandardConsoleTests.TestChangesInConsoleEncoding [FAIL] ERROR: System.Diagnostics.Tests.ProcessStartInfoTests.TestCreateNoWindowProperty(value: True) [FAIL] ERROR: System.Diagnostics.Tests.ProcessStartInfoTests.TestCreateNoWindowProperty(value: False) [FAIL] ERROR: System.Diagnostics.Tests.ProcessStartInfoTests.Verbs_GetWithExeExtension_ReturnsExpected [FAIL] ERROR: System.Diagnostics.Tests.ProcessStartInfoTests.TestWorkingDirectoryProperty [FAIL] ERROR: System.Diagnostics.Tests.ProcessStartInfoTests.TestEnvironmentOfChildProcess [FAIL] ERROR: System.Diagnostics.Tests.ProcessStreamReadTests.TestAsyncErrorStream [FAIL] ERROR: System.Diagnostics.Tests.ProcessStreamReadTests.TestSyncStreams [FAIL] ERROR: System.Diagnostics.Tests.ProcessStreamReadTests.TestAsyncOutputStream [FAIL] ERROR: System.Diagnostics.Tests.ProcessStreamReadTests.TestSyncOutputStream [FAIL] ERROR: System.Diagnostics.Tests.ProcessStreamReadTests.TestStreamNegativeTests [FAIL] ERROR: System.Diagnostics.Tests.ProcessStreamReadTests.TestManyOutputLines [FAIL] ERROR: System.Diagnostics.Tests.ProcessStreamReadTests.TestAsyncHalfCharacterAtATime [FAIL] ERROR: System.Diagnostics.Tests.ProcessStreamReadTests.TestEOFReceivedWhenStdInClosed [FAIL] ERROR: System.Diagnostics.Tests.ProcessStreamReadTests.TestSyncErrorStream [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPeakWorkingSet [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestVirtualMemorySize [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.GetProcessesByName_EmptyMachineName_ThrowsArgumentException [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestSessionId [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.StartInfo_GetFileName_ReturnsExpected [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPeakVirtualMemorySize [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestProcessName [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestMainModuleOnNonOSX [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.GetProcessesByName_ProcessName_ReturnsExpected [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestId [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.MainWindowHandle_NoWindow_ReturnsEmptyHandle [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.Process_StartTest [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestVirtualMemorySize64 [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPagedMemorySize [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPriorityClassWindows [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.HandleCountChanges [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestNonpagedSystemMemorySize [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestExitTime [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestWorkingSet64 [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPagedMemorySize64 [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPrivateMemorySize64 [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPeakPagedMemorySize [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestEnableRaiseEvents(enable: True) [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestEnableRaiseEvents(enable: False) [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestEnableRaiseEvents(enable: null) [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.GetProcessesByName_NullMachineName_ThrowsArgumentNullException [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.CloseMainWindow_NotStarted_ThrowsInvalidOperationException [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.CloseMainWindow_NoWindow_ReturnsFalse [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.Process_StartWithArgumentsTest [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestGetProcesses [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPrivateMemorySize [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestProcessStartTime [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestHasExited [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestMaxWorkingSet [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.GetProcessesByName_NoSuchProcess_ReturnsEmpty [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPagedSystemMemorySize [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestMachineName [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPagedSystemMemorySize64 [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestGetProcessById [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.MainWindowTitle_NoWindow_ReturnsEmpty [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPeakPagedMemorySize64 [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.StartInfo_SetOnRunningProcess_ThrowsInvalidOperationException [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPeakWorkingSet64 [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.GetProcessesByName_ProcessNameMachineName_ReturnsExpected(machineName: ".") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.GetProcessesByName_ProcessNameMachineName_ReturnsExpected(machineName: ".") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.GetProcessesByName_ProcessNameMachineName_ReturnsExpected(machineName: "krwq-win10") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPeakVirtualMemorySize64 [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestProcessorTime [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestExitCode [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.Process_StartWithInvalidUserNamePassword [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\"abc\" d e", expectedArgv: "abc,d,e") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "b d \"\"a\"\" ", expectedArgv: "b,d,a") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\"\"a\"\" b d", expectedArgv: "a,b,d") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\"\"c \"\"b\"\" d\"\\", expectedArgv: "c,b,d\\") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "c\"\"\"\" b \"\"\\", expectedArgv: "c\",b,\\") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\"\"\"\" b c", expectedArgv: "\",b,c") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\"\" b \"\"", expectedArgv: ",b,") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\"\\a\\\" \\\\\"\\\\\\ b c", expectedArgv: "\\a\" \\\\\\\\,b,c") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "a b c\"def", expectedArgv: "a,b,cdef") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "a\"b c\"d e\"f g\"h i\"j k\"l", expectedArgv: "ab cd,ef gh,ij kl") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "a\\\\\\\\\"b c\" d e", expectedArgv: "a\\\\b c,d,e") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "a\\\\\\\"b c d", expectedArgv: "a\\\"b,c,d") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\\ \\\\ \\\\\\", expectedArgv: "\\,\\\\,\\\\\\") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "a\\\\b d\"e f\"g h", expectedArgv: "a\\\\b,de fg,h") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\"abc\"\t\td\te", expectedArgv: "abc,d,e") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\"abc\" d e", expectedArgv: "abc,d,e") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\\\"\\\"a\\\"\\\" b d", expectedArgv: "\"\"a\"\",b,d") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "b d \\\"\\\"a\\\"\\\"", expectedArgv: "b,d,\"\"a\"\"") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestSafeHandle [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestBasePriorityOnWindows [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestWorkingSet [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPriorityBoostEnabled [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestHandleCount [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestMinWorkingSet [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestNonpagedSystemMemorySize64 [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestProcessorAffinity [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestProcessOnRemoteMachineWindows [FAIL] ERROR: System.Diagnostics.Tests.ProcessThreadTests.TestCommonPriorityAndTimeProperties [FAIL] ERROR: System.Diagnostics.Tests.ProcessThreadTests.TestStartTimeProperty [FAIL] ERROR: System.Diagnostics.Tests.ProcessThreadTests.TestThreadCount [FAIL] ERROR: System.Diagnostics.Tests.ProcessThreadTests.Threads_GetMultipleTimes_ReturnsSameInstance [FAIL] ERROR: System.Diagnostics.Tests.ProcessThreadTests.TestStartAddressProperty [FAIL] ERROR: System.Diagnostics.Tests.ProcessThreadTests.TestPriorityLevelProperty [FAIL] ERROR: System.Diagnostics.Tests.ProcessThreadTests.TestThreadStateProperty [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_EnableRaisingEvents_CorrectExitCode(exitCode: 0) [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_EnableRaisingEvents_CorrectExitCode(exitCode: 1) [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_EnableRaisingEvents_CorrectExitCode(exitCode: 127) [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.WaitForPeerProcess [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_CopiesShareExitInformation [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.MultipleProcesses_StartAllKillAllWaitAll [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.MultipleProcesses_ParallelStartKillWait [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.WaitChain [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_TryWaitMultipleTimesBeforeCompleting [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.MultipleProcesses_SerialStartKillWait [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited(addHandlerBeforeStart: False) [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited(addHandlerBeforeStart: True) [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.WaitForSelfTerminatingChild [FAIL] ``` </details> They are related to RemoteInvoke not giving back process handle
1.0
S.D.Process tests are failing on uap - Currently all S.D.Process tests are disabled (https://github.com/dotnet/corefx/issues/20948). I'm planning to reenable not failing tests and substitute that issue so that only relevant tests are disabled. <details> <summary>List of failing tests on uap</summary> ``` ERROR: System.Diagnostics.Tests.ProcessCollectionTests.TestThreadCollectionBehavior [FAIL] ERROR: System.Diagnostics.Tests.ProcessModuleTests.Modules_Get_ContainsHostFileName [FAIL] ERROR: System.Diagnostics.Tests.ProcessStandardConsoleTests.TestChangesInConsoleEncoding [FAIL] ERROR: System.Diagnostics.Tests.ProcessStartInfoTests.TestCreateNoWindowProperty(value: True) [FAIL] ERROR: System.Diagnostics.Tests.ProcessStartInfoTests.TestCreateNoWindowProperty(value: False) [FAIL] ERROR: System.Diagnostics.Tests.ProcessStartInfoTests.Verbs_GetWithExeExtension_ReturnsExpected [FAIL] ERROR: System.Diagnostics.Tests.ProcessStartInfoTests.TestWorkingDirectoryProperty [FAIL] ERROR: System.Diagnostics.Tests.ProcessStartInfoTests.TestEnvironmentOfChildProcess [FAIL] ERROR: System.Diagnostics.Tests.ProcessStreamReadTests.TestAsyncErrorStream [FAIL] ERROR: System.Diagnostics.Tests.ProcessStreamReadTests.TestSyncStreams [FAIL] ERROR: System.Diagnostics.Tests.ProcessStreamReadTests.TestAsyncOutputStream [FAIL] ERROR: System.Diagnostics.Tests.ProcessStreamReadTests.TestSyncOutputStream [FAIL] ERROR: System.Diagnostics.Tests.ProcessStreamReadTests.TestStreamNegativeTests [FAIL] ERROR: System.Diagnostics.Tests.ProcessStreamReadTests.TestManyOutputLines [FAIL] ERROR: System.Diagnostics.Tests.ProcessStreamReadTests.TestAsyncHalfCharacterAtATime [FAIL] ERROR: System.Diagnostics.Tests.ProcessStreamReadTests.TestEOFReceivedWhenStdInClosed [FAIL] ERROR: System.Diagnostics.Tests.ProcessStreamReadTests.TestSyncErrorStream [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPeakWorkingSet [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestVirtualMemorySize [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.GetProcessesByName_EmptyMachineName_ThrowsArgumentException [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestSessionId [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.StartInfo_GetFileName_ReturnsExpected [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPeakVirtualMemorySize [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestProcessName [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestMainModuleOnNonOSX [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.GetProcessesByName_ProcessName_ReturnsExpected [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestId [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.MainWindowHandle_NoWindow_ReturnsEmptyHandle [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.Process_StartTest [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestVirtualMemorySize64 [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPagedMemorySize [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPriorityClassWindows [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.HandleCountChanges [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestNonpagedSystemMemorySize [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestExitTime [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestWorkingSet64 [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPagedMemorySize64 [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPrivateMemorySize64 [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPeakPagedMemorySize [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestEnableRaiseEvents(enable: True) [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestEnableRaiseEvents(enable: False) [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestEnableRaiseEvents(enable: null) [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.GetProcessesByName_NullMachineName_ThrowsArgumentNullException [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.CloseMainWindow_NotStarted_ThrowsInvalidOperationException [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.CloseMainWindow_NoWindow_ReturnsFalse [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.Process_StartWithArgumentsTest [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestGetProcesses [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPrivateMemorySize [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestProcessStartTime [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestHasExited [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestMaxWorkingSet [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.GetProcessesByName_NoSuchProcess_ReturnsEmpty [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPagedSystemMemorySize [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestMachineName [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPagedSystemMemorySize64 [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestGetProcessById [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.MainWindowTitle_NoWindow_ReturnsEmpty [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPeakPagedMemorySize64 [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.StartInfo_SetOnRunningProcess_ThrowsInvalidOperationException [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPeakWorkingSet64 [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.GetProcessesByName_ProcessNameMachineName_ReturnsExpected(machineName: ".") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.GetProcessesByName_ProcessNameMachineName_ReturnsExpected(machineName: ".") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.GetProcessesByName_ProcessNameMachineName_ReturnsExpected(machineName: "krwq-win10") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPeakVirtualMemorySize64 [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestProcessorTime [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestExitCode [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.Process_StartWithInvalidUserNamePassword [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\"abc\" d e", expectedArgv: "abc,d,e") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "b d \"\"a\"\" ", expectedArgv: "b,d,a") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\"\"a\"\" b d", expectedArgv: "a,b,d") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\"\"c \"\"b\"\" d\"\\", expectedArgv: "c,b,d\\") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "c\"\"\"\" b \"\"\\", expectedArgv: "c\",b,\\") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\"\"\"\" b c", expectedArgv: "\",b,c") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\"\" b \"\"", expectedArgv: ",b,") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\"\\a\\\" \\\\\"\\\\\\ b c", expectedArgv: "\\a\" \\\\\\\\,b,c") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "a b c\"def", expectedArgv: "a,b,cdef") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "a\"b c\"d e\"f g\"h i\"j k\"l", expectedArgv: "ab cd,ef gh,ij kl") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "a\\\\\\\\\"b c\" d e", expectedArgv: "a\\\\b c,d,e") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "a\\\\\\\"b c d", expectedArgv: "a\\\"b,c,d") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\\ \\\\ \\\\\\", expectedArgv: "\\,\\\\,\\\\\\") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "a\\\\b d\"e f\"g h", expectedArgv: "a\\\\b,de fg,h") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\"abc\"\t\td\te", expectedArgv: "abc,d,e") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\"abc\" d e", expectedArgv: "abc,d,e") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "\\\"\\\"a\\\"\\\" b d", expectedArgv: "\"\"a\"\",b,d") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestArgumentParsing(inputArguments: "b d \\\"\\\"a\\\"\\\"", expectedArgv: "b,d,\"\"a\"\"") [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestSafeHandle [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestBasePriorityOnWindows [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestWorkingSet [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestPriorityBoostEnabled [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestHandleCount [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestMinWorkingSet [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestNonpagedSystemMemorySize64 [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestProcessorAffinity [FAIL] ERROR: System.Diagnostics.Tests.ProcessTests.TestProcessOnRemoteMachineWindows [FAIL] ERROR: System.Diagnostics.Tests.ProcessThreadTests.TestCommonPriorityAndTimeProperties [FAIL] ERROR: System.Diagnostics.Tests.ProcessThreadTests.TestStartTimeProperty [FAIL] ERROR: System.Diagnostics.Tests.ProcessThreadTests.TestThreadCount [FAIL] ERROR: System.Diagnostics.Tests.ProcessThreadTests.Threads_GetMultipleTimes_ReturnsSameInstance [FAIL] ERROR: System.Diagnostics.Tests.ProcessThreadTests.TestStartAddressProperty [FAIL] ERROR: System.Diagnostics.Tests.ProcessThreadTests.TestPriorityLevelProperty [FAIL] ERROR: System.Diagnostics.Tests.ProcessThreadTests.TestThreadStateProperty [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_EnableRaisingEvents_CorrectExitCode(exitCode: 0) [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_EnableRaisingEvents_CorrectExitCode(exitCode: 1) [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_EnableRaisingEvents_CorrectExitCode(exitCode: 127) [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.WaitForPeerProcess [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_CopiesShareExitInformation [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.MultipleProcesses_StartAllKillAllWaitAll [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.MultipleProcesses_ParallelStartKillWait [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.WaitChain [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_TryWaitMultipleTimesBeforeCompleting [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.MultipleProcesses_SerialStartKillWait [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited(addHandlerBeforeStart: False) [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited(addHandlerBeforeStart: True) [FAIL] ERROR: System.Diagnostics.Tests.ProcessWaitingTests.WaitForSelfTerminatingChild [FAIL] ``` </details> They are related to RemoteInvoke not giving back process handle
process
s d process tests are failing on uap currently all s d process tests are disabled i m planning to reenable not failing tests and substitute that issue so that only relevant tests are disabled list of failing tests on uap error system diagnostics tests processcollectiontests testthreadcollectionbehavior error system diagnostics tests processmoduletests modules get containshostfilename error system diagnostics tests processstandardconsoletests testchangesinconsoleencoding error system diagnostics tests processstartinfotests testcreatenowindowproperty value true error system diagnostics tests processstartinfotests testcreatenowindowproperty value false error system diagnostics tests processstartinfotests verbs getwithexeextension returnsexpected error system diagnostics tests processstartinfotests testworkingdirectoryproperty error system diagnostics tests processstartinfotests testenvironmentofchildprocess error system diagnostics tests processstreamreadtests testasyncerrorstream error system diagnostics tests processstreamreadtests testsyncstreams error system diagnostics tests processstreamreadtests testasyncoutputstream error system diagnostics tests processstreamreadtests testsyncoutputstream error system diagnostics tests processstreamreadtests teststreamnegativetests error system diagnostics tests processstreamreadtests testmanyoutputlines error system diagnostics tests processstreamreadtests testasynchalfcharacteratatime error system diagnostics tests processstreamreadtests testeofreceivedwhenstdinclosed error system diagnostics tests processstreamreadtests testsyncerrorstream error system diagnostics tests processtests testpeakworkingset error system diagnostics tests processtests testvirtualmemorysize error system diagnostics tests processtests getprocessesbyname emptymachinename throwsargumentexception error system diagnostics tests processtests testsessionid error system diagnostics tests processtests startinfo getfilename returnsexpected error system diagnostics tests processtests testpeakvirtualmemorysize error system diagnostics tests processtests testprocessname error system diagnostics tests processtests testmainmoduleonnonosx error system diagnostics tests processtests getprocessesbyname processname returnsexpected error system diagnostics tests processtests testid error system diagnostics tests processtests mainwindowhandle nowindow returnsemptyhandle error system diagnostics tests processtests process starttest error system diagnostics tests processtests error system diagnostics tests processtests testpagedmemorysize error system diagnostics tests processtests testpriorityclasswindows error system diagnostics tests processtests handlecountchanges error system diagnostics tests processtests testnonpagedsystemmemorysize error system diagnostics tests processtests testexittime error system diagnostics tests processtests error system diagnostics tests processtests error system diagnostics tests processtests error system diagnostics tests processtests testpeakpagedmemorysize error system diagnostics tests processtests testenableraiseevents enable true error system diagnostics tests processtests testenableraiseevents enable false error system diagnostics tests processtests testenableraiseevents enable null error system diagnostics tests processtests getprocessesbyname nullmachinename throwsargumentnullexception error system diagnostics tests processtests closemainwindow notstarted throwsinvalidoperationexception error system diagnostics tests processtests closemainwindow nowindow returnsfalse error system diagnostics tests processtests process startwithargumentstest error system diagnostics tests processtests testgetprocesses error system diagnostics tests processtests testprivatememorysize error system diagnostics tests processtests testprocessstarttime error system diagnostics tests processtests testhasexited error system diagnostics tests processtests testmaxworkingset error system diagnostics tests processtests getprocessesbyname nosuchprocess returnsempty error system diagnostics tests processtests testpagedsystemmemorysize error system diagnostics tests processtests testmachinename error system diagnostics tests processtests error system diagnostics tests processtests testgetprocessbyid error system diagnostics tests processtests mainwindowtitle nowindow returnsempty error system diagnostics tests processtests error system diagnostics tests processtests startinfo setonrunningprocess throwsinvalidoperationexception error system diagnostics tests processtests error system diagnostics tests processtests getprocessesbyname processnamemachinename returnsexpected machinename error system diagnostics tests processtests getprocessesbyname processnamemachinename returnsexpected machinename error system diagnostics tests processtests getprocessesbyname processnamemachinename returnsexpected machinename krwq error system diagnostics tests processtests error system diagnostics tests processtests testprocessortime error system diagnostics tests processtests testexitcode error system diagnostics tests processtests process startwithinvalidusernamepassword error system diagnostics tests processtests testargumentparsing inputarguments abc d e expectedargv abc d e error system diagnostics tests processtests testargumentparsing inputarguments b d a expectedargv b d a error system diagnostics tests processtests testargumentparsing inputarguments a b d expectedargv a b d error system diagnostics tests processtests testargumentparsing inputarguments c b d expectedargv c b d error system diagnostics tests processtests testargumentparsing inputarguments c b expectedargv c b error system diagnostics tests processtests testargumentparsing inputarguments b c expectedargv b c error system diagnostics tests processtests testargumentparsing inputarguments b expectedargv b error system diagnostics tests processtests testargumentparsing inputarguments a b c expectedargv a b c error system diagnostics tests processtests testargumentparsing inputarguments a b c def expectedargv a b cdef error system diagnostics tests processtests testargumentparsing inputarguments a b c d e f g h i j k l expectedargv ab cd ef gh ij kl error system diagnostics tests processtests testargumentparsing inputarguments a b c d e expectedargv a b c d e error system diagnostics tests processtests testargumentparsing inputarguments a b c d expectedargv a b c d error system diagnostics tests processtests testargumentparsing inputarguments expectedargv error system diagnostics tests processtests testargumentparsing inputarguments a b d e f g h expectedargv a b de fg h error system diagnostics tests processtests testargumentparsing inputarguments abc t td te expectedargv abc d e error system diagnostics tests processtests testargumentparsing inputarguments abc d e expectedargv abc d e error system diagnostics tests processtests testargumentparsing inputarguments a b d expectedargv a b d error system diagnostics tests processtests testargumentparsing inputarguments b d a expectedargv b d a error system diagnostics tests processtests testsafehandle error system diagnostics tests processtests testbasepriorityonwindows error system diagnostics tests processtests testworkingset error system diagnostics tests processtests testpriorityboostenabled error system diagnostics tests processtests testhandlecount error system diagnostics tests processtests testminworkingset error system diagnostics tests processtests error system diagnostics tests processtests testprocessoraffinity error system diagnostics tests processtests testprocessonremotemachinewindows error system diagnostics tests processthreadtests testcommonpriorityandtimeproperties error system diagnostics tests processthreadtests teststarttimeproperty error system diagnostics tests processthreadtests testthreadcount error system diagnostics tests processthreadtests threads getmultipletimes returnssameinstance error system diagnostics tests processthreadtests teststartaddressproperty error system diagnostics tests processthreadtests testprioritylevelproperty error system diagnostics tests processthreadtests testthreadstateproperty error system diagnostics tests processwaitingtests singleprocess enableraisingevents correctexitcode exitcode error system diagnostics tests processwaitingtests singleprocess enableraisingevents correctexitcode exitcode error system diagnostics tests processwaitingtests singleprocess enableraisingevents correctexitcode exitcode error system diagnostics tests processwaitingtests waitforpeerprocess error system diagnostics tests processwaitingtests singleprocess copiesshareexitinformation error system diagnostics tests processwaitingtests multipleprocesses startallkillallwaitall error system diagnostics tests processwaitingtests multipleprocesses parallelstartkillwait error system diagnostics tests processwaitingtests waitchain error system diagnostics tests processwaitingtests singleprocess trywaitmultipletimesbeforecompleting error system diagnostics tests processwaitingtests multipleprocesses serialstartkillwait error system diagnostics tests processwaitingtests singleprocess waitafterexited addhandlerbeforestart false error system diagnostics tests processwaitingtests singleprocess waitafterexited addhandlerbeforestart true error system diagnostics tests processwaitingtests waitforselfterminatingchild they are related to remoteinvoke not giving back process handle
1
75,047
25,499,093,425
IssuesEvent
2022-11-28 01:05:51
dkfans/keeperfx
https://api.github.com/repos/dkfans/keeperfx
opened
Game crashes after exiting possession after defeat
Type-Defect Priority-High
To reproduce the crash: 1) Have heroes destroy your heart 2) Click possession -> You possess the floating spirit 3) Right click to exit possession -> crash ``` === Crash ===Error: Attempt of integer division by zero. in keeperfx.exe at 0023:00d6bc23, base 00b50000 ```
1.0
Game crashes after exiting possession after defeat - To reproduce the crash: 1) Have heroes destroy your heart 2) Click possession -> You possess the floating spirit 3) Right click to exit possession -> crash ``` === Crash ===Error: Attempt of integer division by zero. in keeperfx.exe at 0023:00d6bc23, base 00b50000 ```
non_process
game crashes after exiting possession after defeat to reproduce the crash have heroes destroy your heart click possession you possess the floating spirit right click to exit possession crash crash error attempt of integer division by zero in keeperfx exe at base
0
13,426
15,881,000,087
IssuesEvent
2021-04-09 14:19:53
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Python3 runbook code does not work
Pri2 automation/svc cxp doc-enhancement process-automation/subsvc triaged
- Some imports were missing - `azure_credential`, `runas_connection` were not declared --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 2fcc4942-ed7f-891b-8fb2-f0aed8237d38 * Version Independent ID: c061735f-c5ce-d6fb-b6c0-fe02ed4eae3d * Content: [Create a Python 3 runbook (preview) in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/learn/automation-tutorial-runbook-textual-python-3) * Content Source: [articles/automation/learn/automation-tutorial-runbook-textual-python-3.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/learn/automation-tutorial-runbook-textual-python-3.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**
1.0
Python3 runbook code does not work - - Some imports were missing - `azure_credential`, `runas_connection` were not declared --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 2fcc4942-ed7f-891b-8fb2-f0aed8237d38 * Version Independent ID: c061735f-c5ce-d6fb-b6c0-fe02ed4eae3d * Content: [Create a Python 3 runbook (preview) in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/learn/automation-tutorial-runbook-textual-python-3) * Content Source: [articles/automation/learn/automation-tutorial-runbook-textual-python-3.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/automation/learn/automation-tutorial-runbook-textual-python-3.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**
process
runbook code does not work some imports were missing azure credential runas connection were not declared document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
1
12,922
15,294,998,873
IssuesEvent
2021-02-24 03:46:21
MineCake147E/Shamisen
https://api.github.com/repos/MineCake147E/Shamisen
closed
.NET 5
CPU: AMD x64 🖥️ CPU: Fujitsu A64FX 📱🖥️🖥️ CPU: Intel x64 🖥️ CPU: Other ARMv8 📱 Feature: Signal Processing 🎛️ Kind: High Latency 🐌 Priority: High 🚅 Status: Working ▶️
In order to improve performance of Shamisen, we have to move Shamisen from .NET Standard 2.0 to .NET 5. - [x] .NET 5 Release # Major changes - [x] Change target framework of Core library `Shamisen` from `netstandard2.0` to `net5.0;netcoreapp3.1;netstandard2.1;netstandard2.0` => a7e2741cef021f12e837fec2fffcfc9ac4ee98a8 - [ ] Adopt new APIs **almost EVERYWHERE** - [ ] x86/64 - [ ] SSE - [ ] SSE2 - [ ] SSE3 - [ ] SSE4.x - [ ] AVX - [ ] AVX2 - [ ] Bmi1 - [ ] Bmi2 - [ ] Fma - [ ] Lzcnt - [ ] Popcnt - [ ] ARM - [ ] AdvSimd - [ ] ArmBase - [ ] Crc32 - [ ] Dp - [ ] Rdm - [ ] Cross Platform - [ ] System.Numerics.BitOperations - [ ] System.MathF - [ ] System.Half - [ ] System.HashCode - [ ] System.Math.Tau - [ ] System.Math.FusedMultiplyAdd - [ ] System.MathF.FusedMultiplyAdd
1.0
.NET 5 - In order to improve performance of Shamisen, we have to move Shamisen from .NET Standard 2.0 to .NET 5. - [x] .NET 5 Release # Major changes - [x] Change target framework of Core library `Shamisen` from `netstandard2.0` to `net5.0;netcoreapp3.1;netstandard2.1;netstandard2.0` => a7e2741cef021f12e837fec2fffcfc9ac4ee98a8 - [ ] Adopt new APIs **almost EVERYWHERE** - [ ] x86/64 - [ ] SSE - [ ] SSE2 - [ ] SSE3 - [ ] SSE4.x - [ ] AVX - [ ] AVX2 - [ ] Bmi1 - [ ] Bmi2 - [ ] Fma - [ ] Lzcnt - [ ] Popcnt - [ ] ARM - [ ] AdvSimd - [ ] ArmBase - [ ] Crc32 - [ ] Dp - [ ] Rdm - [ ] Cross Platform - [ ] System.Numerics.BitOperations - [ ] System.MathF - [ ] System.Half - [ ] System.HashCode - [ ] System.Math.Tau - [ ] System.Math.FusedMultiplyAdd - [ ] System.MathF.FusedMultiplyAdd
process
net in order to improve performance of shamisen we have to move shamisen from net standard to net net release major changes change target framework of core library shamisen from to adopt new apis almost everywhere sse x avx fma lzcnt popcnt arm advsimd armbase dp rdm cross platform system numerics bitoperations system mathf system half system hashcode system math tau system math fusedmultiplyadd system mathf fusedmultiplyadd
1
522,653
15,164,653,660
IssuesEvent
2021-02-12 14:02:27
threefoldtech/0-db
https://api.github.com/repos/threefoldtech/0-db
opened
hook: not found hook argument cause multiple fork
priority_major
When a `--hook` is specified and doesn't start correctly, error is not catched and fork remain alive. This should never happen.
1.0
hook: not found hook argument cause multiple fork - When a `--hook` is specified and doesn't start correctly, error is not catched and fork remain alive. This should never happen.
non_process
hook not found hook argument cause multiple fork when a hook is specified and doesn t start correctly error is not catched and fork remain alive this should never happen
0
13,045
15,387,277,919
IssuesEvent
2021-03-03 09:20:11
prisma/prisma
https://api.github.com/repos/prisma/prisma
closed
Prisma fails to migrate in dev after git reset
kind/improvement process/candidate team/migrations topic: migrate dev
Hi Prisma Team! Prisma Migrate just crashed. ## Command `migrate --preview-feature dev` ## Versions | Name | Version | |-------------|--------------------| | Platform | darwin | | Node | v12.17.0 | | Prisma CLI | 2.16.1 | | Binary | 8b74ad57aaf2cc6c155f382a18a8e3ba95aceb03| ## Error ``` Error: Error in migration engine. Reason: [/root/build/migration-engine/core/src/commands/diagnose_migration_history.rs:101:26] Failed to read migration script: ReadMigrationScriptError(Os { code: 2, kind: NotFound, message: "No such file or directory" }, SpanTrace [{ target: "migration_connector::migrations_directory", name: "read_migration_script", fields: "self=MigrationDirectory { path: \"/Users/eloi/development/dsi-ing/server/prisma/migrations/20210225183705_options_units_on_quote_items_with_defaults\" }", file: "migration-engine/connectors/migration-connector/src/migrations_directory.rs", line: 233 }, { target: "migration_connector::migrations_directory", name: "matches_checksum", fields: "self=MigrationDirectory { path: \"/Users/eloi/development/dsi-ing/server/prisma/migrations/20210225183705_options_units_on_quote_items_with_defaults\" } checksum_str=\"a6398bef174ab2202a9758e831ffc14f5c7d8bee9dc2fa917053642be19f4d9f\"", file: "migration-engine/connectors/migration-connector/src/migrations_directory.rs", line: 207 }, { target: "migration_core::api", name: "DevDiagnostic", file: "migration-engine/core/src/api.rs", line: 100 }]) Please create an issue with your `schema.prisma` at https://github.com/prisma/prisma/issues/new ``` Hey there, I was just prototyping some feature on my local machine and decided I wanted to drop my work. Now it looks like prisma migrate fails to lookup the prototype migration I had worked on and crashes. I don't care if prisma does not find it because it was just a prototyping exercise.
1.0
Prisma fails to migrate in dev after git reset - Hi Prisma Team! Prisma Migrate just crashed. ## Command `migrate --preview-feature dev` ## Versions | Name | Version | |-------------|--------------------| | Platform | darwin | | Node | v12.17.0 | | Prisma CLI | 2.16.1 | | Binary | 8b74ad57aaf2cc6c155f382a18a8e3ba95aceb03| ## Error ``` Error: Error in migration engine. Reason: [/root/build/migration-engine/core/src/commands/diagnose_migration_history.rs:101:26] Failed to read migration script: ReadMigrationScriptError(Os { code: 2, kind: NotFound, message: "No such file or directory" }, SpanTrace [{ target: "migration_connector::migrations_directory", name: "read_migration_script", fields: "self=MigrationDirectory { path: \"/Users/eloi/development/dsi-ing/server/prisma/migrations/20210225183705_options_units_on_quote_items_with_defaults\" }", file: "migration-engine/connectors/migration-connector/src/migrations_directory.rs", line: 233 }, { target: "migration_connector::migrations_directory", name: "matches_checksum", fields: "self=MigrationDirectory { path: \"/Users/eloi/development/dsi-ing/server/prisma/migrations/20210225183705_options_units_on_quote_items_with_defaults\" } checksum_str=\"a6398bef174ab2202a9758e831ffc14f5c7d8bee9dc2fa917053642be19f4d9f\"", file: "migration-engine/connectors/migration-connector/src/migrations_directory.rs", line: 207 }, { target: "migration_core::api", name: "DevDiagnostic", file: "migration-engine/core/src/api.rs", line: 100 }]) Please create an issue with your `schema.prisma` at https://github.com/prisma/prisma/issues/new ``` Hey there, I was just prototyping some feature on my local machine and decided I wanted to drop my work. Now it looks like prisma migrate fails to lookup the prototype migration I had worked on and crashes. I don't care if prisma does not find it because it was just a prototyping exercise.
process
prisma fails to migrate in dev after git reset hi prisma team prisma migrate just crashed command migrate preview feature dev versions name version platform darwin node prisma cli binary error error error in migration engine reason failed to read migration script readmigrationscripterror os code kind notfound message no such file or directory spantrace please create an issue with your schema prisma at hey there i was just prototyping some feature on my local machine and decided i wanted to drop my work now it looks like prisma migrate fails to lookup the prototype migration i had worked on and crashes i don t care if prisma does not find it because it was just a prototyping exercise
1
20,424
27,086,971,562
IssuesEvent
2023-02-14 17:42:46
openxla/stablehlo
https://api.github.com/repos/openxla/stablehlo
closed
Create an openxla.org@ account to host shareable documents
Process
### Request description One immediate candidate to share would be the raw editable slide deck for creating specification images.
1.0
Create an openxla.org@ account to host shareable documents - ### Request description One immediate candidate to share would be the raw editable slide deck for creating specification images.
process
create an openxla org account to host shareable documents request description one immediate candidate to share would be the raw editable slide deck for creating specification images
1
149,981
23,583,485,863
IssuesEvent
2022-08-23 09:35:39
Regalis11/Barotrauma
https://api.github.com/repos/Regalis11/Barotrauma
closed
[XML] Depleted fuel revolver rounds cause no severance
Design Unstable
### Disclaimers - [X] I have searched the issue tracker to check if the issue has already been reported. - [ ] My issue happened while using mods. ### What happened? While all other ammunition, even DF ammunition causes some severance chance, DF RR don't cause any. ``` <Attack structuredamage="10" targetforce="10" itemdamage="15" penetration="0.25"> <Affliction identifier="bleeding" strength="10" /> <Affliction identifier="gunshotwound" strength="35" /> <Affliction identifier="stun" strength="0.4" /> ``` ### Version 0.18.12.0
1.0
[XML] Depleted fuel revolver rounds cause no severance - ### Disclaimers - [X] I have searched the issue tracker to check if the issue has already been reported. - [ ] My issue happened while using mods. ### What happened? While all other ammunition, even DF ammunition causes some severance chance, DF RR don't cause any. ``` <Attack structuredamage="10" targetforce="10" itemdamage="15" penetration="0.25"> <Affliction identifier="bleeding" strength="10" /> <Affliction identifier="gunshotwound" strength="35" /> <Affliction identifier="stun" strength="0.4" /> ``` ### Version 0.18.12.0
non_process
depleted fuel revolver rounds cause no severance disclaimers i have searched the issue tracker to check if the issue has already been reported my issue happened while using mods what happened while all other ammunition even df ammunition causes some severance chance df rr don t cause any version
0
18,237
24,305,121,252
IssuesEvent
2022-09-29 16:43:07
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Processing "split with lines" creates (wrong) sliver polygons
Processing Bug
### What is the bug or the crash? When using the tool "Split with lines" to split polygons with a line feature, more than the expected number of polygon features are created in the resulting layer . A simple project ist provided by the following .gpkg: [split_polygons_lines.zip](https://github.com/qgis/QGIS/files/9299919/split_polygons_lines.zip) ### Steps to reproduce the issue 1.: Use a topological correct polygon layer with at least two features. ![starting_point](https://user-images.githubusercontent.com/86417466/183898070-701cde60-dc29-4f77-b902-cb9175c1073b.jpg) 2.: Produce a line feature that covers the created polygons. 3.: Utilise the "Split with lines" tool from vector overlay with the input from 1 and 2. 4.: Look in the the attribute table of the resulting layer. After splitting there **can** be more the the expected number of features. ![result](https://user-images.githubusercontent.com/86417466/183898871-127ee800-388a-49cb-b7b2-54ae88640e2d.jpg) (in this example, fid 4 should not be there and is not shown in the map canvas, although it has geometry properties like an area) ### Versions <!--StartFragment--><!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd"> <html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /><style type="text/css"> p, li { white-space: pre-wrap; } </style></head><body> QGIS version | 3.16.16-Hannover | QGIS code revision | f5778a89 -- | -- | -- | -- Compiled against Qt | 5.15.2 | Running against Qt | 5.15.2 Compiled against GDAL/OGR | 3.4.1 | Running against GDAL/OGR | 3.4.1 Compiled against GEOS | 3.10.0-CAPI-1.16.0 | Running against GEOS | 3.10.0-CAPI-1.16.0 Compiled against SQLite | 3.35.2 | Running against SQLite | 3.35.2 PostgreSQL Client Version | 13.0 | SpatiaLite Version | 5.0.1 QWT Version | 6.1.3 | QScintilla2 Version | 2.11.5 Compiled against PROJ | 8.2.1 | Running against PROJ | Rel. 8.2.1, January 1st, 2022 OS Version | Windows 10 Version 2009 Active python plugins | db_manager; MetaSearch; processing </body></html><!--EndFragment-->QGIS version 3.16.16-Hannover QGIS code revision [f5778a89](https://github.com/qgis/QGIS/commit/f5778a89) Compiled against Qt 5.15.2 Running against Qt 5.15.2 Compiled against GDAL/OGR 3.4.1 Running against GDAL/OGR 3.4.1 Compiled against GEOS 3.10.0-CAPI-1.16.0 Running against GEOS 3.10.0-CAPI-1.16.0 Compiled against SQLite 3.35.2 Running against SQLite 3.35.2 PostgreSQL Client Version 13.0 SpatiaLite Version 5.0.1 QWT Version 6.1.3 QScintilla2 Version 2.11.5 Compiled against PROJ 8.2.1 Running against PROJ Rel. 8.2.1, January 1st, 2022 OS Version Windows 10 Version 2009 Active python plugins db_manager; MetaSearch; processing ### Supported QGIS version - [X] I'm running a supported QGIS version according to the roadmap. ### New profile - [X] I tried with a new QGIS profile ### Additional context I reported this issue with 3.16.16, but the geopackage with the test data was created with 3.26.1 and the problem still exists with that current version.
1.0
Processing "split with lines" creates (wrong) sliver polygons - ### What is the bug or the crash? When using the tool "Split with lines" to split polygons with a line feature, more than the expected number of polygon features are created in the resulting layer . A simple project ist provided by the following .gpkg: [split_polygons_lines.zip](https://github.com/qgis/QGIS/files/9299919/split_polygons_lines.zip) ### Steps to reproduce the issue 1.: Use a topological correct polygon layer with at least two features. ![starting_point](https://user-images.githubusercontent.com/86417466/183898070-701cde60-dc29-4f77-b902-cb9175c1073b.jpg) 2.: Produce a line feature that covers the created polygons. 3.: Utilise the "Split with lines" tool from vector overlay with the input from 1 and 2. 4.: Look in the the attribute table of the resulting layer. After splitting there **can** be more the the expected number of features. ![result](https://user-images.githubusercontent.com/86417466/183898871-127ee800-388a-49cb-b7b2-54ae88640e2d.jpg) (in this example, fid 4 should not be there and is not shown in the map canvas, although it has geometry properties like an area) ### Versions <!--StartFragment--><!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd"> <html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /><style type="text/css"> p, li { white-space: pre-wrap; } </style></head><body> QGIS version | 3.16.16-Hannover | QGIS code revision | f5778a89 -- | -- | -- | -- Compiled against Qt | 5.15.2 | Running against Qt | 5.15.2 Compiled against GDAL/OGR | 3.4.1 | Running against GDAL/OGR | 3.4.1 Compiled against GEOS | 3.10.0-CAPI-1.16.0 | Running against GEOS | 3.10.0-CAPI-1.16.0 Compiled against SQLite | 3.35.2 | Running against SQLite | 3.35.2 PostgreSQL Client Version | 13.0 | SpatiaLite Version | 5.0.1 QWT Version | 6.1.3 | QScintilla2 Version | 2.11.5 Compiled against PROJ | 8.2.1 | Running against PROJ | Rel. 8.2.1, January 1st, 2022 OS Version | Windows 10 Version 2009 Active python plugins | db_manager; MetaSearch; processing </body></html><!--EndFragment-->QGIS version 3.16.16-Hannover QGIS code revision [f5778a89](https://github.com/qgis/QGIS/commit/f5778a89) Compiled against Qt 5.15.2 Running against Qt 5.15.2 Compiled against GDAL/OGR 3.4.1 Running against GDAL/OGR 3.4.1 Compiled against GEOS 3.10.0-CAPI-1.16.0 Running against GEOS 3.10.0-CAPI-1.16.0 Compiled against SQLite 3.35.2 Running against SQLite 3.35.2 PostgreSQL Client Version 13.0 SpatiaLite Version 5.0.1 QWT Version 6.1.3 QScintilla2 Version 2.11.5 Compiled against PROJ 8.2.1 Running against PROJ Rel. 8.2.1, January 1st, 2022 OS Version Windows 10 Version 2009 Active python plugins db_manager; MetaSearch; processing ### Supported QGIS version - [X] I'm running a supported QGIS version according to the roadmap. ### New profile - [X] I tried with a new QGIS profile ### Additional context I reported this issue with 3.16.16, but the geopackage with the test data was created with 3.26.1 and the problem still exists with that current version.
process
processing split with lines creates wrong sliver polygons what is the bug or the crash when using the tool split with lines to split polygons with a line feature more than the expected number of polygon features are created in the resulting layer a simple project ist provided by the following gpkg steps to reproduce the issue use a topological correct polygon layer with at least two features produce a line feature that covers the created polygons utilise the split with lines tool from vector overlay with the input from and look in the the attribute table of the resulting layer after splitting there can be more the the expected number of features in this example fid should not be there and is not shown in the map canvas although it has geometry properties like an area versions doctype html public dtd html en p li white space pre wrap qgis version hannover qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel january os version windows version active python plugins db manager metasearch processing qgis version hannover qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel january os version windows version active python plugins db manager metasearch processing supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context i reported this issue with but the geopackage with the test data was created with and the problem still exists with that current version
1
19,013
25,013,842,259
IssuesEvent
2022-11-03 17:08:23
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
reopened
incompatible_config_setting_private_default_visibility
P2 type: process team-Configurability incompatible-change migration-ready breaking-change-6.0
Visibility on `config_setting` isn't historically enforced. This is purely for legacy reasons. There's no philosophical reason to distinguish them. This flag, in conjunction with `--incompatible_enforce_config_setting_visibility` (https://github.com/bazelbuild/bazel/issues/12932), removes that distinction. Values: * `--incompatible_config_setting_private_default_visibility=off`: if `--incompatible_enforce_config_setting_visibility=off`, every config_setting is visible to every target, regardless of visibility settings. Else, every `config_setting` without an explicit `visibility` setting is `//visibility:public` (ignoring package visibility defaults) * `--incompatible_config_setting_private_default_visibility=on`: if `--incompatible_enforce_config_setting_visibility=off`, every config_setting is visible to every target, regardless of visibility settings. Else, `config_setting` follows the same visibility rules as all other targets. **Incompatibility error:** `ERROR: myapp/BUILD:4:1: in config_setting rule //myapp:my_config: target 'myapp:my_config' is not visible from target '//some:other_target. Check the visibility declaration of the former target if you think the dependency is legitimate` **Migration:** Treat all `config_setting`s as if they follow standard visibility logic at https://docs.bazel.build/versions/master/visibility.html: have them set visibility explicitly if they'll be used anywhere outside their own package. The ultimate goal of this migration is to fully enforce that expectation.
1.0
incompatible_config_setting_private_default_visibility - Visibility on `config_setting` isn't historically enforced. This is purely for legacy reasons. There's no philosophical reason to distinguish them. This flag, in conjunction with `--incompatible_enforce_config_setting_visibility` (https://github.com/bazelbuild/bazel/issues/12932), removes that distinction. Values: * `--incompatible_config_setting_private_default_visibility=off`: if `--incompatible_enforce_config_setting_visibility=off`, every config_setting is visible to every target, regardless of visibility settings. Else, every `config_setting` without an explicit `visibility` setting is `//visibility:public` (ignoring package visibility defaults) * `--incompatible_config_setting_private_default_visibility=on`: if `--incompatible_enforce_config_setting_visibility=off`, every config_setting is visible to every target, regardless of visibility settings. Else, `config_setting` follows the same visibility rules as all other targets. **Incompatibility error:** `ERROR: myapp/BUILD:4:1: in config_setting rule //myapp:my_config: target 'myapp:my_config' is not visible from target '//some:other_target. Check the visibility declaration of the former target if you think the dependency is legitimate` **Migration:** Treat all `config_setting`s as if they follow standard visibility logic at https://docs.bazel.build/versions/master/visibility.html: have them set visibility explicitly if they'll be used anywhere outside their own package. The ultimate goal of this migration is to fully enforce that expectation.
process
incompatible config setting private default visibility visibility on config setting isn t historically enforced this is purely for legacy reasons there s no philosophical reason to distinguish them this flag in conjunction with incompatible enforce config setting visibility removes that distinction values incompatible config setting private default visibility off if incompatible enforce config setting visibility off every config setting is visible to every target regardless of visibility settings else every config setting without an explicit visibility setting is visibility public ignoring package visibility defaults incompatible config setting private default visibility on if incompatible enforce config setting visibility off every config setting is visible to every target regardless of visibility settings else config setting follows the same visibility rules as all other targets incompatibility error error myapp build in config setting rule myapp my config target myapp my config is not visible from target some other target check the visibility declaration of the former target if you think the dependency is legitimate migration treat all config setting s as if they follow standard visibility logic at have them set visibility explicitly if they ll be used anywhere outside their own package the ultimate goal of this migration is to fully enforce that expectation
1
17,205
22,783,852,485
IssuesEvent
2022-07-09 00:59:38
km4ack/pi-build
https://api.github.com/repos/km4ack/pi-build
closed
CQRLOG not installed in beta
in process
New Bullseye SD card. 32 bit using a not pi username Update system install BAP beta select install all. CQRLOG not installed run BAP update select install CQRLOG run BAP update again CQRLOG in now installed.
1.0
CQRLOG not installed in beta - New Bullseye SD card. 32 bit using a not pi username Update system install BAP beta select install all. CQRLOG not installed run BAP update select install CQRLOG run BAP update again CQRLOG in now installed.
process
cqrlog not installed in beta new bullseye sd card bit using a not pi username update system install bap beta select install all cqrlog not installed run bap update select install cqrlog run bap update again cqrlog in now installed
1
11,162
13,957,693,995
IssuesEvent
2020-10-24 08:11:08
alexanderkotsev/geoportal
https://api.github.com/repos/alexanderkotsev/geoportal
opened
CZ: Missing resources in Geoportal
CZ - Czech Republic Geoportal Harvesting process
Collected from the Geoportal Workshop online survey answers: Data for themes EL and OI are provided by WCS. WCS GetCapabilities responce contains INSPIRE extended capabilities with (besides other things) spatialDataSetIdentifier and MetadataUrl. There is an issue for OI: &ldquo;it did not respond within the specified timeout of 10000 ms&rdquo;. But there is no such message for EL and an aspect NETWORK_SERVICE_MATCHING_SERVICE_IS_AVAILABLE is still missing. metadata of the EL data: http://inspire-geoportal.ec.europa.eu/proxybrowser/#fq=memberStateCountryCode%3Acz&amp;fq=resourceTitle% 3Ael&amp;fq=uriCode%3ACZ-00025712-CUZK_EL&amp;q=*%3A*; metadata of the WCS for EL: http://inspire-geoportal.ec.europa.eu/proxybrowser/#fq=memberStateCountryCode%3Acz&amp;fq=resourceTitle% 3Ael&amp;fq=remoteMetadataIdentifier%3ACZ-CUZK-WCS-EL&amp;q=*%3A*
1.0
CZ: Missing resources in Geoportal - Collected from the Geoportal Workshop online survey answers: Data for themes EL and OI are provided by WCS. WCS GetCapabilities responce contains INSPIRE extended capabilities with (besides other things) spatialDataSetIdentifier and MetadataUrl. There is an issue for OI: &ldquo;it did not respond within the specified timeout of 10000 ms&rdquo;. But there is no such message for EL and an aspect NETWORK_SERVICE_MATCHING_SERVICE_IS_AVAILABLE is still missing. metadata of the EL data: http://inspire-geoportal.ec.europa.eu/proxybrowser/#fq=memberStateCountryCode%3Acz&amp;fq=resourceTitle% 3Ael&amp;fq=uriCode%3ACZ-00025712-CUZK_EL&amp;q=*%3A*; metadata of the WCS for EL: http://inspire-geoportal.ec.europa.eu/proxybrowser/#fq=memberStateCountryCode%3Acz&amp;fq=resourceTitle% 3Ael&amp;fq=remoteMetadataIdentifier%3ACZ-CUZK-WCS-EL&amp;q=*%3A*
process
cz missing resources in geoportal collected from the geoportal workshop online survey answers data for themes el and oi are provided by wcs wcs getcapabilities responce contains inspire extended capabilities with besides other things spatialdatasetidentifier and metadataurl there is an issue for oi ldquo it did not respond within the specified timeout of ms rdquo but there is no such message for el and an aspect network service matching service is available is still missing metadata of the el data amp fq uricode cuzk el amp q metadata of the wcs for el amp fq remotemetadataidentifier cuzk wcs el amp q
1
17,053
22,467,491,482
IssuesEvent
2022-06-22 04:08:00
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
opened
Process: low priority bug management
Process
The purpose of this issue is to continue discussion begun at the TSC about how to handle low priority bugs. We have been unable to hit our low priority bug targets for a few releases now, and have agreed to do the following things as a project at the TSC level: 1. Ask maintainers for help triaging bugs into "will fix this release" vs. "known issue with no plans for a fix" categories (this applies to all bugs, not just low priority bugs) 2. Establish new criteria for low priority bugs in a release; likely just counting those in the "will fix in this release" category (this does **not** apply to medium and high priority bugs; the total amount of those open will still affect release readiness as it does today, regardless of the category from 1.) 3. Establish timelines and processes for when we do this triaging 4. Establish timelines and processes for when we revisit bug counts from the the "known issue" category, so things don't get out of hand The details of how this will work were left to the process WG to sort out. This issue will track the resulting discussion and document the resulting recommendations for the TSC.
1.0
Process: low priority bug management - The purpose of this issue is to continue discussion begun at the TSC about how to handle low priority bugs. We have been unable to hit our low priority bug targets for a few releases now, and have agreed to do the following things as a project at the TSC level: 1. Ask maintainers for help triaging bugs into "will fix this release" vs. "known issue with no plans for a fix" categories (this applies to all bugs, not just low priority bugs) 2. Establish new criteria for low priority bugs in a release; likely just counting those in the "will fix in this release" category (this does **not** apply to medium and high priority bugs; the total amount of those open will still affect release readiness as it does today, regardless of the category from 1.) 3. Establish timelines and processes for when we do this triaging 4. Establish timelines and processes for when we revisit bug counts from the the "known issue" category, so things don't get out of hand The details of how this will work were left to the process WG to sort out. This issue will track the resulting discussion and document the resulting recommendations for the TSC.
process
process low priority bug management the purpose of this issue is to continue discussion begun at the tsc about how to handle low priority bugs we have been unable to hit our low priority bug targets for a few releases now and have agreed to do the following things as a project at the tsc level ask maintainers for help triaging bugs into will fix this release vs known issue with no plans for a fix categories this applies to all bugs not just low priority bugs establish new criteria for low priority bugs in a release likely just counting those in the will fix in this release category this does not apply to medium and high priority bugs the total amount of those open will still affect release readiness as it does today regardless of the category from establish timelines and processes for when we do this triaging establish timelines and processes for when we revisit bug counts from the the known issue category so things don t get out of hand the details of how this will work were left to the process wg to sort out this issue will track the resulting discussion and document the resulting recommendations for the tsc
1
2,165
5,011,575,981
IssuesEvent
2016-12-13 08:26:20
VeliovGroup/Meteor-Files
https://api.github.com/repos/VeliovGroup/Meteor-Files
closed
Create and Manage Subversions: file.someHowConvertVideoAndReturnFileData(format) and version info
Post Processing question
Could you explain what should happen during `file.someHowConvertVideoAndReturnFileData(format)` ? Do you use graphicsmagic or imagemagic in that example? (seems like that imagemagic to me, please correct me if I'm wrong). How do you get `version` info from that function (like path, size, type)? I'm trying to understand what you've did on video subversion to create image thumbnails (and upload it to S3 later on). Thanks a lot!
1.0
Create and Manage Subversions: file.someHowConvertVideoAndReturnFileData(format) and version info - Could you explain what should happen during `file.someHowConvertVideoAndReturnFileData(format)` ? Do you use graphicsmagic or imagemagic in that example? (seems like that imagemagic to me, please correct me if I'm wrong). How do you get `version` info from that function (like path, size, type)? I'm trying to understand what you've did on video subversion to create image thumbnails (and upload it to S3 later on). Thanks a lot!
process
create and manage subversions file somehowconvertvideoandreturnfiledata format and version info could you explain what should happen during file somehowconvertvideoandreturnfiledata format do you use graphicsmagic or imagemagic in that example seems like that imagemagic to me please correct me if i m wrong how do you get version info from that function like path size type i m trying to understand what you ve did on video subversion to create image thumbnails and upload it to later on thanks a lot
1
14,137
17,029,942,646
IssuesEvent
2021-07-04 10:57:38
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
`prisma:engine stdout Unknown error`
bug/0-needs-info kind/bug process/candidate team/client team/migrations tech/typescript topic: logging
Recently the `prisma:engine` logging is full with `Unknown error` message: ``` ... prisma:engine stdout Fetched a connection from the pool +10ms prisma:engine stdout Unknown error +23ms prisma:query BEGIN prisma:engine stdout Unknown error +61ms prisma:query SELECT "public"."BuyerGroup"."id" FROM "public"."BuyerGroup" WHERE 1=1 OFFSET $1 prisma:engine stdout Unknown error +63ms prisma:query SELECT "public"."BuyerGroup"."id" FROM "public"."BuyerGroup" WHERE 1=1 prisma:engine stdout Unknown error +22ms prisma:query COMMIT ... ``` Logging can be achieved with `DEBUG=*` and ``` const prisma = new PrismaClient({ log: ['query', 'info', `warn`, `error`], }) ```
1.0
`prisma:engine stdout Unknown error` - Recently the `prisma:engine` logging is full with `Unknown error` message: ``` ... prisma:engine stdout Fetched a connection from the pool +10ms prisma:engine stdout Unknown error +23ms prisma:query BEGIN prisma:engine stdout Unknown error +61ms prisma:query SELECT "public"."BuyerGroup"."id" FROM "public"."BuyerGroup" WHERE 1=1 OFFSET $1 prisma:engine stdout Unknown error +63ms prisma:query SELECT "public"."BuyerGroup"."id" FROM "public"."BuyerGroup" WHERE 1=1 prisma:engine stdout Unknown error +22ms prisma:query COMMIT ... ``` Logging can be achieved with `DEBUG=*` and ``` const prisma = new PrismaClient({ log: ['query', 'info', `warn`, `error`], }) ```
process
prisma engine stdout unknown error recently the prisma engine logging is full with unknown error message prisma engine stdout fetched a connection from the pool prisma engine stdout unknown error prisma query begin prisma engine stdout unknown error prisma query select public buyergroup id from public buyergroup where offset prisma engine stdout unknown error prisma query select public buyergroup id from public buyergroup where prisma engine stdout unknown error prisma query commit logging can be achieved with debug and const prisma new prismaclient log
1
12,500
14,961,496,154
IssuesEvent
2021-01-27 07:51:57
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
Scroll bar moved to default position when the user clicks on load more icon
Bug P2 Participant manager Process: Fixed Process: Tested dev
AR : Scroll bar moved to default position when the user clicks on load more icon ER : Scroll bar should not be moved to the default position [Note : Issue is observed only in firefox browser] https://user-images.githubusercontent.com/71445210/104184500-8cfa1200-5439-11eb-9577-50f1c82aa2eb.mp4
2.0
Scroll bar moved to default position when the user clicks on load more icon - AR : Scroll bar moved to default position when the user clicks on load more icon ER : Scroll bar should not be moved to the default position [Note : Issue is observed only in firefox browser] https://user-images.githubusercontent.com/71445210/104184500-8cfa1200-5439-11eb-9577-50f1c82aa2eb.mp4
process
scroll bar moved to default position when the user clicks on load more icon ar scroll bar moved to default position when the user clicks on load more icon er scroll bar should not be moved to the default position
1
77,921
22,042,092,158
IssuesEvent
2022-05-29 14:07:14
rocm-arch/rocm-arch
https://api.github.com/repos/rocm-arch/rocm-arch
closed
[rocblas] build error: use of undeclared identifier 'noinline'
build error
When building rocblas-5.1.1-2 (specifically Tensile), I get `error: use of undeclared identifier 'noinline'`. The build was started in a chroot with `paru --chroot -S rocm-hip-sdk rocm-opencl-sdk`. This issue is possibly related to the rocblas error mentioned in #777, although I don't know if it's the same error as @t1nux. [rocblas build output](https://github.com/rocm-arch/rocm-arch/files/8691727/rocblas_build.txt)
1.0
[rocblas] build error: use of undeclared identifier 'noinline' - When building rocblas-5.1.1-2 (specifically Tensile), I get `error: use of undeclared identifier 'noinline'`. The build was started in a chroot with `paru --chroot -S rocm-hip-sdk rocm-opencl-sdk`. This issue is possibly related to the rocblas error mentioned in #777, although I don't know if it's the same error as @t1nux. [rocblas build output](https://github.com/rocm-arch/rocm-arch/files/8691727/rocblas_build.txt)
non_process
build error use of undeclared identifier noinline when building rocblas specifically tensile i get error use of undeclared identifier noinline the build was started in a chroot with paru chroot s rocm hip sdk rocm opencl sdk this issue is possibly related to the rocblas error mentioned in although i don t know if it s the same error as
0
365,642
10,790,175,415
IssuesEvent
2019-11-05 14:00:11
ncssar/sign-in
https://api.github.com/repos/ncssar/sign-in
opened
top bar buttons disappear after typing text in event name field
Priority: High bug
the buttons go away as soon as first character is typed in event name or event location fields. The only way to get to the keypad after that is to restart the app and go to keypad without entering an event name!
1.0
top bar buttons disappear after typing text in event name field - the buttons go away as soon as first character is typed in event name or event location fields. The only way to get to the keypad after that is to restart the app and go to keypad without entering an event name!
non_process
top bar buttons disappear after typing text in event name field the buttons go away as soon as first character is typed in event name or event location fields the only way to get to the keypad after that is to restart the app and go to keypad without entering an event name
0
17,377
3,002,699,245
IssuesEvent
2015-07-24 18:45:28
kayuri/HNC
https://api.github.com/repos/kayuri/HNC
opened
Nested local functions in local variables aren't supported
Defect
``` foo x = { bar = { quux x = sum x 42 mul (quux 2) (quux 3) } mul bar bar } ```
1.0
Nested local functions in local variables aren't supported - ``` foo x = { bar = { quux x = sum x 42 mul (quux 2) (quux 3) } mul bar bar } ```
non_process
nested local functions in local variables aren t supported foo x bar quux x sum x mul quux quux mul bar bar
0
20,607
27,272,174,164
IssuesEvent
2023-02-22 23:39:07
hashgraph/hedera-mirror-node
https://api.github.com/repos/hashgraph/hedera-mirror-node
closed
Release checklist 0.74
enhancement process
### Problem We need a checklist to verify the release is rolled out successfully. ### Solution ## Preparation - [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc) - [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.74.0) - [x] GitHub checks for branch are passing - [x] Automated Kubernetes deployment successful - [x] Tag release - [x] Upload release artifacts - [x] Manual Submission for GCP Marketplace verification by google - [x] Publish marketplace release - [x] Publish release ## Performance - [x] Deploy to Kubernetes - [x] Deploy to VM - [x] gRPC API performance tests - [x] Importer performance tests - [x] REST API performance tests ## Previewnet - [x] Deploy to Kubernetes ## Staging - [x] Deploy to Kubernetes ## Testnet - [x] Deploy to VM ## Mainnet - [x] Deploy to Kubernetes EU - [x] Deploy to Kubernetes NA - [x] Deploy to VM - [x] Deploy to ETL ### Alternatives _No response_
1.0
Release checklist 0.74 - ### Problem We need a checklist to verify the release is rolled out successfully. ### Solution ## Preparation - [x] Milestone field populated on relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc) - [x] Nothing open for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.74.0) - [x] GitHub checks for branch are passing - [x] Automated Kubernetes deployment successful - [x] Tag release - [x] Upload release artifacts - [x] Manual Submission for GCP Marketplace verification by google - [x] Publish marketplace release - [x] Publish release ## Performance - [x] Deploy to Kubernetes - [x] Deploy to VM - [x] gRPC API performance tests - [x] Importer performance tests - [x] REST API performance tests ## Previewnet - [x] Deploy to Kubernetes ## Staging - [x] Deploy to Kubernetes ## Testnet - [x] Deploy to VM ## Mainnet - [x] Deploy to Kubernetes EU - [x] Deploy to Kubernetes NA - [x] Deploy to VM - [x] Deploy to ETL ### Alternatives _No response_
process
release checklist problem we need a checklist to verify the release is rolled out successfully solution preparation milestone field populated on relevant nothing open for github checks for branch are passing automated kubernetes deployment successful tag release upload release artifacts manual submission for gcp marketplace verification by google publish marketplace release publish release performance deploy to kubernetes deploy to vm grpc api performance tests importer performance tests rest api performance tests previewnet deploy to kubernetes staging deploy to kubernetes testnet deploy to vm mainnet deploy to kubernetes eu deploy to kubernetes na deploy to vm deploy to etl alternatives no response
1
14,432
17,481,184,438
IssuesEvent
2021-08-09 02:49:09
brucemiller/LaTeXML
https://api.github.com/repos/brucemiller/LaTeXML
closed
Use of <code> tag for listings
question postprocessing
Would it be possible to use the `<code>` and `<pre>`/`<code>` tags when generating HTML5? It seems this would allow for a cleaner description of appearance using CSS.
1.0
Use of <code> tag for listings - Would it be possible to use the `<code>` and `<pre>`/`<code>` tags when generating HTML5? It seems this would allow for a cleaner description of appearance using CSS.
process
use of tag for listings would it be possible to use the and tags when generating it seems this would allow for a cleaner description of appearance using css
1
16,336
20,992,235,402
IssuesEvent
2022-03-29 10:19:56
DevExpress/testcafe-hammerhead
https://api.github.com/repos/DevExpress/testcafe-hammerhead
closed
Test Fails due to JavaScript error (works with --skip-js-errors)
TYPE: bug AREA: client STATE: Need response FREQUENCY: level 2 SYSTEM: client side processing
<!-- If you have all reproduction steps with a complete sample app, please share as many details as possible in the sections below. Make sure that you tried using the latest TestCafe version (https://github.com/DevExpress/testcafe/releases), where this behavior might have been already addressed. Before submitting an issue, please check CONTRIBUTING.md and existing issues in this repository (https://github.com/DevExpress/testcafe/issues) in case a similar issue exists or was already addressed. This may save your time (and ours). --> ### What is your Test Scenario? Just simple hover action, beginner with TestCafe ### What is the Current behavior? Test Fails (however in my opinion it shall be reported as Error instead of Fail ... unless this case really relates a launched page and is in fact a test failure ### What is the Expected behavior? Work as when --skip-js-errors is launched? Unless I am too rookie to know what am I talking about :-) ### What is your web application and your TestCafe test code? Your website URL (or attach your complete example): <details> <summary>Test Code:</summary> <!-- Paste your test code here: --> ``` import { Selector } from 'testcafe'; fixture`Getting Started` .page`https://poczta.interia.pl/`; const cos = Selector('.standard-interia-logo'); test.skip('My first test', async t => { await t .setTestSpeed(0.1) .typeText('input.gLFyf.gsfi', 'John Smith') .pressKey('enter') .expect(Selector('html').textContent).contains('John Smith') }); test('My second test', async t => { await t .hover(cos) .wait(50000) }); ``` </details> <details> <summary>Your complete test report:</summary> <!-- Paste your complete result test report here (even if it is huge): --> ``` Getting Started - My first test × My second test 1) A JavaScript error occurred on "https://poczta.interia.pl/". Repeat test actions in the browser and check the console for errors. If you see this error, it means that the tested website caused it. You can fix it or disable tracking JavaScript errors in TestCafe. To do the latter, enable the "--skip-js-errors" option. If this error does not occur, please write a new issue at: "https://github.com/DevExpress/testcafe/issues/new?template=bug-report.md". JavaScript error details: h.indexOf is not a function: value@https://iwa2.iplsc.com/main.iwa.js:2:94620 value/<@https://iwa2.iplsc.com/main.iwa.js:2:91166 value@https://iwa2.iplsc.com/main.iwa.js:2:91123 e@https://iwa2.iplsc.com/main.iwa.js:2:89833 e@https://iwa2.iplsc.com/main.iwa.js:2:86436 e@https://iwa2.iplsc.com/main.iwa.js:2:24501 e@https://iwa2.iplsc.com/main.iwa.js:2:81423 e@https://iwa2.iplsc.com/main.iwa.js:2:78199 @https://iwa2.iplsc.com/main.iwa.js:2:59596 i@https://iwa2.iplsc.com/main.iwa.js:2:398 @https://iwa2.iplsc.com/main.iwa.js:2:52061 i@https://iwa2.iplsc.com/main.iwa.js:2:398 @https://iwa2.iplsc.com/main.iwa.js:2:2164 @https://iwa2.iplsc.com/main.iwa.js:2:2174 Browser: Firefox 68.0.0 / Windows 10.0.0 13 | .expect(Selector('html').textContent).contains('John Smith') 14 |}); 15 | 16 |test('My second test', async t => { 17 | await t > 18 | .hover(cos) 19 | .wait(50000) 20 |}); at hover (C:\testcafe\test.js:18:10) at test (C:\testcafe\test.js:16:1) at <anonymous> (C:\testcafe\node_modules\testcafe\src\api\wrap-test-function.js:17:28) at TestRun._executeTestFn (C:\testcafe\node_modules\testcafe\src\test-run\index.js:288:19) at TestRun.start (C:\testcafe\node_modules\testcafe\src\test-run\index.js:337:24) 1/1 failed (6s) 1 skipped ``` </details> ### Steps to Reproduce: <!-- Describe what we should do to reproduce the behavior you encountered. --> 1. Open page 3. Hover mouse on Interia logo ### Your Environment details: * testcafe version: 1.4.2 <!-- run `testcafe -v` --> * node.js version: v10.16.0 <!-- run `node -v` --> * command-line arguments: testcafe chrome test.js ** the same with firefox <!-- example: "testcafe ie,chrome -e test.js" --> * browser name and version: Chrome 76.0.3809 / Windows 10.0.0<!-- example: IE 11, Chrome 69, Firefox 100, etc. --> * platform and version: Windows 10 <!-- example: "macOS 10.14, Windows, Linux Ubuntu 18.04.1, iOS 12 -->
1.0
Test Fails due to JavaScript error (works with --skip-js-errors) - <!-- If you have all reproduction steps with a complete sample app, please share as many details as possible in the sections below. Make sure that you tried using the latest TestCafe version (https://github.com/DevExpress/testcafe/releases), where this behavior might have been already addressed. Before submitting an issue, please check CONTRIBUTING.md and existing issues in this repository (https://github.com/DevExpress/testcafe/issues) in case a similar issue exists or was already addressed. This may save your time (and ours). --> ### What is your Test Scenario? Just simple hover action, beginner with TestCafe ### What is the Current behavior? Test Fails (however in my opinion it shall be reported as Error instead of Fail ... unless this case really relates a launched page and is in fact a test failure ### What is the Expected behavior? Work as when --skip-js-errors is launched? Unless I am too rookie to know what am I talking about :-) ### What is your web application and your TestCafe test code? Your website URL (or attach your complete example): <details> <summary>Test Code:</summary> <!-- Paste your test code here: --> ``` import { Selector } from 'testcafe'; fixture`Getting Started` .page`https://poczta.interia.pl/`; const cos = Selector('.standard-interia-logo'); test.skip('My first test', async t => { await t .setTestSpeed(0.1) .typeText('input.gLFyf.gsfi', 'John Smith') .pressKey('enter') .expect(Selector('html').textContent).contains('John Smith') }); test('My second test', async t => { await t .hover(cos) .wait(50000) }); ``` </details> <details> <summary>Your complete test report:</summary> <!-- Paste your complete result test report here (even if it is huge): --> ``` Getting Started - My first test × My second test 1) A JavaScript error occurred on "https://poczta.interia.pl/". Repeat test actions in the browser and check the console for errors. If you see this error, it means that the tested website caused it. You can fix it or disable tracking JavaScript errors in TestCafe. To do the latter, enable the "--skip-js-errors" option. If this error does not occur, please write a new issue at: "https://github.com/DevExpress/testcafe/issues/new?template=bug-report.md". JavaScript error details: h.indexOf is not a function: value@https://iwa2.iplsc.com/main.iwa.js:2:94620 value/<@https://iwa2.iplsc.com/main.iwa.js:2:91166 value@https://iwa2.iplsc.com/main.iwa.js:2:91123 e@https://iwa2.iplsc.com/main.iwa.js:2:89833 e@https://iwa2.iplsc.com/main.iwa.js:2:86436 e@https://iwa2.iplsc.com/main.iwa.js:2:24501 e@https://iwa2.iplsc.com/main.iwa.js:2:81423 e@https://iwa2.iplsc.com/main.iwa.js:2:78199 @https://iwa2.iplsc.com/main.iwa.js:2:59596 i@https://iwa2.iplsc.com/main.iwa.js:2:398 @https://iwa2.iplsc.com/main.iwa.js:2:52061 i@https://iwa2.iplsc.com/main.iwa.js:2:398 @https://iwa2.iplsc.com/main.iwa.js:2:2164 @https://iwa2.iplsc.com/main.iwa.js:2:2174 Browser: Firefox 68.0.0 / Windows 10.0.0 13 | .expect(Selector('html').textContent).contains('John Smith') 14 |}); 15 | 16 |test('My second test', async t => { 17 | await t > 18 | .hover(cos) 19 | .wait(50000) 20 |}); at hover (C:\testcafe\test.js:18:10) at test (C:\testcafe\test.js:16:1) at <anonymous> (C:\testcafe\node_modules\testcafe\src\api\wrap-test-function.js:17:28) at TestRun._executeTestFn (C:\testcafe\node_modules\testcafe\src\test-run\index.js:288:19) at TestRun.start (C:\testcafe\node_modules\testcafe\src\test-run\index.js:337:24) 1/1 failed (6s) 1 skipped ``` </details> ### Steps to Reproduce: <!-- Describe what we should do to reproduce the behavior you encountered. --> 1. Open page 3. Hover mouse on Interia logo ### Your Environment details: * testcafe version: 1.4.2 <!-- run `testcafe -v` --> * node.js version: v10.16.0 <!-- run `node -v` --> * command-line arguments: testcafe chrome test.js ** the same with firefox <!-- example: "testcafe ie,chrome -e test.js" --> * browser name and version: Chrome 76.0.3809 / Windows 10.0.0<!-- example: IE 11, Chrome 69, Firefox 100, etc. --> * platform and version: Windows 10 <!-- example: "macOS 10.14, Windows, Linux Ubuntu 18.04.1, iOS 12 -->
process
test fails due to javascript error works with skip js errors if you have all reproduction steps with a complete sample app please share as many details as possible in the sections below make sure that you tried using the latest testcafe version where this behavior might have been already addressed before submitting an issue please check contributing md and existing issues in this repository  in case a similar issue exists or was already addressed this may save your time and ours what is your test scenario just simple hover action beginner with testcafe what is the current behavior test fails however in my opinion it shall be reported as error instead of fail unless this case really relates a launched page and is in fact a test failure what is the expected behavior work as when skip js errors is launched unless i am too rookie to know what am i talking about what is your web application and your testcafe test code your website url or attach your complete example test code import selector from testcafe fixture getting started page const cos selector standard interia logo test skip my first test async t await t settestspeed typetext input glfyf gsfi john smith presskey enter expect selector html textcontent contains john smith test my second test async t await t hover cos wait your complete test report getting started my first test × my second test a javascript error occurred on repeat test actions in the browser and check the console for errors if you see this error it means that the tested website caused it you can fix it or disable tracking javascript errors in testcafe to do the latter enable the skip js errors option if this error does not occur please write a new issue at javascript error details h indexof is not a function value value value e e e e e i i browser firefox windows expect selector html textcontent contains john smith test my second test async t await t hover cos wait at hover c testcafe test js at test c testcafe test js at c testcafe node modules testcafe src api wrap test function js at testrun executetestfn c testcafe node modules testcafe src test run index js at testrun start c testcafe node modules testcafe src test run index js failed skipped steps to reproduce open page hover mouse on interia logo your environment details testcafe version node js version command line arguments testcafe chrome test js the same with firefox browser name and version chrome windows platform and version windows
1
17,328
23,144,394,684
IssuesEvent
2022-07-28 22:09:21
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
Sharing complex non-pickleable object between processes with BaseManager
module: multiprocessing triaged
### 🐛 Describe the bug I am attempting to share a complex object that cannot be pickled between processes in a multi-gpu DDP training scenario. The recommended pythonic way I found to do this [here](https://stackoverflow.com/questions/3671666/sharing-a-complex-object-between-processes) is using a Manager object and manipulate my object with proxies. However when I import the following I get that the module cannot be found: ``` from torch.multiprocessing.managers import BaseManager ``` Does torch.multiprocessing not directly provide all the functionality of pythons multiprocessing and is there a better way to share non-pickleable custom objects between training processes in pytorch? ### Versions PyTorch version: 1.12.0+cu116 Is debug build: False CUDA used to build PyTorch: 11.6 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.4 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04) 9.4.0 Clang version: Could not collect CMake version: version 3.21.1 Libc version: glibc-2.31 Python version: 3.8.10 (default, Nov 26 2021, 20:14:08) [GCC 9.3.0] (64-bit runtime) Python platform: Linux-5.4.0-80-generic-x86_64-with-glibc2.29 Is CUDA available: True CUDA runtime version: 11.6.112 GPU models and configuration: GPU 0: NVIDIA Tesla V100-DGXS-32GB GPU 1: NVIDIA Tesla V100-DGXS-32GB GPU 2: NVIDIA Tesla V100-DGXS-32GB GPU 3: NVIDIA Tesla V100-DGXS-32GB Nvidia driver version: 465.19.01 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.3.3 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.3.3 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.3.3 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.3.3 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.3.3 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.3.3 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.3.3 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.21.1 [pip3] numpydoc==1.4.0 [pip3] torch==1.12.0+cu116 [pip3] torchaudio==0.12.0+cu116 [pip3] torchmetrics==0.9.2 [pip3] torchvision==0.13.0+cu116 cc @SsnL @VitalyFedyunin @ejguan @NivekT
1.0
Sharing complex non-pickleable object between processes with BaseManager - ### 🐛 Describe the bug I am attempting to share a complex object that cannot be pickled between processes in a multi-gpu DDP training scenario. The recommended pythonic way I found to do this [here](https://stackoverflow.com/questions/3671666/sharing-a-complex-object-between-processes) is using a Manager object and manipulate my object with proxies. However when I import the following I get that the module cannot be found: ``` from torch.multiprocessing.managers import BaseManager ``` Does torch.multiprocessing not directly provide all the functionality of pythons multiprocessing and is there a better way to share non-pickleable custom objects between training processes in pytorch? ### Versions PyTorch version: 1.12.0+cu116 Is debug build: False CUDA used to build PyTorch: 11.6 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.4 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04) 9.4.0 Clang version: Could not collect CMake version: version 3.21.1 Libc version: glibc-2.31 Python version: 3.8.10 (default, Nov 26 2021, 20:14:08) [GCC 9.3.0] (64-bit runtime) Python platform: Linux-5.4.0-80-generic-x86_64-with-glibc2.29 Is CUDA available: True CUDA runtime version: 11.6.112 GPU models and configuration: GPU 0: NVIDIA Tesla V100-DGXS-32GB GPU 1: NVIDIA Tesla V100-DGXS-32GB GPU 2: NVIDIA Tesla V100-DGXS-32GB GPU 3: NVIDIA Tesla V100-DGXS-32GB Nvidia driver version: 465.19.01 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.3.3 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.3.3 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.3.3 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.3.3 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.3.3 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.3.3 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.3.3 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.21.1 [pip3] numpydoc==1.4.0 [pip3] torch==1.12.0+cu116 [pip3] torchaudio==0.12.0+cu116 [pip3] torchmetrics==0.9.2 [pip3] torchvision==0.13.0+cu116 cc @SsnL @VitalyFedyunin @ejguan @NivekT
process
sharing complex non pickleable object between processes with basemanager 🐛 describe the bug i am attempting to share a complex object that cannot be pickled between processes in a multi gpu ddp training scenario the recommended pythonic way i found to do this is using a manager object and manipulate my object with proxies however when i import the following i get that the module cannot be found from torch multiprocessing managers import basemanager does torch multiprocessing not directly provide all the functionality of pythons multiprocessing and is there a better way to share non pickleable custom objects between training processes in pytorch versions pytorch version is debug build false cuda used to build pytorch rocm used to build pytorch n a os ubuntu lts gcc version ubuntu clang version could not collect cmake version version libc version glibc python version default nov bit runtime python platform linux generic with is cuda available true cuda runtime version gpu models and configuration gpu nvidia tesla dgxs gpu nvidia tesla dgxs gpu nvidia tesla dgxs gpu nvidia tesla dgxs nvidia driver version cudnn version probably one of the following usr lib linux gnu libcudnn so usr lib linux gnu libcudnn adv infer so usr lib linux gnu libcudnn adv train so usr lib linux gnu libcudnn cnn infer so usr lib linux gnu libcudnn cnn train so usr lib linux gnu libcudnn ops infer so usr lib linux gnu libcudnn ops train so hip runtime version n a miopen runtime version n a is xnnpack available true versions of relevant libraries numpy numpydoc torch torchaudio torchmetrics torchvision cc ssnl vitalyfedyunin ejguan nivekt
1
14,107
16,994,551,269
IssuesEvent
2021-07-01 03:39:10
Arch666Angel/mods
https://api.github.com/repos/Arch666Angel/mods
closed
Small inconsistency in AB
Angels Bio Processing Impact: Bug
The gold plate in the module logic board should be replaced with a silver plate to match the superior circuit board.
1.0
Small inconsistency in AB - The gold plate in the module logic board should be replaced with a silver plate to match the superior circuit board.
process
small inconsistency in ab the gold plate in the module logic board should be replaced with a silver plate to match the superior circuit board
1
128,374
10,526,673,348
IssuesEvent
2019-09-30 17:36:53
MicrosoftDocs/vsts-docs
https://api.github.com/repos/MicrosoftDocs/vsts-docs
closed
Cannot playback video recordings created by the Test & Feedback extension
Pri1 devops-test/tech devops/prod support-request
[I am unable to playback video recordings from Test & Feedback extension. I have both Chrome and VLC Video player installed. However, when I click on the video recording from Test &Feedback, it is trying to open MTMS files, advising me to look for a new app. ![image](https://user-images.githubusercontent.com/48555490/65588527-98dd6e80-df7f-11e9-9cc2-ce40f0098b0a.png) ] --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 985ecee3-e347-43bb-5ed2-08ce3aee7621 * Version Independent ID: fdf465fc-a9e1-0198-2e8c-e859c74252b3 * Content: [FAQs and problem solutions - Azure Test Plans](https://docs.microsoft.com/en-us/azure/devops/test/reference-qa?view=azure-devops#feedback) * Content Source: [docs/test/reference-qa.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/test/reference-qa.md) * Product: **devops** * Technology: **devops-test** * GitHub Login: @steved0x * Microsoft Alias: **sdanie**
1.0
Cannot playback video recordings created by the Test & Feedback extension - [I am unable to playback video recordings from Test & Feedback extension. I have both Chrome and VLC Video player installed. However, when I click on the video recording from Test &Feedback, it is trying to open MTMS files, advising me to look for a new app. ![image](https://user-images.githubusercontent.com/48555490/65588527-98dd6e80-df7f-11e9-9cc2-ce40f0098b0a.png) ] --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 985ecee3-e347-43bb-5ed2-08ce3aee7621 * Version Independent ID: fdf465fc-a9e1-0198-2e8c-e859c74252b3 * Content: [FAQs and problem solutions - Azure Test Plans](https://docs.microsoft.com/en-us/azure/devops/test/reference-qa?view=azure-devops#feedback) * Content Source: [docs/test/reference-qa.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/test/reference-qa.md) * Product: **devops** * Technology: **devops-test** * GitHub Login: @steved0x * Microsoft Alias: **sdanie**
non_process
cannot playback video recordings created by the test feedback extension i am unable to playback video recordings from test feedback extension i have both chrome and vlc video player installed however when i click on the video recording from test feedback it is trying to open mtms files advising me to look for a new app document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops test github login microsoft alias sdanie
0
445,406
31,237,015,108
IssuesEvent
2023-08-20 12:05:49
kkssbbb/spring-board-project
https://api.github.com/repos/kkssbbb/spring-board-project
closed
데이터베이스 접근 로직 테스트 정의
documentation enhancement
도메인 설계 내용 #9를 바탕으로 DB 와 연동하기 위한 방법을 구상, 셋팅, 테스트를 먼저 작성 * [x] DB 기술 선택 * [x] DB 에 접근할 수 있는 상태로 환경 셋팅 하기 - jpa 의존성 설정 - mysql 의존성 설정 - h2 db 의존성 절정 * [x] 테스트
1.0
데이터베이스 접근 로직 테스트 정의 - 도메인 설계 내용 #9를 바탕으로 DB 와 연동하기 위한 방법을 구상, 셋팅, 테스트를 먼저 작성 * [x] DB 기술 선택 * [x] DB 에 접근할 수 있는 상태로 환경 셋팅 하기 - jpa 의존성 설정 - mysql 의존성 설정 - h2 db 의존성 절정 * [x] 테스트
non_process
데이터베이스 접근 로직 테스트 정의 도메인 설계 내용 바탕으로 db 와 연동하기 위한 방법을 구상 셋팅 테스트를 먼저 작성 db 기술 선택 db 에 접근할 수 있는 상태로 환경 셋팅 하기 jpa 의존성 설정 mysql 의존성 설정 db 의존성 절정 테스트
0
44,453
5,628,754,774
IssuesEvent
2017-04-05 07:34:57
khartec/waltz
https://api.github.com/repos/khartec/waltz
closed
Survey: Ensure duplicate instance recipients are not created
bug fixed (test & close)
if a person has two roles in the same entity
1.0
Survey: Ensure duplicate instance recipients are not created - if a person has two roles in the same entity
non_process
survey ensure duplicate instance recipients are not created if a person has two roles in the same entity
0