| column | dtype | summary |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class (IssuesEvent) |
| created_at | string | length 19 |
| repo | string | lengths 4 – 112 |
| repo_url | string | lengths 33 – 141 |
| action | string | 3 classes |
| title | string | lengths 1 – 1.02k |
| labels | string | lengths 4 – 1.54k |
| body | string | lengths 1 – 262k |
| index | string | 17 classes |
| text_combine | string | lengths 95 – 262k |
| label | string | 2 classes (test, non_test) |
| text | string | lengths 96 – 252k |
| binary_label | int64 | 0 – 1 |
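Assuming this preview was exported from a pandas DataFrame with the columns above (the file name `issues.csv` below is a hypothetical placeholder), here is a minimal sketch for loading it and checking two invariants visible in the rows that follow: `type` has a single class, and `binary_label` mirrors `label` (test -> 1, non_test -> 0):

```python
import pandas as pd

# Hypothetical file name; columns follow the schema table above.
df = pd.read_csv("issues.csv")

# The preview shows a single event type ("IssuesEvent") and a two-class label.
assert df["type"].nunique() == 1
assert set(df["label"].unique()) <= {"test", "non_test"}

# binary_label appears to mirror label: test -> 1, non_test -> 0.
derived = (df["label"] == "test").astype("int64")
assert (derived == df["binary_label"]).all()
```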
163,663
| 25,854,654,009
|
IssuesEvent
|
2022-12-13 12:54:52
|
jupyterlab/jupyterlab
|
https://api.github.com/repos/jupyterlab/jupyterlab
|
closed
|
Erroneous style for active menu
|
bug tag:Design System CSS
|
The new color introduced in https://github.com/jupyterlab/jupyterlab/pull/13276 for:
https://github.com/jupyterlab/jupyterlab/blob/5c91f4be6bbdbf394cd6ea5f44a242b6df7cd2f0/packages/application/style/menus.css#L61
is wrong, as it sets the text color to the background color.
|
1.0
|
Erroneous style for active menu - The new color introduced in https://github.com/jupyterlab/jupyterlab/pull/13276 for:
https://github.com/jupyterlab/jupyterlab/blob/5c91f4be6bbdbf394cd6ea5f44a242b6df7cd2f0/packages/application/style/menus.css#L61
is wrong, as it sets the text color to the background color.
|
non_test
|
erroneous style for active menu the new color introduced in for is wrong as it set the text color to the background color
| 0
|
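In the row above, the `text` cell looks like `text_combine` lowercased with URLs, punctuation, and digits stripped. A rough, assumption-laden reconstruction of that cleaning step (the dataset's actual preprocessing code is not shown here):

```python
import re

def clean_text(s: str) -> str:
    """Approximate the `text` column: lowercase, drop URLs,
    punctuation, and digits, then collapse whitespace."""
    s = s.lower()
    s = re.sub(r"https?://\S+", " ", s)  # drop URLs
    s = re.sub(r"[^\w\s]", " ", s)       # drop punctuation, keep unicode letters
    s = re.sub(r"[\d_]", " ", s)         # drop digits and underscores
    return re.sub(r"\s+", " ", s).strip()

# clean_text("Erroneous style for active menu - The new color ...")
# -> "erroneous style for active menu the new color ..."
```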
26,440
| 4,226,271,079
|
IssuesEvent
|
2016-07-02 10:31:07
|
Bearded-Hen/Android-Bootstrap
|
https://api.github.com/repos/Bearded-Hen/Android-Bootstrap
|
closed
|
OnClickListener of BootstrapButton within BootstrapButtonGroup
|
bug ready_for_testing
|
Is any special method needed to capture click events of BootstrapButtons within a BootstrapButtonGroup?
Within a RecyclerView I have:
// vBtnFinished is a BootstrapButton
holder.vBtnFinished.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        ToastHelper.toast(context, "finish: " + task.getClientName());
    }
});
// vBtnWriteReport is a regular Button
holder.vBtnWriteReport.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        ToastHelper.toast(context, "report: " + task.getClientName());
    }
});
The report button 'vBtnWriteReport' fires just fine, whereas 'vBtnFinished' does not.
I tried every workaround I could think of, including 'android:descendantFocusability="blocksDescendants"' on the root of the items being inflated.
An OnTouchListener does fire; however, it seems to fire multiple times.
Thanks in advance
|
1.0
|
OnClickListener of BootstrapButton within BootstrapButtonGroup - Is any special method needed to capture click events of BootstrapButtons within a BootstrapButtonGroup?
Within a RecyclerView I have:
// vBtnFinished is a BootstrapButton
holder.vBtnFinished.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        ToastHelper.toast(context, "finish: " + task.getClientName());
    }
});
// vBtnWriteReport is a regular Button
holder.vBtnWriteReport.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        ToastHelper.toast(context, "report: " + task.getClientName());
    }
});
The report button 'vBtnWriteReport' fires just fine, whereas 'vBtnFinished' does not.
I tried every workaround I could think of, including 'android:descendantFocusability="blocksDescendants"' on the root of the items being inflated.
An OnTouchListener does fire; however, it seems to fire multiple times.
Thanks in advance
|
test
|
onclicklistener of bootstrapbutton within bootstrapbuttongroup any special method needed to capture click events of bootstrapbuttons within bootstrapbuttongroup within a recycleview i have vbtnfinished is a bootstrapbutton holder vbtnfinished setonclicklistener new view onclicklistener override public void onclick view view toasthelper toast context finish task getclientname vbtnwritereport is a regular button holder vbtnwritereport setonclicklistener new view onclicklistener override public void onclick view view toasthelper toast context report task getclientname the report button vbtnwritereport fires just fine whereas vbtnfinished does not tried every workaround i could think of including android descendantfocusability blocksdescendants on the root of items being inflated on touch listener does fire however multiple times it seems thanks in advance
| 1
|
404,532
| 27,489,767,991
|
IssuesEvent
|
2023-03-04 13:23:09
|
cleodora-forecasting/cleodora
|
https://api.github.com/repos/cleodora-forecasting/cleodora
|
closed
|
Doc: Update for 0.2.0
|
documentation
|
- [ ] Update cleosrv screenshot
- [x] Update changelog
- [x] Document how to update from 0.1.1 or earlier
|
1.0
|
Doc: Update for 0.2.0 - - [ ] Update cleosrv screenshot
- [x] Update changelog
- [x] Document how to update from 0.1.1 or earlier
|
non_test
|
doc update for update cleosrv screenshot update changelog document how to update from or earlier
| 0
|
478,138
| 13,773,785,388
|
IssuesEvent
|
2020-10-08 04:39:38
|
AY2021S1-CS2113-T14-1/tp
|
https://api.github.com/repos/AY2021S1-CS2113-T14-1/tp
|
closed
|
Change Usage display style
|
priority.Medium type.Task
|
Change to:
1. Name:
Status:
Power Consumption:
2. Name:
Status:
Power Consumption:
3. Name:
Status:
Power Consumption:
Total Consumption: .........
|
1.0
|
Change Usage display style - Change to:
1. Name:
Status:
Power Consumption:
2. Name:
Status:
Power Consumption:
3. Name:
Status:
Power Consumption:
Total Consumption: .........
|
non_test
|
change usage display style change to name status power consumption name status power consumption name status power consumption total consumption
| 0
|
40,344
| 5,287,441,324
|
IssuesEvent
|
2017-02-08 12:21:49
|
nextgis/nextgisweb_compulink
|
https://api.github.com/repos/nextgis/nextgisweb_compulink
|
closed
|
More noticeable highlighting of accepted segments
|
4 test enhancement frontend
|
>> I could not figure out how to display the highlighting of all accepted segments at the same time.
I suspect they are also being covered by other layers.
|
1.0
|
More noticeable highlighting of accepted segments - >> I could not figure out how to display the highlighting of all accepted segments at the same time.
I suspect they are also being covered by other layers.
|
test
|
more noticeable highlighting of accepted segments i could not figure out how to display the highlighting of all accepted segments at the same time i suspect they are also being covered by other layers
| 1
|
423,752
| 28,932,270,464
|
IssuesEvent
|
2023-05-09 01:07:32
|
Codelife-Compet/codelife.dev
|
https://api.github.com/repos/Codelife-Compet/codelife.dev
|
closed
|
Carousel component
|
documentation enhancement
|
### What is your feature related to?
Components
### Description
This feature aims to create an animated Carousel component and make it work with any kind of data.
### Tasks
- [x] Create styles for the next button and previous button
- [x] Users can see what position the current element is on the carousel
- [x] Works with any data
- [x] Use animations on the mount and unmount states
- [x] #37
- [x] Documentation in Storybook
### Acceptance Criteria
1. First criteria: Use framer-motion to animate
2. Second criteria: Have three variants (primary, secondary, and tertiary)
3. Third criteria: Improve performance in components like that (if necessary).
4. Fourth criteria: Create documentation on the Storybook with a use case
|
1.0
|
Carousel component - ### What is your feature related to?
Components
### Description
This feature aims to create an animated Carousel component and make it work with any kind of data.
### Tasks
- [x] Create styles for the next button and previous button
- [x] Users can see what position the current element is on the carousel
- [x] Works with any data
- [x] Use animations on the mount and unmount states
- [x] #37
- [x] Documentation in Storybook
### Acceptance Criteria
1. First criteria: Use framer-motion to animate
2. Second criteria: Have three variants (primary, secondary, and tertiary)
3. Third criteria: Improve performance in components like that (if necessary).
4. Fourth criteria: Create documentation on the Storybook with a use case
|
non_test
|
carousel component what your feature is related components description this feature aims to create an animated carousel component and improve that functionality to any kind of data tasks create styles for the next button and previous button users can see what position the current element is on the carousel works in any data use animations on the mount and unmount states documentation in storybook acceptance criteria first criteria use framer motion to animate second criteria have three variants primary secondary and tertiary third criteria improve performance in components like that if necessary fourth criteria create documentation on the storybook with a use case
| 0
|
74,620
| 7,434,232,792
|
IssuesEvent
|
2018-03-26 10:17:38
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
When I click on Add Member (add/edit cluster/project) change focus of cursor to Name
|
area/cluster area/projects area/ui kind/bug status/resolved status/to-test version/2.0
|
**Rancher versions:** 2.0 master 3/8
**Steps to Reproduce:**
1. Add/Edit a cluster or project
2. In members section click on Add Member
**Results:** Change focus of cursor to Name when I add a new member so I can just start adding them.
|
1.0
|
When I click on Add Member (add/edit cluster/project) change focus of cursor to Name - **Rancher versions:** 2.0 master 3/8
**Steps to Reproduce:**
1. Add/Edit a cluster or project
2. In members section click on Add Member
**Results:** Change focus of cursor to Name when I add a new member so I can just start adding them.
|
test
|
when i click on add member add edit cluster project change focus of cursor to name rancher versions master steps to reproduce add edit a cluster or project in members section click on add member results change focus of cursor to name when i add new member so i can just start adding them
| 1
|
115,226
| 14,705,157,198
|
IssuesEvent
|
2021-01-04 17:38:53
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
closed
|
[Development] 526/BDD: Conditionally Show the Rated Disabilities Information
|
526 BDD design development frontend vsa-benefits
|
## Issue Description
While doing BDD, our service member was shown the rated disabilities information. For both 526 and BDD, this information does not apply unless they have rated disabilities. It is more likely for BDD that they do NOT have rated disabilities but it's possible. Therefore, we likely need to actually check for rated disabilities when determining what the display content looks like.

That being said, it does say "if you have any", so the value-add to making this change may not be high versus the implementation difficulty. We have not done usability to determine whether this causes confusion since it is just an informational page.
## Acceptance Criteria
- [ ] If we decide to move forward with this, conditionally show the rated disabilities bullet only when relevant
- [ ] Remove the ", if you have any" from the bullet since we will know
## How to configure this issue
- [x] **Attached to a Milestone** (when will this be completed?)
- [x] **Attached to an Epic** (what body of work is this a part of?)
- [x] **Labeled with Team** (`product support`, `analytics-insights`, `operations`, `service-design`, `tools-be`, `tools-fe`)
- [x] **Labeled with Practice Area** (`backend`, `frontend`, `devops`, `design`, `research`, `product`, `ia`, `qa`, `analytics`, `contact center`, `research`, `accessibility`, `content`)
- [x] **Labeled with Type** (`bug`, `request`, `discovery`, `documentation`, etc.)
|
1.0
|
[Development] 526/BDD: Conditionally Show the Rated Disabilities Information - ## Issue Description
While doing BDD, our service member was shown the rated disabilities information. For both 526 and BDD, this information does not apply unless they have rated disabilities. It is more likely for BDD that they do NOT have rated disabilities but it's possible. Therefore, we likely need to actually check for rated disabilities when determining what the display content looks like.

That being said, it does say "if you have any", so the value-add to making this change may not be high versus the implementation difficulty. We have not done usability to determine whether this causes confusion since it is just an informational page.
## Acceptance Criteria
- [ ] If we decide to move forward with this, conditionally show the rated disabilities bullet only when relevant
- [ ] Remove the ", if you have any" from the bullet since we will know
## How to configure this issue
- [x] **Attached to a Milestone** (when will this be completed?)
- [x] **Attached to an Epic** (what body of work is this a part of?)
- [x] **Labeled with Team** (`product support`, `analytics-insights`, `operations`, `service-design`, `tools-be`, `tools-fe`)
- [x] **Labeled with Practice Area** (`backend`, `frontend`, `devops`, `design`, `research`, `product`, `ia`, `qa`, `analytics`, `contact center`, `research`, `accessibility`, `content`)
- [x] **Labeled with Type** (`bug`, `request`, `discovery`, `documentation`, etc.)
|
non_test
|
bdd conditionally show the rated disabilities information issue description while doing bdd our service member was shown the rated disabilities information for both and bdd this information does not apply unless they have rated disabilities it is more likely for bdd that they do not have rated disabilities but it s possible therefore we likely need to actually check for rated disabilities when determining what the display content looks like that being said it does say if you have any so the value add to making this change may not be high versus the implementation difficulty we have not done usability to determine whether this causes confusion since it is just an informational page acceptance criteria if we decide to move forward with this conditionally show the rated disabilities bullet only when relevant remove the if you have any from the bullet since we will know how to configure this issue attached to a milestone when will this be completed attached to an epic what body of work is this a part of labeled with team product support analytics insights operations service design tools be tools fe labeled with practice area backend frontend devops design research product ia qa analytics contact center research accessibility content labeled with type bug request discovery documentation etc
| 0
|
52,935
| 6,286,930,810
|
IssuesEvent
|
2017-07-19 14:01:32
|
JamesIves/discord-wow-armory-bot
|
https://api.github.com/repos/JamesIves/discord-wow-armory-bot
|
opened
|
More Options
|
enhancement testing
|
Wondering if supporting things like Mythic+ leaderboards and game token data might be helpful? These options look to be made available with the new Game Data API: https://us.battle.net/forums/en/bnet/topic/20757557566?page=1#post-15
It would also be useful to merge some of the existing API calls to use this single API if such a thing is possible.
|
1.0
|
More Options - Wondering if supporting things like Mythic+ leaderboards and game token data might be helpful? These options look to be made available with the new Game Data API: https://us.battle.net/forums/en/bnet/topic/20757557566?page=1#post-15
It would also be useful to merge some of the existing API calls to use this single API if such a thing is possible.
|
test
|
more options wondering if supporting things like mythic leaderboards and game token data might be helpful these options look to be made available with the new game data api it would also be useful to merge some of the existing api calls to use this single api if such a thing is possible
| 1
|
218,166
| 24,351,805,034
|
IssuesEvent
|
2022-10-03 01:21:16
|
samqws-marketing/amzn-ion-hive-serde
|
https://api.github.com/repos/samqws-marketing/amzn-ion-hive-serde
|
opened
|
CVE-2022-42003 (Medium) detected in jackson-databind-2.6.5.jar
|
security vulnerability
|
## CVE-2022-42003 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.6.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /integration-test/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.6.5/d50be1723a09befd903887099ff2014ea9020333/jackson-databind-2.6.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.6.5/d50be1723a09befd903887099ff2014ea9020333/jackson-databind-2.6.5.jar</p>
<p>
Dependency Hierarchy:
- hive-jdbc-2.3.9.jar (Root Library)
- hive-common-2.3.9.jar
- :x: **jackson-databind-2.6.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/amzn-ion-hive-serde/commit/ffb6641ebb10aac58bb7eec412635e91e79fac24">ffb6641ebb10aac58bb7eec412635e91e79fac24</a></p>
<p>Found in base branch: <b>0.3.0</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In FasterXML jackson-databind before 2.14.0-rc1, resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting, when the UNWRAP_SINGLE_VALUE_ARRAYS feature is enabled.
<p>Publish Date: 2022-10-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42003>CVE-2022-42003</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
|
True
|
CVE-2022-42003 (Medium) detected in jackson-databind-2.6.5.jar - ## CVE-2022-42003 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.6.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /integration-test/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.6.5/d50be1723a09befd903887099ff2014ea9020333/jackson-databind-2.6.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.6.5/d50be1723a09befd903887099ff2014ea9020333/jackson-databind-2.6.5.jar</p>
<p>
Dependency Hierarchy:
- hive-jdbc-2.3.9.jar (Root Library)
- hive-common-2.3.9.jar
- :x: **jackson-databind-2.6.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/amzn-ion-hive-serde/commit/ffb6641ebb10aac58bb7eec412635e91e79fac24">ffb6641ebb10aac58bb7eec412635e91e79fac24</a></p>
<p>Found in base branch: <b>0.3.0</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In FasterXML jackson-databind before 2.14.0-rc1, resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting, when the UNWRAP_SINGLE_VALUE_ARRAYS feature is enabled.
<p>Publish Date: 2022-10-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-42003>CVE-2022-42003</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
|
non_test
|
cve medium detected in jackson databind jar cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file integration test build gradle path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy hive jdbc jar root library hive common jar x jackson databind jar vulnerable library found in head commit a href found in base branch vulnerability details in fasterxml jackson databind before resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting when the unwrap single value arrays feature is enabled publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href
| 0
|
93,328
| 8,409,181,532
|
IssuesEvent
|
2018-10-12 06:11:52
|
ifmeorg/ifme
|
https://api.github.com/repos/ifmeorg/ifme
|
opened
|
Flakey test in strategies_controller_spec.rb related to `ActiveRecord::RecordNotUnique`
|
hacktoberfest refactoring tests
|
<!--[
Thank you for contributing! Please use this issue template.
Contributor Blurb: https://github.com/ifmeorg/ifme/wiki/Contributor-Blurb
Join Our Slack: https://github.com/ifmeorg/ifme/wiki/Join-Our-Slack
Issue creation is a contribution!
Need help? Post in the #dev channel on Slack
Please use the appropriate labels to tag this issue
]-->
# Description
<!--[Description of issue, this includes a feature suggestion, bug report, code cleanup, and refactoring idea]-->
I notice this is a frequent flakey test that fails on CircleCI. Sometimes I see it happen randomly when I run rspec in my dev environment.
```
spec/controllers/strategies_controller_spec.rb
Failure/Error: another_user = create(:user)
ActiveRecord::RecordNotUnique:
PG::UniqueViolation: ERROR: duplicate key value violates unique constraint "users_pkey"
DETAIL: Key (id)=(1) already exists.
: INSERT INTO "users" ("email", "encrypted_password", "created_at", "updated_at", "name", "uid") VALUES ($1, $2, $3, $4, $5, $6) RETURNING "id"
./spec/controllers/strategies_controller_spec.rb:298:in `block (5 levels) in <top (required)>'
------------------
--- Caused by: ---
PG::UniqueViolation:
ERROR: duplicate key value violates unique constraint "users_pkey"
DETAIL: Key (id)=(1) already exists.
./spec/controllers/strategies_controller_spec.rb:298:in `block (5 levels) in <top (required)>'
```
# Do you want to be the assignee to work on this?
🚫 <!--[NO, remove line if not applicable]-->
<!--[
You don't have to work on the issue to file an issue!
If you want to, assign yourself to the issue
If you are unable to find your username in the Assignees dropdown, let us know in #dev on Slack
]-->
|
1.0
|
Flakey test in strategies_controller_spec.rb related to `ActiveRecord::RecordNotUnique` - <!--[
Thank you for contributing! Please use this issue template.
Contributor Blurb: https://github.com/ifmeorg/ifme/wiki/Contributor-Blurb
Join Our Slack: https://github.com/ifmeorg/ifme/wiki/Join-Our-Slack
Issue creation is a contribution!
Need help? Post in the #dev channel on Slack
Please use the appropriate labels to tag this issue
]-->
# Description
<!--[Description of issue, this includes a feature suggestion, bug report, code cleanup, and refactoring idea]-->
I notice this is a frequent flakey test that fails on CircleCI. Sometimes I see it happen randomly when I run rspec in my dev environment.
```
spec/controllers/strategies_controller_spec.rb
Failure/Error: another_user = create(:user)
ActiveRecord::RecordNotUnique:
PG::UniqueViolation: ERROR: duplicate key value violates unique constraint "users_pkey"
DETAIL: Key (id)=(1) already exists.
: INSERT INTO "users" ("email", "encrypted_password", "created_at", "updated_at", "name", "uid") VALUES ($1, $2, $3, $4, $5, $6) RETURNING "id"
./spec/controllers/strategies_controller_spec.rb:298:in `block (5 levels) in <top (required)>'
------------------
--- Caused by: ---
PG::UniqueViolation:
ERROR: duplicate key value violates unique constraint "users_pkey"
DETAIL: Key (id)=(1) already exists.
./spec/controllers/strategies_controller_spec.rb:298:in `block (5 levels) in <top (required)>'
```
# Do you want to be the assignee to work on this?
🚫 <!--[NO, remove line if not applicable]-->
<!--[
You don't have to work on the issue to file an issue!
If you want to, assign yourself to the issue
If you are unable to find your username in the Assignees dropdown, let us know in #dev on Slack
]-->
|
test
|
flakey test in strategies controller spec rb related to activerecord recordnotunique thank you for contributing please use this issue template contributor blurb join our slack issue creation is a contribution need help post in the dev channel on slack please use the appropriate labels to tag this issue description i notice this is a frequent flakey test that fails on circleci sometimes i see it happen randomly when i run rspec in my dev environment spec controllers strategies controller spec rb failure error another user create user activerecord recordnotunique pg uniqueviolation error duplicate key value violates unique constraint users pkey detail key id already exists insert into users email encrypted password created at updated at name uid values returning id spec controllers strategies controller spec rb in block levels in caused by pg uniqueviolation error duplicate key value violates unique constraint users pkey detail key id already exists spec controllers strategies controller spec rb in block levels in do you want to be the assignee to work on this 🚫 you don t have to work on the issue to file an issue if you want to assign yourself to the issue if you are unable to find your username in the assignees dropdown let us know in dev on slack
| 1
|
217,663
| 16,724,961,239
|
IssuesEvent
|
2021-06-10 11:55:56
|
RECETOX/galaxytools
|
https://api.github.com/repos/RECETOX/galaxytools
|
closed
|
Create a tool for batch normalization
|
documentation
|
### Research
- [x] Select an appropriate algorithm for normalization (options: ComBat, WaveICA, NormAE)
-> [WaveICA](https://github.com/dengkuistat/WaveICA) (R package)
- [x] Identify the proper input format
### Integration
- [x] Create a Docker image
- [x] Create a galaxy wrapper
### Further steps
- [x] Change inputs for the tool once we handle metadata in a workflow
- [x] Add Galaxy tests
- [ ] Add "Workflow position" section once we have all the up/downstream tools
|
1.0
|
Create a tool for batch normalization - ### Research
- [x] Select an appropriate algorithm for normalization (options: ComBat, WaveICA, NormAE)
-> [WaveICA](https://github.com/dengkuistat/WaveICA) (R package)
- [x] Identify the proper input format
### Integration
- [x] Create a Docker image
- [x] Create a galaxy wrapper
### Further steps
- [x] Change inputs for the tool once we handle metadata in a workflow
- [x] Add Galaxy tests
- [ ] Add "Workflow position" section once we have all the up/downstream tools
|
non_test
|
create a tool for batch normalization research select an appropriate algorithm for normalization options combat waveica normae r package identify the proper input format integration create a docker image create a galaxy wrapper further steps change inputs for the tool once we handle metadata in a workflow add galaxy tests add workflow position section once we have all the up downstream tools
| 0
|
335,825
| 30,086,916,862
|
IssuesEvent
|
2023-06-29 09:18:57
|
SWM-Cupid/jikting-backend
|
https://api.github.com/repos/SWM-Cupid/jikting-backend
|
closed
|
[SP-29] Update the meeting team lookup API specification
|
docs fix test
|
## Jira Issue
[SP-29](https://soma-cupid.atlassian.net/browse/SP-29?atlOrigin=eyJpIjoiMzQ4NWM1NjZiNWZlNGFlM2I1MDA4NzlhMDU0ZjJlYmEiLCJwIjoiaiJ9)
## Requirements
- [ ] Fix the meeting team lookup API path
- [ ] Wrap the meeting team lookup response in a List
[SP-29]: https://soma-cupid.atlassian.net/browse/SP-29?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
|
1.0
|
[SP-29] Update the meeting team lookup API specification - ## Jira Issue
[SP-29](https://soma-cupid.atlassian.net/browse/SP-29?atlOrigin=eyJpIjoiMzQ4NWM1NjZiNWZlNGFlM2I1MDA4NzlhMDU0ZjJlYmEiLCJwIjoiaiJ9)
## Requirements
- [ ] Fix the meeting team lookup API path
- [ ] Wrap the meeting team lookup response in a List
[SP-29]: https://soma-cupid.atlassian.net/browse/SP-29?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
|
test
|
update the meeting team lookup api specification jira issue requirements fix the meeting team lookup api path wrap the meeting team lookup response in a list
| 1
|
130,670
| 10,632,154,622
|
IssuesEvent
|
2019-10-15 09:47:22
|
inbo/vespa-watch
|
https://api.github.com/repos/inbo/vespa-watch
|
closed
|
Authenticated users should also provide contact information
|
✅ to test 🐝 observation form
|
On the nest and individual form, authenticated users currently don't have to provide contact information. As a result, we don't know who submitted those obs.
- [ ] Adapt form so authenticated users also have to provide all the required contact info + terms_of_service
- [ ] Since there is no longer a difference between auth and no-auth users, the model can probably be simplified, e.g. by making fields mandatory.
|
1.0
|
Authenticated users should also provide contact information - On the nest and individual form, authenticated users currently don't have to provide contact information. As a result, we don't know who submitted those obs.
- [ ] Adapt form so authenticated users also have to provide all the required contact info + terms_of_service
- [ ] Since there is no longer a difference between auth and no-auth users, the model can probably be simplified, e.g. by making fields mandatory.
|
test
|
authenticated users should also provide contact information on the nest and individual form authenticated users currently don t have to provide contact information as a result we don t know who submitted those obs adapt form so authenticated users also have to provide all the required contact info terms of service since there is not difference anymore between auth and no auth users the model can probably be simplified e g making fields mandatory
| 1
|
47,705
| 5,908,640,616
|
IssuesEvent
|
2017-05-19 20:59:19
|
cerberustesting/cerberus-source
|
https://api.github.com/repos/cerberustesting/cerberus-source
|
closed
|
Cannot save robot host that contains @
|
Nat : bug Perim : GUIExecution Prio : minor TestCase : ToBeCreated
|
On the Robot page, the host field is sanitized.
@ is converted into ?
|
1.0
|
Cannot save robot host that contains @ - On the Robot page, the host field is sanitized.
@ is converted into ?
|
test
|
cannot save robot host that contains on robot page host field is sanitized is converted into
| 1
|
243,734
| 20,516,266,469
|
IssuesEvent
|
2022-03-01 12:05:35
|
Cookie-AutoDelete/Cookie-AutoDelete
|
https://api.github.com/repos/Cookie-AutoDelete/Cookie-AutoDelete
|
opened
|
[Bug] Logging out of some Progressive Web Apps (PWA) on desktop (chromium)
|
untested bug/issue
|
### Acknowledgements
- [X] I acknowledge that I have read the above items
### Describe the bug
Whenever I use the YouTube, Twitter, or Mastodon PWA on desktop, *some* of the cookies get deleted, even if I sign in immediately after starting the app. Whenever I log in from the PWA, less than a minute (immediately after, in the case of YouTube apps), the cookies get deleted, and I need to log in again. The only way to prevent this is to leave the webpage for the app open in a browser tab, making the app itself pointless. This is on the Brave Browser.
### To Reproduce
1. Go to any non-whitelisted website with a PWA feature.
2. Install the PWA.
3. Ensure that you have *no* open browser tabs for the website of the PWA that you just installed.
4. Open the PWA that you just installed.
5. Log in.
6. Go to the home page.
7. Wait.
### Expected Behavior
That, when I log into a PWA, I will stay logged in until I log out or close the PWA.
### Screenshots
_No response_
### System Info - Operating System (OS)
Manjaro KDE Edition (Kernel 5.16)
### System Info - Browser Info
Brave v1.36.105
### System Info - CookieAutoDelete Version
3.6.0
### Additional Context
While I've not tested this in other browsers, I don't see why this issue wouldn't persist on other chromium based browsers.
This only happens for some apps. This is *not* an issue for Reddit, but as mentioned before, is an issue that I've had with YouTube's Apps, as well as those for Mastodon.online, Twitter, and Odysee. I can get Odysee to stay logged in by greylisting its 'auth_token', but that's a bandaid, not a real solution. I'd like to be able to go back to *not* having to do that.
It also still keeps some of the app's cookies until I close it, but it gets rid of the ones that keep me logged in, even if I don't close it.
|
1.0
|
[Bug] Logging out of some Progressive Web Apps (PWA) on desktop (chromium) - ### Acknowledgements
- [X] I acknowledge that I have read the above items
### Describe the bug
Whenever I use the YouTube, Twitter, or Mastodon PWA on desktop, *some* of the cookies get deleted, even if I sign in immediately after starting the app. Whenever I log in from the PWA, less than a minute (immediately after, in the case of YouTube apps), the cookies get deleted, and I need to log in again. The only way to prevent this is to leave the webpage for the app open in a browser tab, making the app itself pointless. This is on the Brave Browser.
### To Reproduce
1. Go to any non-whitelisted website with a PWA feature.
2. Install the PWA.
3. Ensure that you have *no* open browser tabs for the website of the PWA that you just installed.
4. Open the PWA that you just installed.
5. Log in.
6. Go to the home page.
7. Wait.
### Expected Behavior
That, when I log into a PWA, I will stay logged in until I log out or close the PWA.
### Screenshots
_No response_
### System Info - Operating System (OS)
Manjaro KDE Edition (Kernel 5.16)
### System Info - Browser Info
Brave v1.36.105
### System Info - CookieAutoDelete Version
3.6.0
### Additional Context
While I've not tested this in other browsers, I don't see why this issue wouldn't persist on other chromium based browsers.
This only happens for some apps. This is *not* an issue for Reddit, but as mentioned before, is an issue that I've had with YouTube's Apps, as well as those for Mastodon.online, Twitter, and Odysee. I can get Odysee to stay logged in by greylisting its 'auth_token', but that's a bandaid, not a real solution. I'd like to be able to go back to *not* having to do that.
It also still keeps some of the app's cookies until I close it, but it gets rid of the ones that keep me logged in, even if I don't close it.
|
test
|
logging out of some progressive web apps pwa on desktop chromium acknowledgements i acknowledge that i have read the above items describe the bug whenever i use the youtube twitter or mastodon pwa on desktop some of the cookies get deleted even if i sign in immediately after starting the app whenever i log in from the pwa less than a minute immediately after in the case of youtube apps the cookies get deleted and i need to log in again the only way to prevent this is to leave the webpage for the app open in a browser tab making the app itself pointless this is on the brave browser to reproduce go to any non whitelisted website with a pwa feature install the pwa ensure that you have no open browser tabs for the website of the pwa that you just installed open the pwa that you just installed log in go to the home page wait expected behavior that when i log into a pwa i will stay logged in until i log out or close the pwa screenshots no response system info operating system os manjaro kde edition kernel system info browser info brave system info cookieautodelete version additional context while i ve not tested this in other browsers i don t see why this issue wouldn t persist on other chromium based browsers this only happens for some apps this is not an issue for reddit but as mentioned before is an issue that i ve had with youtube s apps as well as those for mastodon online twitter and odysee i can get odysee to stay logged in by greylisting its auth token but that s a bandaid not a real solution i d like to be able to go back to not having to do that it also still keeps some of the app s cookies until i close it but it gets rid of the ones that keep me logged in even if i don t close it
| 1
|
307,857
| 26,568,876,180
|
IssuesEvent
|
2023-01-20 23:53:44
|
US-EPA-CAMD/easey-testing
|
https://api.github.com/repos/US-EPA-CAMD/easey-testing
|
closed
|
MP Spans Test Automation v1.0
|
Emissioners Test Automation
|
## Context
Create one script that completes the following actions for MP spans:
Create, Edit, Evaluate, Revert to Official Record

|
1.0
|
MP Spans Test Automation v1.0 - ## Context
Create one script that completes the following actions for MP spans:
Create, Edit, Evaluate, Revert to Official Record

|
test
|
mp spans test automation context create one script that complete the following actions for mp spans create edit evaluate revert to offical record
| 1
|
30,130
| 14,429,565,479
|
IssuesEvent
|
2020-12-06 14:44:58
|
reakit/reakit
|
https://api.github.com/repos/reakit/reakit
|
closed
|
Menu performance (delay when initially rendering many Menu's)
|
performance stale
|
## 🐛 Bug report
### Current behavior
For a component which consists of many `Menu` components (in the below example, there are 20 menus w/ 10 items in each). There seems to be a bit of a delay when initially rendering the menus.
### Steps to reproduce the bug
Provide a repo or sandbox with the bug and describe the steps to reproduce it.
1. Open sandbox: https://codesandbox.io/s/billowing-mountain-izt5v?file=/index.js
2. When you click on "Show menus", you will see a delay before the `Menu`s render

3. Upon opening devtools, you will see the frame rate drops to 0 for about a second as the `Menu`'s are rendering
<img width="1649" alt="Screenshot 2020-08-31 at 5 09 26 pm" src="https://user-images.githubusercontent.com/7336481/91692807-5bc92580-ebad-11ea-8fa1-b0aa77c5d833.png">
### Expected behavior
Hopefully a delay which is unnoticeable by the user when "Show menus" is clicked.
### Possible solutions
I'm not exactly sure what the culprit is here to be honest, but upon investigation of the performance timeline in devtools, it looks like the `createComponent`, `createHook`, and popperjs fns (the `index.js`'s) are taking up a bit of time... Maybe something weird is happening in Popper?
<img width="823" alt="Screenshot 2020-08-31 at 5 11 21 pm" src="https://user-images.githubusercontent.com/7336481/91692810-5d92e900-ebad-11ea-8fc9-6333b8e085bf.png">
|
True
|
Menu performance (delay when initially rendering many Menu's) - ## 🐛 Bug report
### Current behavior
For a component which consists of many `Menu` components (in the below example, there are 20 menus w/ 10 items in each). There seems to be a bit of a delay when initially rendering the menus.
### Steps to reproduce the bug
Provide a repo or sandbox with the bug and describe the steps to reproduce it.
1. Open sandbox: https://codesandbox.io/s/billowing-mountain-izt5v?file=/index.js
2. When you click on "Show menus", you will see a delay before the `Menu`s render

3. Upon opening devtools, you will see the frame rate drops to 0 for about a second as the `Menu`'s are rendering
<img width="1649" alt="Screenshot 2020-08-31 at 5 09 26 pm" src="https://user-images.githubusercontent.com/7336481/91692807-5bc92580-ebad-11ea-8fa1-b0aa77c5d833.png">
### Expected behavior
Hopefully a delay which is unnoticeable by the user when "Show menus" is clicked.
### Possible solutions
I'm not exactly sure what the culprit is here to be honest, but upon investigation of the performance timeline in devtools, it looks like the `createComponent`, `createHook`, and popperjs fns (the `index.js`'s) are taking up a bit of time... Maybe something weird is happening in Popper?
<img width="823" alt="Screenshot 2020-08-31 at 5 11 21 pm" src="https://user-images.githubusercontent.com/7336481/91692810-5d92e900-ebad-11ea-8fc9-6333b8e085bf.png">
|
non_test
|
menu performance delay when initially rendering many menu s 🐛 bug report current behavior for a component which consists of many menu components in the below example there are menus w items in each there seems to be a bit of a delay when initially rendering the menus steps to reproduce the bug provide a repo or sandbox with the bug and describe the steps to reproduce it open sandbox when you click on show menus you will see a delay before the menu s render upon opening devtools you will see the frame rate drops to for about a second as the menu s are rendering img width alt screenshot at pm src expected behavior hopefully a delay which is unnoticeable by the user when show menus is clicked possible solutions i m not exactly sure what the culprit is here to be honest but upon investigation of the performance timeline in devtools it looks like the createcomponent createhook and popperjs fns the index js s are taking up a bit of time maybe something weird is happening in popper img width alt screenshot at pm src
| 0
|
97,207
| 8,651,574,268
|
IssuesEvent
|
2018-11-27 03:50:50
|
humera987/FXLabs-Test-Automation
|
https://api.github.com/repos/humera987/FXLabs-Test-Automation
|
closed
|
projecttest16 : ApiV1SkillsPostCreate
|
projecttest16
|
Project : projecttest16
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=MGI4ZWVmMTEtMDgwMy00YzU2LTlhOTYtNmRkMzA5NzBjYTVk; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 16 Nov 2018 05:41:28 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/skills
Request :
{
'accessKey' : null,
'createdBy' : null,
'createdDate' : null,
'description' : null,
'host' : null,
'id' : null,
'inactive' : null,
'key' : null,
'modifiedBy' : null,
'modifiedDate' : null,
'name' : null,
'opts' : [ ],
'org' : {
'createdBy' : null,
'createdDate' : null,
'id' : null,
'inactive' : null,
'modifiedBy' : null,
'modifiedDate' : null,
'name' : null,
'version' : null
},
'prop1' : null,
'prop2' : null,
'prop3' : null,
'prop4' : null,
'prop5' : null,
'secretKey' : null,
'skillType' : 'NOTIFICATION',
'version' : null
}
Response :
{
"timestamp" : "2018-11-16T05:41:28.575+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/skills"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 500] resolved-to [404 != 500] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]Assertion [@StatusCode != 200] resolved-to [404 != 200] result [Passed]
--- FX Bot ---
|
1.0
|
projecttest16 : ApiV1SkillsPostCreate - Project : projecttest16
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=MGI4ZWVmMTEtMDgwMy00YzU2LTlhOTYtNmRkMzA5NzBjYTVk; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 16 Nov 2018 05:41:28 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/skills
Request :
{
'accessKey' : null,
'createdBy' : null,
'createdDate' : null,
'description' : null,
'host' : null,
'id' : null,
'inactive' : null,
'key' : null,
'modifiedBy' : null,
'modifiedDate' : null,
'name' : null,
'opts' : [ ],
'org' : {
'createdBy' : null,
'createdDate' : null,
'id' : null,
'inactive' : null,
'modifiedBy' : null,
'modifiedDate' : null,
'name' : null,
'version' : null
},
'prop1' : null,
'prop2' : null,
'prop3' : null,
'prop4' : null,
'prop5' : null,
'secretKey' : null,
'skillType' : 'NOTIFICATION',
'version' : null
}
Response :
{
"timestamp" : "2018-11-16T05:41:28.575+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/skills"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 500] resolved-to [404 != 500] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]Assertion [@StatusCode != 200] resolved-to [404 != 200] result [Passed]
--- FX Bot ---
|
test
|
project job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request accesskey null createdby null createddate null description null host null id null inactive null key null modifiedby null modifieddate null name null opts org createdby null createddate null id null inactive null modifiedby null modifieddate null name null version null null null null null null secretkey null skilltype notification version null response timestamp status error not found message no message available path api api skills logs assertion resolved to result assertion resolved to result assertion resolved to result assertion resolved to result fx bot
| 1
|
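The log in the record above shows template assertions such as `@StatusCode != 401` being resolved against the actual response code and reported as Passed or Failed. A small illustrative sketch of that resolution step (not the bot's actual implementation):

```python
import operator

OPS = {"!=": operator.ne, "==": operator.eq}

def resolve(assertion: str, status_code: int) -> str:
    # "@StatusCode != 401" -> 404 != 401 -> "Passed",
    # mirroring the "resolved-to ... result" lines in the log.
    lhs, op, rhs = assertion.split()
    value = status_code if lhs == "@StatusCode" else int(lhs)
    return "Passed" if OPS[op](value, int(rhs)) else "Failed"

for a in ("@StatusCode != 401", "@StatusCode != 500",
          "@StatusCode != 404", "@StatusCode != 200"):
    print(a, "->", resolve(a, 404))
```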
23,878
| 12,139,528,251
|
IssuesEvent
|
2020-04-23 19:00:15
|
mozilla-mobile/fenix
|
https://api.github.com/repos/mozilla-mobile/fenix
|
opened
|
Analyze dex load time telemetry
|
eng:performance
|
We added the dex load time telemetry https://github.com/mozilla-mobile/fenix/issues/8803. Unfortunately, our analysis is blocked because the `startup-timeline` ping is not being ingested: https://bugzilla.mozilla.org/show_bug.cgi?id=1631491 Once we're unblocked, we should analyze the probe using the original purpose of the probe: is the dex load time negligible across a wide range of devices?
|
True
|
Analyze dex load time telemetry - We added the dex load time telemetry https://github.com/mozilla-mobile/fenix/issues/8803. Unfortunately, our analysis is blocked because the `startup-timeline` ping is not being ingested: https://bugzilla.mozilla.org/show_bug.cgi?id=1631491 Once we're unblocked, we should analyze the probe using the original purpose of the probe: is the dex load time negligible across a wide range of devices?
|
non_test
|
analyze dex load time telemetry we added the dex load time telemetry unfortunately our analysis is blocked because the startup timeline ping is not being ingested once we re unblocked we should analyze the probe using the original purpose of the probe is the dex load time negligible across a wide range of devices
| 0
|
113,896
| 9,668,366,833
|
IssuesEvent
|
2019-05-21 15:00:40
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: rebalance-leases-by-load failed
|
C-test-failure O-roachtest O-robot
|
SHA: https://github.com/cockroachdb/cockroach/commits/9671342fead0509bec0913bae4ae1f244660788e
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=rebalance-leases-by-load PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1298500&tab=buildLog
```
The test failed on branch=release-19.1, cloud=gce:
rebalance_load.go:132,rebalance_load.go:147,test.go:1251: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-1298500-rebalance-leases-by-load:4 -- ./workload run kv --read-percent=95 --splits=2 --tolerate-errors --concurrency=128 --duration=3m0s {pgurl:1-3} returned:
stderr:
I190521 11:26:20.365020 1 workload/workload.go:562 starting 2 splits
Error: ALTER TABLE kv SPLIT AT VALUES (3074457345618257920): pq: splits would be immediately discarded by merge queue; disable the merge queue first by running 'SET CLUSTER SETTING kv.range_merge.queue_enabled = false'
Error: ssh verbose log retained in /root/.roachprod/debug/ssh_35.196.43.61_2019-05-21T11:26:19Z: exit status 1
stdout:
: exit status 1
```
|
2.0
|
roachtest: rebalance-leases-by-load failed - SHA: https://github.com/cockroachdb/cockroach/commits/9671342fead0509bec0913bae4ae1f244660788e
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=rebalance-leases-by-load PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1298500&tab=buildLog
```
The test failed on branch=release-19.1, cloud=gce:
rebalance_load.go:132,rebalance_load.go:147,test.go:1251: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-1298500-rebalance-leases-by-load:4 -- ./workload run kv --read-percent=95 --splits=2 --tolerate-errors --concurrency=128 --duration=3m0s {pgurl:1-3} returned:
stderr:
I190521 11:26:20.365020 1 workload/workload.go:562 starting 2 splits
Error: ALTER TABLE kv SPLIT AT VALUES (3074457345618257920): pq: splits would be immediately discarded by merge queue; disable the merge queue first by running 'SET CLUSTER SETTING kv.range_merge.queue_enabled = false'
Error: ssh verbose log retained in /root/.roachprod/debug/ssh_35.196.43.61_2019-05-21T11:26:19Z: exit status 1
stdout:
: exit status 1
```
|
test
|
roachtest rebalance leases by load failed sha parameters to repro try don t forget to check out a clean suitable branch and experiment with the stress invocation until the desired results present themselves for example using stress instead of stressrace and passing the p stressflag which controls concurrency scripts gceworker sh start scripts gceworker sh mosh cd go src github com cockroachdb cockroach stdbuf ol el make stressrace tests rebalance leases by load pkg roachtest testtimeout stressflags maxtime timeout tee tmp stress log failed test the test failed on branch release cloud gce rebalance load go rebalance load go test go home agent work go src github com cockroachdb cockroach bin roachprod run teamcity rebalance leases by load workload run kv read percent splits tolerate errors concurrency duration pgurl returned stderr workload workload go starting splits error alter table kv split at values pq splits would be immediately discarded by merge queue disable the merge queue first by running set cluster setting kv range merge queue enabled false error ssh verbose log retained in root roachprod debug ssh exit status stdout exit status
| 1
|
166,398
| 6,304,039,979
|
IssuesEvent
|
2017-07-21 15:02:37
|
quintel/etmodel
|
https://api.github.com/repos/quintel/etmodel
|
closed
|
Hiding/showing goals/targets section doesn't work properly
|
Bug Priority
|
The target section does not seem to show up now (even if targets have been set) ...
|
1.0
|
Hiding/showing goals/targets section doesn't work properly - The target section does not seem to show up now (even if targets have been set) ...
|
non_test
|
hiding showing goals targets section doesn t work properly the target section does not seem to show up now even if targets have been set
| 0
|
309,279
| 26,659,532,308
|
IssuesEvent
|
2023-01-25 19:46:35
|
systemd/systemd
|
https://api.github.com/repos/systemd/systemd
|
closed
|
TEST-74-AUX-UTILS fails on Debian / systemd-firstboot --interactive inconsistent behaviour
|
tests vconsole
|
Running the autopkgtest suite on Debian, I get a failure of TEST-74-AUX-UTILS (full log attached).
[log.txt](https://github.com/systemd/systemd/files/10483942/log.txt)
Relevant parts are
```
[ 219.158009] testsuite-74.sh[273]: Welcome to your new installation of Linux!
[ 219.159553] testsuite-74.sh[273]: Please configure your system!
[ 219.160861] testsuite-74.sh[273]: -- Press any key to proceed --
[ 219.163951] testsuite-74.sh[273]: ‣ Please enter system locale name or number (empty to skip, "list" to list options): ‣ Please enter system message locale name or number (empty to skip, "list" to list options):
[ 219.164633] testsuite-74.sh[185]: + grep -q LANG=foo test-root/etc/default/locale
[ 219.170099] testsuite-74.sh[185]: + grep -q LC_MESSAGES=bar test-root/etc/default/locale
[ 219.176229] testsuite-74.sh[277]: + systemd-firstboot --root=test-root --prompt-keymap
[ 219.180245] testsuite-74.sh[276]: + echo -ne '\nfoo\n'
[ 219.191794] testsuite-74.sh[185]: + grep -q KEYMAP=foo test-root/etc/vconsole.conf
[ 219.196676] testsuite-74.sh[278]: grep: test-root/etc/vconsole.conf: No such file or directory
[ 219.199136] testsuite-74.sh[185]: + at_exit
[ 219.200910] testsuite-74.sh[185]: + [[ -n test-root ]]
[ 219.201602] testsuite-74.sh[185]: + ls -lR test-root
[ 219.213046] testsuite-74.sh[279]: test-root:
[ 219.213449] testsuite-74.sh[279]: total 8
[ 219.213791] testsuite-74.sh[279]: drwxr-xr-x 2 root root 4096 Jan 22 23:27 bin
[ 219.214087] testsuite-74.sh[279]: drwxr-xr-x 3 root root 4096 Jan 22 23:27 etc
[ 219.214405] testsuite-74.sh[279]: test-root/bin:
[ 219.214676] testsuite-74.sh[279]: total 0
[ 219.215034] testsuite-74.sh[279]: -rw-r--r-- 1 root root 0 Jan 22 23:27 barshell
[ 219.215277] testsuite-74.sh[279]: -rw-r--r-- 1 root root 0 Jan 22 23:27 fooshell
[ 219.215509] testsuite-74.sh[279]: test-root/etc:
[ 219.215761] testsuite-74.sh[279]: total 4
[ 219.216066] testsuite-74.sh[279]: drwxr-xr-x 2 root root 4096 Jan 22 23:27 default
[ 219.216501] testsuite-74.sh[279]: test-root/etc/default:
[ 219.219599] testsuite-74.sh[279]: total 4
[ 219.220017] testsuite-74.sh[279]: -rw-r--r-- 1 root root 25 Jan 22 23:27 locale
[ 219.220728] testsuite-74.sh[185]: + rm -fr test-root
```
Stracing systemd-firstboot in interactive mode, one can see that it looks for keymaps in `/usr/lib/kbd/`, which do not exist on Debian/Ubuntu. If those are missing, systemd-firstboot in interactive mode will not generate an /etc/vconsole.conf.
Interestingly, in non-interactive mode, systemd-firstboot does not check for `/usr/lib/kbd/` and generates /etc/vconsole.conf.
As said, Debian/Ubuntu does not use (and does not ship!) the keymaps provided by kbd but uses a different setup based on console-setup.
Consequently, the Debian/Ubuntu package is built with `-Dvconsole=false`.
Ideas how to fix this test failure:
a/ Always generate /etc/vconsole.conf in interactive mode without checking for /usr/lib/kbd
b/ If systemd was built with `-Dvconsole=off`, do *not* generate a /etc/vconsole.conf and update TEST-74-AUX-UTILS accordingly to skip those checks.
I'd personally prefer b/ but would welcome feedback before working on a PR.
|
1.0
|
TEST-74-AUX-UTILS fails on Debian / systemd-firstboot --interactive inconsistent behaviour - Running the autopkgtest suite on Debian, I get a failure of TEST-74-AUX-UTILS (full log attached).
[log.txt](https://github.com/systemd/systemd/files/10483942/log.txt)
Relevant parts are
```
[ 219.158009] testsuite-74.sh[273]: Welcome to your new installation of Linux!
[ 219.159553] testsuite-74.sh[273]: Please configure your system!
[ 219.160861] testsuite-74.sh[273]: -- Press any key to proceed --
[ 219.163951] testsuite-74.sh[273]: ‣ Please enter system locale name or number (empty to skip, "list" to list options): ‣ Please enter system message locale name or number (empty to skip, "list" to list options):
[ 219.164633] testsuite-74.sh[185]: + grep -q LANG=foo test-root/etc/default/locale
[ 219.170099] testsuite-74.sh[185]: + grep -q LC_MESSAGES=bar test-root/etc/default/locale
[ 219.176229] testsuite-74.sh[277]: + systemd-firstboot --root=test-root --prompt-keymap
[ 219.180245] testsuite-74.sh[276]: + echo -ne '\nfoo\n'
[ 219.191794] testsuite-74.sh[185]: + grep -q KEYMAP=foo test-root/etc/vconsole.conf
[ 219.196676] testsuite-74.sh[278]: grep: test-root/etc/vconsole.conf: No such file or directory
[ 219.199136] testsuite-74.sh[185]: + at_exit
[ 219.200910] testsuite-74.sh[185]: + [[ -n test-root ]]
[ 219.201602] testsuite-74.sh[185]: + ls -lR test-root
[ 219.213046] testsuite-74.sh[279]: test-root:
[ 219.213449] testsuite-74.sh[279]: total 8
[ 219.213791] testsuite-74.sh[279]: drwxr-xr-x 2 root root 4096 Jan 22 23:27 bin
[ 219.214087] testsuite-74.sh[279]: drwxr-xr-x 3 root root 4096 Jan 22 23:27 etc
[ 219.214405] testsuite-74.sh[279]: test-root/bin:
[ 219.214676] testsuite-74.sh[279]: total 0
[ 219.215034] testsuite-74.sh[279]: -rw-r--r-- 1 root root 0 Jan 22 23:27 barshell
[ 219.215277] testsuite-74.sh[279]: -rw-r--r-- 1 root root 0 Jan 22 23:27 fooshell
[ 219.215509] testsuite-74.sh[279]: test-root/etc:
[ 219.215761] testsuite-74.sh[279]: total 4
[ 219.216066] testsuite-74.sh[279]: drwxr-xr-x 2 root root 4096 Jan 22 23:27 default
[ 219.216501] testsuite-74.sh[279]: test-root/etc/default:
[ 219.219599] testsuite-74.sh[279]: total 4
[ 219.220017] testsuite-74.sh[279]: -rw-r--r-- 1 root root 25 Jan 22 23:27 locale
[ 219.220728] testsuite-74.sh[185]: + rm -fr test-root
```
Stracing systemd-firstboot in interactive mode, one can see that it looks for keymaps in `/usr/lib/kbd/`, which does not exist on Debian/Ubuntu. If those keymaps are missing, systemd-firstboot in interactive mode will not generate an /etc/vconsole.conf.
Interestingly, in non-interactive mode, systemd-firstboot does not check for `/usr/lib/kbd/` and generates /etc/vconsole.conf.
As said, Debian/Ubuntu does not use (and does not ship!) the keymaps provided by kbd but uses a different setup based on console-setup.
Consequently, the Debian/Ubuntu package is built with `-Dvconsole=false`.
Ideas how to fix this test failure:
a/ Always generate /etc/vconsole.conf in interactive mode without checking for /usr/lib/kbd
b/ If systemd was built with `-Dvconsole=off`, do *not* generate a /etc/vconsole.conf and update TEST-74-AUX-UTILS accordingly to skip those checks.
I'd personally prefer b/ but would welcome feedback before working on a PR.
|
test
|
test aux utils fails on debian systemd firstboot interactive inconsistent behaviour running the autopkgtest suite on debian i get a failure of test aux utils full log attached relevant parts are testsuite sh welcome to your new installation of linux testsuite sh please configure your system testsuite sh press any key to proceed testsuite sh ‣ please enter system locale name or number empty to skip list to list options ‣ please enter system message locale name or number empty to skip list to list options testsuite sh grep q lang foo test root etc default locale testsuite sh grep q lc messages bar test root etc default locale testsuite sh systemd firstboot root test root prompt keymap testsuite sh echo ne nfoo n testsuite sh grep q keymap foo test root etc vconsole conf testsuite sh grep test root etc vconsole conf no such file or directory testsuite sh at exit testsuite sh testsuite sh ls lr test root testsuite sh test root testsuite sh total testsuite sh drwxr xr x root root jan bin testsuite sh drwxr xr x root root jan etc testsuite sh test root bin testsuite sh total testsuite sh rw r r root root jan barshell testsuite sh rw r r root root jan fooshell testsuite sh test root etc testsuite sh total testsuite sh drwxr xr x root root jan default testsuite sh test root etc default testsuite sh total testsuite sh rw r r root root jan locale testsuite sh rm fr test root stracing systemd firstboot in interactive mode one can see that it looks for keymaps in usr lib kbd which do not exist on debian ubuntu if those are missing systemd firstboot in interactive mode will not generate an etc vconsole conf interestingly in non interactive mode systemd firstboot does not check for usr lib kbd and generates etc vconsole conf as said debian ubuntu does not use and does not ship the keymaps provided by kbd but uses a different setup based on console setup consequently the debian ubuntu package is built with dvconsole false ideas how to fix this test failure a always generate etc vconsole conf in interactive mode without checking for usr lib kbd b if systemd was built with dvconsole off do not generate a etc vconsole conf and update test aux utils accordingly to skip those checks i d personally prefer b but would welcome feedback before working on a pr
| 1
|
402,716
| 27,383,294,101
|
IssuesEvent
|
2023-02-28 11:32:45
|
alenahal/git_lesson
|
https://api.github.com/repos/alenahal/git_lesson
|
closed
|
Write more documentation
|
documentation
|
The only documentation we have for the moment is in readme file and python script
|
1.0
|
Write more documentation - The only documentation we have for the moment is in readme file and python script
|
non_test
|
write more documentation the only documentation we have for the moment is in readme file and python script
| 0
|
259,253
| 22,422,689,596
|
IssuesEvent
|
2022-06-20 06:02:28
|
samisagit/natskell
|
https://api.github.com/repos/samisagit/natskell
|
closed
|
Investigate docker in CI
|
good first issue tests CI
|
To run integration tests in CI we'll need to start docker containers in the test action.
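A rough sketch of what that could look like with a GitHub Actions service container; the NATS image, port, and `stack test` invocation are assumptions, not the project's actual setup:
```yaml
jobs:
  integration:
    runs-on: ubuntu-latest
    services:
      nats:                # placeholder service the integration tests talk to
        image: nats:latest
        ports:
          - "4222:4222"    # default NATS client port
    steps:
      - uses: actions/checkout@v3
      - run: stack test    # assumes a stack-based test suite
```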
|
1.0
|
Investigate docker in CI - To run integration tests in CI we'll need to start docker containers in the test action.
|
test
|
investigate docker in ci to run integration tests in ci we ll need to start docker containers in the test action
| 1
|
128,963
| 10,556,806,168
|
IssuesEvent
|
2019-10-04 03:35:31
|
rust-lang/rust
|
https://api.github.com/repos/rust-lang/rust
|
closed
|
ICE on higher-trait bounds
|
A-traits C-bug E-needstest I-ICE P-high T-compiler
|
The following code:
```
use std::iter::Map;
trait Foo { }
impl<T> Foo for &'_ T where T: Foo { }
impl<T> Foo for Option<T> where T: Foo { }
impl Foo for u32 { }
fn trigger_error<I, F>(iterable: I, functor: F)
where
for<'t> &'t I: IntoIterator,
for<'t> Map<<&'t I as IntoIterator>::IntoIter, F>: Iterator,
for<'t> <Map<<&'t I as IntoIterator>::IntoIter, F> as Iterator>::Item: Foo,
{
}
fn main() {
trigger_error(Vec::<u32>::new(), |x: &u32| Some(x))
}
```
results in an ICE
```
error: internal compiler error: src/librustc/infer/lexical_region_resolve/mod.rs:632: collect_error_for_expanding_node() could not find error for var '_#2r in universe U11, lower_bounds=[
RegionAndOrigin(RePlaceholder(Placeholder { universe: U1, name: BrNamed(crate0:DefIndex(1:14), 't) }),Subtype(TypeTrace(ObligationCause { span: src/main.rs:20:5: 20:18, body_id: HirId { owner: DefIndex(0:9), local_id: 24 }, code: ImplDerivedObligation(DerivedObligationCause { parent_trait_ref: Binder(<std::iter::Map<<&'t _ as std::iter::IntoIterator>::IntoIter, _> as std::iter::Iterator>), parent_code: ItemObligation(DefId(0/0:8 ~ playground[b1a9]::trigger_error[0])) }) }))),
RegionAndOrigin(RePlaceholder(Placeholder { universe: U1, name: BrNamed(crate0:DefIndex(1:14), 't) }),Subtype(TypeTrace(ObligationCause { span: src/main.rs:20:5: 20:18, body_id: HirId { owner: DefIndex(0:9), local_id: 24 }, code: ImplDerivedObligation(DerivedObligationCause { parent_trait_ref: Binder(<std::iter::Map<<&'t _ as std::iter::IntoIterator>::IntoIter, _> as std::iter::Iterator>), parent_code: ItemObligation(DefId(0/0:8 ~ playground[b1a9]::trigger_error[0])) }) })))
], upper_bounds=[
RegionAndOrigin(RePlaceholder(Placeholder { universe: U1, name: BrNamed(crate0:DefIndex(1:14), 't) }),Subtype(TypeTrace(ObligationCause { span: src/main.rs:20:5: 20:18, body_id: HirId { owner: DefIndex(0:9), local_id: 24 }, code: ImplDerivedObligation(DerivedObligationCause { parent_trait_ref: Binder(<std::iter::Map<<&'t _ as std::iter::IntoIterator>::IntoIter, _> as std::iter::Iterator>), parent_code: ItemObligation(DefId(0/0:8 ~ playground[b1a9]::trigger_error[0])) }) }))),
RegionAndOrigin(RePlaceholder(Placeholder { universe: U1, name: BrNamed(crate0:DefIndex(1:14), 't) }),Subtype(TypeTrace(ObligationCause { span: src/main.rs:20:5: 20:18, body_id: HirId { owner: DefIndex(0:9), local_id: 24 }, code: ImplDerivedObligation(DerivedObligationCause { parent_trait_ref: Binder(<std::iter::Map<<&'t _ as std::iter::IntoIterator>::IntoIter, _> as std::iter::Iterator>), parent_code: ItemObligation(DefId(0/0:8 ~ playground[b1a9]::trigger_error[0])) }) })))
]
```
Tested this on stable and nightly. May be somehow related to #60070.
|
1.0
|
ICE on higher-trait bounds - The following code:
```
use std::iter::Map;
trait Foo { }
impl<T> Foo for &'_ T where T: Foo { }
impl<T> Foo for Option<T> where T: Foo { }
impl Foo for u32 { }
fn trigger_error<I, F>(iterable: I, functor: F)
where
for<'t> &'t I: IntoIterator,
for<'t> Map<<&'t I as IntoIterator>::IntoIter, F>: Iterator,
for<'t> <Map<<&'t I as IntoIterator>::IntoIter, F> as Iterator>::Item: Foo,
{
}
fn main() {
trigger_error(Vec::<u32>::new(), |x: &u32| Some(x))
}
```
results in an ICE
```
error: internal compiler error: src/librustc/infer/lexical_region_resolve/mod.rs:632: collect_error_for_expanding_node() could not find error for var '_#2r in universe U11, lower_bounds=[
RegionAndOrigin(RePlaceholder(Placeholder { universe: U1, name: BrNamed(crate0:DefIndex(1:14), 't) }),Subtype(TypeTrace(ObligationCause { span: src/main.rs:20:5: 20:18, body_id: HirId { owner: DefIndex(0:9), local_id: 24 }, code: ImplDerivedObligation(DerivedObligationCause { parent_trait_ref: Binder(<std::iter::Map<<&'t _ as std::iter::IntoIterator>::IntoIter, _> as std::iter::Iterator>), parent_code: ItemObligation(DefId(0/0:8 ~ playground[b1a9]::trigger_error[0])) }) }))),
RegionAndOrigin(RePlaceholder(Placeholder { universe: U1, name: BrNamed(crate0:DefIndex(1:14), 't) }),Subtype(TypeTrace(ObligationCause { span: src/main.rs:20:5: 20:18, body_id: HirId { owner: DefIndex(0:9), local_id: 24 }, code: ImplDerivedObligation(DerivedObligationCause { parent_trait_ref: Binder(<std::iter::Map<<&'t _ as std::iter::IntoIterator>::IntoIter, _> as std::iter::Iterator>), parent_code: ItemObligation(DefId(0/0:8 ~ playground[b1a9]::trigger_error[0])) }) })))
], upper_bounds=[
RegionAndOrigin(RePlaceholder(Placeholder { universe: U1, name: BrNamed(crate0:DefIndex(1:14), 't) }),Subtype(TypeTrace(ObligationCause { span: src/main.rs:20:5: 20:18, body_id: HirId { owner: DefIndex(0:9), local_id: 24 }, code: ImplDerivedObligation(DerivedObligationCause { parent_trait_ref: Binder(<std::iter::Map<<&'t _ as std::iter::IntoIterator>::IntoIter, _> as std::iter::Iterator>), parent_code: ItemObligation(DefId(0/0:8 ~ playground[b1a9]::trigger_error[0])) }) }))),
RegionAndOrigin(RePlaceholder(Placeholder { universe: U1, name: BrNamed(crate0:DefIndex(1:14), 't) }),Subtype(TypeTrace(ObligationCause { span: src/main.rs:20:5: 20:18, body_id: HirId { owner: DefIndex(0:9), local_id: 24 }, code: ImplDerivedObligation(DerivedObligationCause { parent_trait_ref: Binder(<std::iter::Map<<&'t _ as std::iter::IntoIterator>::IntoIter, _> as std::iter::Iterator>), parent_code: ItemObligation(DefId(0/0:8 ~ playground[b1a9]::trigger_error[0])) }) })))
]
```
Tested this on stable and nightly. May be somehow related to #60070.
|
test
|
ice on higher trait bounds the following code use std iter map trait foo impl foo for t where t foo impl foo for option where t foo impl foo for fn trigger error iterable i functor f where for t i intoiterator for map intoiter f iterator for intoiter f as iterator item foo fn main trigger error vec new x some x results in an ice error internal compiler error src librustc infer lexical region resolve mod rs collect error for expanding node could not find error for var in universe lower bounds regionandorigin replaceholder placeholder universe name brnamed defindex t subtype typetrace obligationcause span src main rs body id hirid owner defindex local id code implderivedobligation derivedobligationcause parent trait ref binder intoiter as std iter iterator parent code itemobligation defid playground trigger error regionandorigin replaceholder placeholder universe name brnamed defindex t subtype typetrace obligationcause span src main rs body id hirid owner defindex local id code implderivedobligation derivedobligationcause parent trait ref binder intoiter as std iter iterator parent code itemobligation defid playground trigger error upper bounds regionandorigin replaceholder placeholder universe name brnamed defindex t subtype typetrace obligationcause span src main rs body id hirid owner defindex local id code implderivedobligation derivedobligationcause parent trait ref binder intoiter as std iter iterator parent code itemobligation defid playground trigger error regionandorigin replaceholder placeholder universe name brnamed defindex t subtype typetrace obligationcause span src main rs body id hirid owner defindex local id code implderivedobligation derivedobligationcause parent trait ref binder intoiter as std iter iterator parent code itemobligation defid playground trigger error tested this on stable and nightly may be somehow related to
| 1
|
65,434
| 7,879,409,266
|
IssuesEvent
|
2018-06-26 13:20:47
|
nextcloud/spreed
|
https://api.github.com/repos/nextcloud/spreed
|
closed
|
Active users should always be sorted first, even if not moderators
|
1. to develop design enhancement papercut
|
Currently the sorting is:
- Moderators (online)
- Moderators (off)
- Regular users (online)
- Regular users (off)
Instead it should be:
- Moderators (online)
- Regular users (online)
- Moderators (off)
- Regular users (off)
(Just talked about with @Ivansss @schiessle )
|
1.0
|
Active users should always be sorted first, even if not moderators - Currently the sorting is:
- Moderators (online)
- Moderators (off)
- Regular users (online)
- Regular users (off)
Instead it should be:
- Moderators (online)
- Regular users (online)
- Moderators (off)
- Regular users (off)
(Just talked about with @Ivansss @schiessle )
|
non_test
|
active users should always be sorted first even if not moderators currently the sorting is moderators online moderators off regular users online regular users off instead it should be moderators online regular users online moderators off regular users off just talked about with ivansss schiessle
| 0
|
115,779
| 14,887,496,773
|
IssuesEvent
|
2021-01-20 18:25:24
|
r-anime/surveysite
|
https://api.github.com/repos/r-anime/surveysite
|
opened
|
Yes/no questions with checkboxes may be unintuitive
|
design
|
Either the questions themselves have to be worded differently, or the checkbox has to be replaced with, e.g., [buttons](https://bootstrap-vue.org/docs/components/form-checkbox#button-style-checkboxes)
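A minimal sketch of the button-style variant from those docs (the field name and label are placeholders):
```html
<!-- bootstrap-vue renders this checkbox as a toggle button -->
<b-form-checkbox v-model="watchedAnime" button button-variant="outline-primary">
  Watched this anime?
</b-form-checkbox>
```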
|
1.0
|
Yes/no questions with checkboxes may be unintuitive - Either the questions themselves have to be worded differently, or the checkbox has to be replaced with, e.g., [buttons](https://bootstrap-vue.org/docs/components/form-checkbox#button-style-checkboxes)
|
non_test
|
yes no questions with checkboxes may be unintuitive either the questions themselves have to be worded differently or the checkbox has to be changed by e g
| 0
|
12,929
| 3,295,466,442
|
IssuesEvent
|
2015-10-31 23:52:43
|
qux-lang/qux
|
https://api.github.com/repos/qux-lang/qux
|
closed
|
Test suite
|
kind: tests new: todo state: in progress status: breakdown
|
Add in a test suite. The suite should mostly include integration tests for checking how files compile together and run.
|
1.0
|
Test suite - Add in a test suite. The suite should mostly include integration tests for checking how files compile together and run.
|
test
|
test suite add in a test suite the suite should mostly include integration tests for checking how files compile together and run
| 1
|
142,464
| 11,473,217,826
|
IssuesEvent
|
2020-02-09 21:49:07
|
catalyst-cooperative/pudl
|
https://api.github.com/repos/catalyst-cooperative/pudl
|
closed
|
Set up OpenVPN on Travis CI to allow FTP of ferc1 & epacems data
|
testing
|
The Travis CI build servers undergo dynamic load balancing constantly, which means the IP addresses of individual instances are always changing. This makes it impossible for anything to reliably connect to the outside world over FTP, which is why the `ferc1` and `epacems` data have been impossible to download there for testing (not because Travis has been blacklisted by those agencies, whew!).
Unfortunately, neither FERC nor EPA supports access to these files over SFTP or HTTP, or HTTPS, any of which would work fine. Barring changes by those agencies, it is [apparently possible to work around this limitation](https://docs.travis-ci.com/user/common-build-problems/?utm_source=blog&utm_medium=web&utm_campaign=ftp_blog#ftpsmtpother-protocol-does-not-work) by setting up OpenVPN on the Travis CI build instance, and connecting to the outside world through that VPN tunnel. Not sure exactly what all we would need to do to make that work but maybe we can figure it out... This would be preferable to the current fake-data setup, since it would mean we were regularly testing the download of all the datasets, and wouldn't need to have the big honking fake data checked in to git.
More details [from Travis CI here](https://blog.travis-ci.com/2018-07-23-the-tale-of-ftp-at-travis-ci).
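For reference, a rough sketch of the workaround from those links, assuming a pre-provisioned VPN endpoint and an encrypted client config checked into the repo; all file names and variables here are placeholders:
```yaml
before_install:
  - sudo apt-get -qq update && sudo apt-get install -y openvpn
  # client.ovpn.enc and $VPN_KEY are placeholders for an encrypted config
  - openssl aes-256-cbc -d -in client.ovpn.enc -out client.ovpn -k "$VPN_KEY"
  - sudo openvpn --config client.ovpn --daemon
  - sleep 10  # give the tunnel time to come up before the FTP downloads start
```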
|
1.0
|
Set up OpenVPN on Travis CI to allow FTP of ferc1 & epacems data - The Travis CI build servers undergo dynamic load balancing constantly, which means the IP addresses of individual instances are always changing. This makes it impossible for anything to reliably connect to the outside world over FTP, which is why the `ferc1` and `epacems` data have been impossible to download there for testing (not because Travis has been blacklisted by those agencies, whew!).
Unfortunately, neither FERC nor EPA supports access to these files over SFTP or HTTP, or HTTPS, any of which would work fine. Barring changes by those agencies, it is [apparently possible to work around this limitation](https://docs.travis-ci.com/user/common-build-problems/?utm_source=blog&utm_medium=web&utm_campaign=ftp_blog#ftpsmtpother-protocol-does-not-work) by setting up OpenVPN on the Travis CI build instance, and connecting to the outside world through that VPN tunnel. Not sure exactly what all we would need to do to make that work but maybe we can figure it out... This would be preferable to the current fake-data setup, since it would mean we were regularly testing the download of all the datasets, and wouldn't need to have the big honking fake data checked in to git.
More details [from Travis CI here](https://blog.travis-ci.com/2018-07-23-the-tale-of-ftp-at-travis-ci).
|
test
|
set up openvpn on travis ci to allow ftp of epacems data the travis ci build servers undergo dynamic load balancing constantly which means the ip addresses of individual instances are always changing this makes it impossible for anything to reliably connect to the outside world over ftp which is why the and epacems data have been impossible to download there for testing not because travis has been blacklisted by those agencies whew unfortunately neither ferc nor epa supports access to these files over sftp or http or https any of which would work fine barring changes by those agencies it is by setting up openvpn on the travis ci build instance and connecting to the outside world through that vpn tunnel not sure exactly what all we would need to do to make that work but maybe we can figure it out this would be preferable to the current fake data setup since it would mean we were regularly testing the download of all the datasets and wouldn t need to have the big honking fake data checked in to git more details
| 1
|
244,138
| 20,611,830,538
|
IssuesEvent
|
2022-03-07 09:24:27
|
momentum-mod/game
|
https://api.github.com/repos/momentum-mod/game
|
closed
|
sv_airdecelerate no longer behaving as intended in conc gamemode
|
Type: Bug Blocked: Needs testing & verification Outcome: Resolved
|
### Describe the bug
decelerating with +back should be slower than the actual air acceleration value.
### How To Reproduce
press +back while zooming and it feels very strong
### Expected Behavior
shouldn't be as strong
### Operating System
Windows 10
### Renderer
DX11 (default)
|
1.0
|
sv_airdecelerate no longer behaving as intended in conc gamemode - ### Describe the bug
decelerating with +back should be slower than the actual air acceleration value.
### How To Reproduce
press +back while zooming and it feels very strong
### Expected Behavior
shouldn't be as strong
### Operating System
Windows 10
### Renderer
DX11 (default)
|
test
|
sv airdecelerate no longer behaving as intended in conc gamemode describe the bug decelerating with back should be slower than the actual air acceleration value how to reproduce press back while zooming and it feels very strong expected behavior shouldn t be as strong operating system windows renderer default
| 1
|
12,347
| 3,267,024,780
|
IssuesEvent
|
2015-10-22 23:49:54
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
E2e tests fail around "flock" command
|
area/test priority/P0 team/test-infra
|
Today I've seen multiple e2e failures due to a problem around the ```flock``` command in https://github.com/kubernetes/kubernetes/blob/master/hack/jenkins/e2e.sh. AFAIU the current code will fail (```-n``` flag) if another job is trying to update ```gcloud```. This means that an e2e test will be red until the next scheduled run, which can be up to 1h later. This blocks the submit queue for a long time.
I think we should not use ```-n``` flag.
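Roughly, the difference (the lock path and command here are illustrative only):
```bash
# Current behaviour: -n makes flock fail immediately if the lock is held.
#   flock -n /var/lock/gcloud.lock gcloud components update
# Proposed: block until the concurrent gcloud update finishes instead.
flock /var/lock/gcloud.lock gcloud components update
```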
I'm not sending PR as the comment says only @brendandburns and @jlowdermilk can update this code :/
@wojtek-t @gmarek @davidopp
|
2.0
|
E2e tests fail around "flock" command - Today I've seen multiple e2e failures due to a problem around the ```flock``` command in https://github.com/kubernetes/kubernetes/blob/master/hack/jenkins/e2e.sh. AFAIU the current code will fail (```-n``` flag) if another job is trying to update ```gcloud```. This means that an e2e test will be red until the next scheduled run, which can be up to 1h later. This blocks the submit queue for a long time.
I think we should not use ```-n``` flag.
I'm not sending PR as the comment says only @brendandburns and @jlowdermilk can update this code :/
@wojtek-t @gmarek @davidopp
|
test
|
tests fail around flock command today i ve seen multiple failures due to a problem around flock command in afaiu current code will fail n flag if another job is trying to update gcloud this means that an test will be red until next scheduled run which can be even later this blocks submit queue for a long time i think we should not use n flag i m not sending pr as the comment says only brendandburns and jlowdermilk can update this code wojtek t gmarek davidopp
| 1
|
474,032
| 13,651,110,019
|
IssuesEvent
|
2020-09-26 22:55:09
|
garden-io/garden
|
https://api.github.com/repos/garden-io/garden
|
closed
|
Modules that reference runtime task output ignore concurrency limits
|
bug priority:medium stale
|
## Bug
### Current Behavior
A module that references the runtime output of a task will not obey concurrency limits set by the task graph.
### Expected behavior
GARDEN_TASK_CONCURRENCY_LIMIT should be respected, or the default of 6 should be respected if not set.
### Reproducible example
run the command `GARDEN_TASK_CONCURRENCY_LIMIT=1 garden deploy` in the following project.
You will see that it deploys many services in parallel; I've seen 11 to 15 at once while testing this example.
If you remove the `FOO` environment variable referencing the task runtime output, the same command will deploy all the apps one at a time.
Without the limit set, the default of 6 still does not apply.
(The `sleep` healthcheck emulates waiting for an app to go healthy; there's no code running in the container apart from a tiny web server to keep the container live.)
```
kind: Project
name: temp
environments:
- name: local
providers:
- name: local-kubernetes
---
kind: Module
name: setup
type: exec
tasks:
- name: host
command: [echo, foo]
include: []
---
kind: Module
name: helloworldx16
type: container
image: crccheck/hello-world:latest
services:
- name: one
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: two
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: three
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: four
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: five
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: six
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: seven
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: eight
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: nine
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: ten
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: eleven
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: twelve
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: thirteen
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: fourteen
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: fifteen
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: sixteen
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
```
### Additional context
<!-- Add any other context about the problem here. -->
### Your environment
<!-- PLEASE FILL THIS OUT -->
<!-- Please run and copy and paste the results -->
`garden version`
0.11.13
`kubectl version`
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-23T14:21:36Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:07:57Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
`docker version`
Client: Docker Engine - Community
Version: 19.03.8
API version: 1.40
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:21:11 2020
OS/Arch: darwin/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:29:16 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
|
1.0
|
Modules that reference runtime task output ignore concurrency limits - ## Bug
### Current Behavior
A module that references the runtime output of a task will not obey concurrency limits set by the task graph.
### Expected behavior
GARDEN_TASK_CONCURRENCY_LIMIT should be respected, or the default of 6 should be respected if not set.
### Reproducible example
run the command `GARDEN_TASK_CONCURRENCY_LIMIT=1 garden deploy` in the following project.
You will see that it deploys many services in parallel; I've seen 11 to 15 at once while testing this example.
If you remove the `FOO` environment variable referencing the task runtime output, the same command will deploy all the apps one at a time.
Without the limit set, the default of 6 still does not apply.
(The `sleep` healthcheck emulates waiting for an app to go healthy; there's no code running in the container apart from a tiny web server to keep the container live.)
```
kind: Project
name: temp
environments:
- name: local
providers:
- name: local-kubernetes
---
kind: Module
name: setup
type: exec
tasks:
- name: host
command: [echo, foo]
include: []
---
kind: Module
name: helloworldx16
type: container
image: crccheck/hello-world:latest
services:
- name: one
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: two
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: three
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: four
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: five
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: six
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: seven
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: eight
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: nine
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: ten
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: eleven
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: twelve
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: thirteen
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: fourteen
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: fifteen
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
- name: sixteen
dependencies:
- host
env:
FOO: ${runtime.tasks.host.outputs.log}
healthCheck:
command: [sleep, "5"]
```
### Additional context
<!-- Add any other context about the problem here. -->
### Your environment
<!-- PLEASE FILL THIS OUT -->
<!-- Please run and copy and paste the results -->
`garden version`
0.11.13
`kubectl version`
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-23T14:21:36Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:07:57Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
`docker version`
Client: Docker Engine - Community
Version: 19.03.8
API version: 1.40
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:21:11 2020
OS/Arch: darwin/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:29:16 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
|
non_test
|
modules that reference runtime task output ignore concurrency limits bug current behavior a module that references the runtime output of a task will not obey concurrency limits set by the task graph expected behavior garden task concurrency limit should be respected or the default of should be respected if not set reproducible example run the command garden task concurrency limit garden deploy in the following project you will see that it deploys many services in parallel i ve seen to at once while testing this example if you remove the foo environment variable referencing the task runtime output the same command will deploy all the apps one at a time without the limit set the default of still does not apply the sleep healthcheck emulates waiting for an app to go healthy there s no code running in the container apart from a tiny web server to keep the container live kind project name temp environments name local providers name local kubernetes kind module name setup type exec tasks name host command include kind module name type container image crccheck hello world latest services name one dependencies host env foo runtime tasks host outputs log healthcheck command name two dependencies host env foo runtime tasks host outputs log healthcheck command name three dependencies host env foo runtime tasks host outputs log healthcheck command name four dependencies host env foo runtime tasks host outputs log healthcheck command name five dependencies host env foo runtime tasks host outputs log healthcheck command name six dependencies host env foo runtime tasks host outputs log healthcheck command name seven dependencies host env foo runtime tasks host outputs log healthcheck command name eight dependencies host env foo runtime tasks host outputs log healthcheck command name nine dependencies host env foo runtime tasks host outputs log healthcheck command name ten dependencies host env foo runtime tasks host outputs log healthcheck command name eleven dependencies host env foo runtime tasks host outputs log healthcheck command name twelve dependencies host env foo runtime tasks host outputs log healthcheck command name thirteen dependencies host env foo runtime tasks host outputs log healthcheck command name fourteen dependencies host env foo runtime tasks host outputs log healthcheck command name fifteen dependencies host env foo runtime tasks host outputs log healthcheck command name sixteen dependencies host env foo runtime tasks host outputs log healthcheck command additional context your environment garden version kubectl version client version version info major minor gitversion gitcommit gittreestate clean builddate goversion compiler gc platform darwin server version version info major minor gitversion gitcommit gittreestate clean builddate goversion compiler gc platform linux docker version client docker engine community version api version go version git commit built wed mar os arch darwin experimental false server docker engine community engine version api version minimum version go version git commit built wed mar os arch linux experimental false containerd version gitcommit runc version gitcommit docker init version gitcommit
| 0
|
211,520
| 23,833,130,031
|
IssuesEvent
|
2022-09-06 01:05:51
|
samq-ghdemo/SEARCH-NCJIS-nibrs
|
https://api.github.com/repos/samq-ghdemo/SEARCH-NCJIS-nibrs
|
opened
|
CVE-2022-38752 (Medium) detected in multiple libraries
|
security vulnerability
|
## CVE-2022-38752 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>snakeyaml-1.23.jar</b>, <b>snakeyaml-1.17.jar</b>, <b>snakeyaml-1.19.jar</b></p></summary>
<p>
<details><summary><b>snakeyaml-1.23.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /tools/nibrs-summary-report-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-2.1.5.RELEASE.jar
- :x: **snakeyaml-1.23.jar** (Vulnerable Library)
</details>
<details><summary><b>snakeyaml-1.17.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /tools/nibrs-fbi-service/pom.xml</p>
<p>Path to vulnerable library: /tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/snakeyaml-1.17.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.17/snakeyaml-1.17.jar</p>
<p>
Dependency Hierarchy:
- :x: **snakeyaml-1.17.jar** (Vulnerable Library)
</details>
<details><summary><b>snakeyaml-1.19.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /tools/nibrs-xmlfile/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar,/web/nibrs-web/target/nibrs-web/WEB-INF/lib/snakeyaml-1.19.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar</p>
<p>
Dependency Hierarchy:
- :x: **snakeyaml-1.19.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/samq-ghdemo/SEARCH-NCJIS-nibrs/commit/2643373aa9a184ff4ea81e98caf4009bf2ee8e91">2643373aa9a184ff4ea81e98caf4009bf2ee8e91</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stack-overflow.
<p>Publish Date: 2022-09-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38752>CVE-2022-38752</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
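One possible remediation sketch for the affected Maven modules: pin a patched snakeyaml through dependencyManagement (1.32 is the release the advisory lists as addressing CVE-2022-38752; verify against the advisory before adopting):
```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.yaml</groupId>
      <artifactId>snakeyaml</artifactId>
      <version>1.32</version> <!-- first release addressing CVE-2022-38752 -->
    </dependency>
  </dependencies>
</dependencyManagement>
```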
|
True
|
CVE-2022-38752 (Medium) detected in multiple libraries - ## CVE-2022-38752 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>snakeyaml-1.23.jar</b>, <b>snakeyaml-1.17.jar</b>, <b>snakeyaml-1.19.jar</b></p></summary>
<p>
<details><summary><b>snakeyaml-1.23.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /tools/nibrs-summary-report-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-2.1.5.RELEASE.jar
- :x: **snakeyaml-1.23.jar** (Vulnerable Library)
</details>
<details><summary><b>snakeyaml-1.17.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /tools/nibrs-fbi-service/pom.xml</p>
<p>Path to vulnerable library: /tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/snakeyaml-1.17.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.17/snakeyaml-1.17.jar</p>
<p>
Dependency Hierarchy:
- :x: **snakeyaml-1.17.jar** (Vulnerable Library)
</details>
<details><summary><b>snakeyaml-1.19.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /tools/nibrs-xmlfile/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar,/web/nibrs-web/target/nibrs-web/WEB-INF/lib/snakeyaml-1.19.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.19/snakeyaml-1.19.jar</p>
<p>
Dependency Hierarchy:
- :x: **snakeyaml-1.19.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/samq-ghdemo/SEARCH-NCJIS-nibrs/commit/2643373aa9a184ff4ea81e98caf4009bf2ee8e91">2643373aa9a184ff4ea81e98caf4009bf2ee8e91</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stack-overflow.
<p>Publish Date: 2022-09-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38752>CVE-2022-38752</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
|
non_test
|
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries snakeyaml jar snakeyaml jar snakeyaml jar snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file tools nibrs summary report common pom xml path to vulnerable library home wss scanner repository org yaml snakeyaml snakeyaml jar dependency hierarchy spring boot starter web release jar root library spring boot starter release jar x snakeyaml jar vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file tools nibrs fbi service pom xml path to vulnerable library tools nibrs fbi service target nibrs fbi service web inf lib snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar dependency hierarchy x snakeyaml jar vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file tools nibrs xmlfile pom xml path to vulnerable library home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar web nibrs web target nibrs web web inf lib snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar dependency hierarchy x snakeyaml jar vulnerable library found in head commit a href found in base branch master vulnerability details using snakeyaml to parse untrusted yaml files may be vulnerable to denial of service attacks dos if the parser is running on user supplied input an attacker may supply content that causes the parser to crash by stack overflow publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href
| 0
|
254,633
| 21,801,883,325
|
IssuesEvent
|
2022-05-16 06:32:50
|
arcus-azure/arcus.scripting
|
https://api.github.com/repos/arcus-azure/arcus.scripting
|
closed
|
Add integration tests for Azure Logic Apps
|
automated-testing area:logic-apps
|
**Is your feature request related to a problem? Please describe.**
We currently do not have any integration tests for Azure Logic Apps. All the available tests are using stubbed versions of the actual commands.
**Describe the solution you'd like**
- [x] Setup Azure Logic Apps that can be enabled/disabled during the integration tests. Keep in mind that parallel runs of the tests can occur
- [x] Write integration tests against Azure Logic Apps (a rough sketch follows below).
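A minimal Pester sketch of what such a test could look like, assuming the Arcus `Disable-AzLogicApp`/`Enable-AzLogicApp` functions and a dedicated test resource group; all names are placeholders:
```powershell
Describe "Azure Logic Apps" {
    It "Disables and re-enables a Logic App" {
        # 'arcus-testing' and 'arcus-test-workflow' are placeholder names
        Disable-AzLogicApp -ResourceGroupName 'arcus-testing' -LogicAppName 'arcus-test-workflow'
        (Get-AzLogicApp -ResourceGroupName 'arcus-testing' -Name 'arcus-test-workflow').State | Should -Be 'Disabled'

        Enable-AzLogicApp -ResourceGroupName 'arcus-testing' -LogicAppName 'arcus-test-workflow'
        (Get-AzLogicApp -ResourceGroupName 'arcus-testing' -Name 'arcus-test-workflow').State | Should -Be 'Enabled'
    }
}
```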
|
1.0
|
Add integration tests for Azure Logic Apps - **Is your feature request related to a problem? Please describe.**
We currently do not have any integration tests for Azure Logic Apps. All the available tests are using stubbed versions of the actual commands.
**Describe the solution you'd like**
- [x] Setup Azure Logic Apps that can be enabled/disabled during the integration tests. Keep in mind that parallel runs of the tests can occur
- [x] Write integration tests against Azure Logic Apps.
|
test
|
add integration tests for azure logic apps is your feature request related to a problem please describe we currently do not have any integration tests for azure logic apps all the available tests are using subbed versions of actual commands describe the solution you d like setup azure logic apps that can be enabled disabled during the integration tests keep in mind that parallel runs of the tests can occur write integration tests against azure logic apps
| 1
|
167,103
| 26,457,188,009
|
IssuesEvent
|
2023-01-16 15:05:32
|
OpenLiberty/open-liberty
|
https://api.github.com/repos/OpenLiberty/open-liberty
|
opened
|
gRPC generated java references javax.annotation.Generated which is not in Jakarta
|
design-issue
|
gRPC protobuf tools generate Java that carries the javax.annotation.Generated annotation, which is NOT retained at runtime.
Such Java will not compile with Liberty in Jakarta EE 9/10, as that package is not present.
There are 2 possible solutions currently:
1 - Use the Eclipse transformer (The Liberty gRPC 1.0 guide stopped short of including this for simplicity's sake.)
2 - Add in a dependency, for example the gRPC read.me recommends:
https://github.com/grpc/grpc-java/issues/9179#issuecomment-1377982643
```
<dependency> <!-- necessary for Java 9+ -->
<groupId>org.apache.tomcat</groupId>
<artifactId>annotations-api</artifactId>
<version>6.0.53</version>
<scope>provided</scope>
</dependency>
```
(The Liberty gRPC guide ++ looks like it wants to use that route:
https://github.com/OpenLiberty/guide-grpc-intro/pull/43 )
3) As it is an annotation that is not retained at RUNTIME, could we not bundle it up and pull it in
to the compile dependencies IFF gRPC and Jakarta EE 9/10... etc. is the mode?
After all, Tomcat is bundling it without 'owning' the javax.annotation package, so we should
be able to do the same in a way that avoids it being on the compile classpath if NOT Jakarta?
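For context, a trimmed sketch of what the generated code looks like (class and proto names are placeholders); since the annotation has SOURCE retention, it only ever needs to be on the compile classpath:
```java
// javax.annotation.Generated has SOURCE retention, so it is discarded by
// the compiler and never needed at runtime.
@javax.annotation.Generated(
    value = "by gRPC proto compiler",
    comments = "Source: greeter.proto")   // placeholder proto name
public final class GreeterGrpc {
    private GreeterGrpc() {}
}
```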
|
1.0
|
gRPC generated java references javax.annotation.Generated which is not in Jakarta - gRPC protobuf tools generate Java that carries the javax.annotation.Generated annotation, which is NOT retained at runtime.
Such Java will not compile with Liberty in Jakarta EE 9/10, as that package is not present.
There are 2 possible solutions currently:
1 - Use the Eclipse transformer (The Liberty gRPC 1.0 guide stopped short of including this for simplicity's sake.)
2 - Add in a dependency, for example the gRPC read.me recommends:
https://github.com/grpc/grpc-java/issues/9179#issuecomment-1377982643
```
<dependency> <!-- necessary for Java 9+ -->
<groupId>org.apache.tomcat</groupId>
<artifactId>annotations-api</artifactId>
<version>6.0.53</version>
<scope>provided</scope>
</dependency>
```
(The Liberty gRPC guide ++ looks like it wants to use that route:
https://github.com/OpenLiberty/guide-grpc-intro/pull/43 )
3) As it is an annotation that is not retained at RUNTIME, could we not bundle it up and pull it in
to the compile dependencies IFF gRPC and Jakarta EE 9/10... etc. is the mode?
After all, Tomcat is bundling it without 'owning' the javax.annotation package, so we should
be able to do the same in a way that avoids it being on the compile classpath if NOT Jakarta?
|
non_test
|
grpc generated java references javax annotation generated which is not in jakarta grpc protobuf tools generate java that has annotations that are not retained at runtime of javax annotation generated such java will not compile with liberty in jakarta as that package is not present there are possible solutions currently use the eclipse transformer the liberty grpc guide stopped short of including this for simplicity s sake add in a dependency for example the grpc read me recommends org apache tomcat annotations api provided the liberty grpc guide looks like it wants to use that route as it an annotation that is not retained at runtime could we not bundle it up and pull it in to the compile dependencies iff grpc and jakarta etc is the mode after all tomcat are bundling it without owning the javax annotation package so we should be able do the same in a way that avoids it being on the compile classpath if not jakarta
| 0
|
355,062
| 10,576,038,868
|
IssuesEvent
|
2019-10-07 16:58:24
|
compodoc/compodoc
|
https://api.github.com/repos/compodoc/compodoc
|
closed
|
[ENHANCEMENT] Compodoc installation takes 140MB of size
|
Priority: Medium Status: Accepted Time: ~6 hours Type: Enhancement wontfix
|
<!--
> Please follow the issue template below for bug reports and queries.
> For issue, start the label of the title with [BUG]
> For feature requests, start the label of the title with [FEATURE] and explain your use case and ideas clearly below, you can remove sections which are not relevant.
-->
##### **Overview of the issue**
The @compodoc/compodoc installation takes up **~109MB**. This is a huge size, and something looks incorrect with the dependencies.
[](https://packagephobia.now.sh/result?p=@compodoc/compodoc@1.1.7)
##### **Operating System, Node.js, npm, compodoc version(s)**
Compodoc version: **1.1.7**
##### **Angular configuration, a `package.json` file in the root folder**
NA
##### **Compodoc installed globally or locally ?**
Locally
##### **Motivation for or Use Case**
|
1.0
|
[ENHANCEMENT] Compodoc installation takes 140MB of size - <!--
> Please follow the issue template below for bug reports and queries.
> For issue, start the label of the title with [BUG]
> For feature requests, start the label of the title with [FEATURE] and explain your use case and ideas clearly below, you can remove sections which are not relevant.
-->
##### **Overview of the issue**
The @compodoc/compodoc installation takes up **~109MB**. This is a huge size, and something looks incorrect with the dependencies.
[](https://packagephobia.now.sh/result?p=@compodoc/compodoc@1.1.7)
##### **Operating System, Node.js, npm, compodoc version(s)**
Compodoc version: **1.1.7**
##### **Angular configuration, a `package.json` file in the root folder**
NA
##### **Compodoc installed globally or locally ?**
Locally
##### **Motivation for or Use Case**
|
non_test
|
compodoc installation takes of size please follow the issue template below for bug reports and queries for issue start the label of the title with for feature requests start the label of the title with and explain your use case and ideas clearly below you can remove sections which are not relevant overview of the issue the compodoc compodoc installation takes up of size this is huge size and something looks incorrect with dependencies operating system node js npm compodoc version s compodoc version angular configuration a package json file in the root folder na compodoc installed globally or locally locally motivation for or use case
| 0
|
13,577
| 2,770,514,017
|
IssuesEvent
|
2015-05-01 15:10:58
|
gizmoboard/gizmoboard
|
https://api.github.com/repos/gizmoboard/gizmoboard
|
opened
|
Issues in Documentation
|
defect documentation
|
Instructions in README should say:
`git clone https://github.com/gizmoboard/gizmoboard`
The instructions in Contributing have a misspelling: "Gizmobodo"
|
1.0
|
Issues in Documentation - Instructions in README should say:
`git clone https://github.com/gizmoboard/gizmoboard`
The instructions in Contributing have a misspelling: "Gizmobodo"
|
non_test
|
issues in documentation instructions in readme should say git clone instructions in contributing has mispelling of gizmobodo
| 0
|
332,123
| 29,185,751,283
|
IssuesEvent
|
2023-05-19 15:17:22
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
reopened
|
Fix jax_lax_operators.test_jax_lax_sort
|
JAX Frontend Sub Task Failing Test
|
| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4715024913/jobs/8361738340" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4715024913/jobs/8361738340" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4715024913/jobs/8361738340" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4715024913/jobs/8361738340" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|
1.0
|
Fix jax_lax_operators.test_jax_lax_sort - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4715024913/jobs/8361738340" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4715024913/jobs/8361738340" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4715024913/jobs/8361738340" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4715024913/jobs/8361738340" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|
test
|
fix jax lax operators test jax lax sort tensorflow img src torch img src numpy img src jax img src
| 1
|
72,014
| 18,960,761,917
|
IssuesEvent
|
2021-11-19 04:18:36
|
CosmosOS/Cosmos
|
https://api.github.com/repos/CosmosOS/Cosmos
|
closed
|
IL2CPU task failed. - Object reference not set to an instance of an object.
|
Pending User Response Area: Visual Studio Integration Area: Build
|
Have you checked Github Issues for similar errors?
Yes, didn't find anything that was within the last few years (tried the solutions in the outdated ones I did find; however, no luck)
**Exception**
Post the exception returned by Visual Studio
** VS Output Logs **
Post the entire output log given by Visual Studio for the build
```
Build started...
1>------ Build started: Project: CosmosKernel1, Configuration: Debug Any CPU ------
1>CosmosKernel1 -> C:\Users\tftwp\source\repos\CosmosKernel1\bin\Debug\net5.0\CosmosKernel1.dll
1>IL2CPU task took 00:00:00.0001047
1>C:\Users\tftwp\.nuget\packages\cosmos.build\0.1.0-localbuild20211118063126\build\Cosmos.Build.targets(172,9): error MSB4018: The "IL2CPU" task failed unexpectedly.
1>C:\Users\tftwp\.nuget\packages\cosmos.build\0.1.0-localbuild20211118063126\build\Cosmos.Build.targets(172,9): error MSB4018: System.NullReferenceException: Object reference not set to an instance of an object.
1>C:\Users\tftwp\.nuget\packages\cosmos.build\0.1.0-localbuild20211118063126\build\Cosmos.Build.targets(172,9): error MSB4018: at Cosmos.Build.Tasks.IL2CPU.GenerateResponseFileCommands() in C:\Users\tftwp\Documents\Cosmos\source\Cosmos.Build.Tasks\IL2CPU.cs:line 78
1>C:\Users\tftwp\.nuget\packages\cosmos.build\0.1.0-localbuild20211118063126\build\Cosmos.Build.targets(172,9): error MSB4018: at Microsoft.Build.Utilities.ToolTask.Execute()
1>C:\Users\tftwp\.nuget\packages\cosmos.build\0.1.0-localbuild20211118063126\build\Cosmos.Build.targets(172,9): error MSB4018: at Cosmos.Build.Tasks.IL2CPU.Execute() in C:\Users\tftwp\Documents\Cosmos\source\Cosmos.Build.Tasks\IL2CPU.cs:line 116
1>C:\Users\tftwp\.nuget\packages\cosmos.build\0.1.0-localbuild20211118063126\build\Cosmos.Build.targets(172,9): error MSB4018: at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
1>C:\Users\tftwp\.nuget\packages\cosmos.build\0.1.0-localbuild20211118063126\build\Cosmos.Build.targets(172,9): error MSB4018: at Microsoft.Build.BackEnd.TaskBuilder.<ExecuteInstantiatedTask>d__26.MoveNext()
1>Done building project "CosmosKernel1.csproj" -- FAILED.
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
```
**To Reproduce**
This issue occurs with the default kernel.
**Screenshots**
**Context**
Before posting please confirm that the following are in order
- [x] Both Cosmos VS Extensions are installed
- [x] In the NuGet Package Manager "Include prerelease" is selected
- [x] The Cosmos NuGet package store is selected (NOT nuget.org) in 'Manage NuGet Packages'
- [x] The Cosmos NuGet packages are installed
Add any other context about the problem which might be helpful.
This is also a clean installation of VS2022 and the Cosmos devkit.
(Using VS2022 branch of cosmos in case that has any relevance to this)
|
1.0
|
IL2CPU task failed. - Object reference not set to an instance of an object. - Have you checked Github Issues for similar errors?
Yes, didn't find anything that was within the last few years (tried the solutions in the outdated ones I did find; however, no luck)
**Exception**
Post the exception returned by Visual Studio
** VS Output Logs **
Post the entire output log given by Visual Studio for the build
```
Build started...
1>------ Build started: Project: CosmosKernel1, Configuration: Debug Any CPU ------
1>CosmosKernel1 -> C:\Users\tftwp\source\repos\CosmosKernel1\bin\Debug\net5.0\CosmosKernel1.dll
1>IL2CPU task took 00:00:00.0001047
1>C:\Users\tftwp\.nuget\packages\cosmos.build\0.1.0-localbuild20211118063126\build\Cosmos.Build.targets(172,9): error MSB4018: The "IL2CPU" task failed unexpectedly.
1>C:\Users\tftwp\.nuget\packages\cosmos.build\0.1.0-localbuild20211118063126\build\Cosmos.Build.targets(172,9): error MSB4018: System.NullReferenceException: Object reference not set to an instance of an object.
1>C:\Users\tftwp\.nuget\packages\cosmos.build\0.1.0-localbuild20211118063126\build\Cosmos.Build.targets(172,9): error MSB4018: at Cosmos.Build.Tasks.IL2CPU.GenerateResponseFileCommands() in C:\Users\tftwp\Documents\Cosmos\source\Cosmos.Build.Tasks\IL2CPU.cs:line 78
1>C:\Users\tftwp\.nuget\packages\cosmos.build\0.1.0-localbuild20211118063126\build\Cosmos.Build.targets(172,9): error MSB4018: at Microsoft.Build.Utilities.ToolTask.Execute()
1>C:\Users\tftwp\.nuget\packages\cosmos.build\0.1.0-localbuild20211118063126\build\Cosmos.Build.targets(172,9): error MSB4018: at Cosmos.Build.Tasks.IL2CPU.Execute() in C:\Users\tftwp\Documents\Cosmos\source\Cosmos.Build.Tasks\IL2CPU.cs:line 116
1>C:\Users\tftwp\.nuget\packages\cosmos.build\0.1.0-localbuild20211118063126\build\Cosmos.Build.targets(172,9): error MSB4018: at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
1>C:\Users\tftwp\.nuget\packages\cosmos.build\0.1.0-localbuild20211118063126\build\Cosmos.Build.targets(172,9): error MSB4018: at Microsoft.Build.BackEnd.TaskBuilder.<ExecuteInstantiatedTask>d__26.MoveNext()
1>Done building project "CosmosKernel1.csproj" -- FAILED.
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
```
**To Reproduce**
This issue occurs with the default kernel.
**Screenshots**
**Context**
Before posting please confirm that the following are in order
- [x] Both Cosmos VS Extensions are installed
- [x] In the NuGet Package Manager "Include prerelease" is selected
- [x] The Cosmos NuGet package store is selected (NOT nuget.org) in 'Manage NuGet Packages'
- [x] The Cosmos NuGet packages are installed
Add any other context about the problem which might be helpful.
This is also a clean installation of VS2022 and the Cosmos devkit.
(Using VS2022 branch of cosmos in case that has any relevance to this)
|
non_test
|
task failed object reference not set to an instance of an object have you checked github issues for similar errors yes didn t find anything that was within the last few years tried the solutions in the outdated ones i did find however no luck exception post the exception returned by visual studio vs output logs post the entire output log given by visual studio for the build build started build started project configuration debug any cpu c users tftwp source repos bin debug dll task took c users tftwp nuget packages cosmos build build cosmos build targets error the task failed unexpectedly c users tftwp nuget packages cosmos build build cosmos build targets error system nullreferenceexception object reference not set to an instance of an object c users tftwp nuget packages cosmos build build cosmos build targets error at cosmos build tasks generateresponsefilecommands in c users tftwp documents cosmos source cosmos build tasks cs line c users tftwp nuget packages cosmos build build cosmos build targets error at microsoft build utilities tooltask execute c users tftwp nuget packages cosmos build build cosmos build targets error at cosmos build tasks execute in c users tftwp documents cosmos source cosmos build tasks cs line c users tftwp nuget packages cosmos build build cosmos build targets error at microsoft build backend taskexecutionhost microsoft build backend itaskexecutionhost execute c users tftwp nuget packages cosmos build build cosmos build targets error at microsoft build backend taskbuilder d movenext done building project csproj failed build succeeded failed up to date skipped to reproduce this issue occurs with the default kernel screenshots context before posting please confirm that the following are in order both cosmos vs extensions are installed in the nuget package manager include prerelease is selected the cosmos nuget package store is selected not nuget org in manage nuget packages the cosmos nuget packages are installed add any other context about the problem which might be helpful this is also a clean installation of and the cosmos devkit using branch of cosmos in case that has any relevance to this
| 0
|
427,635
| 29,831,614,936
|
IssuesEvent
|
2023-06-18 10:48:00
|
khardy-official/homepage
|
https://api.github.com/repos/khardy-official/homepage
|
closed
|
Create a contact info block
|
documentation
|
My links to social networks (LinkedIn, Facebook, Instagram), phone number, email, Telegram username, etc.
|
1.0
|
Create a contact info block - My links to social networks (LinkedIn, Facebook, Instagram), phone number, email, Telegram username, etc.
|
non_test
|
create a contact info block my links to social networks linkedin facebook instagram phone number email telegram username etc
| 0
|
44,510
| 5,631,840,983
|
IssuesEvent
|
2017-04-05 15:20:58
|
vmware/vic
|
https://api.github.com/repos/vmware/vic
|
opened
|
Need to hook up robot remote server testing with windows servers
|
component/test priority/medium
|
User statement: As a customer of VIC, I will likely be using Windows as a VI admin; I want to be assured that VIC has been tested properly with it.
Details:
https://github.com/robotframework/PythonRemoteServer
https://github.com/robotframework/RemoteInterface
Acceptance criteria:
One or more windows servers are configured and setup to be used for testing with the remote server library.
|
1.0
|
Need to hook up robot remote server testing with windows servers - User statement: As a customer of VIC, I will likely be using Windows as a VI admin; I want to be assured that VIC has been tested properly with it.
Details:
https://github.com/robotframework/PythonRemoteServer
https://github.com/robotframework/RemoteInterface
Acceptance criteria:
One or more windows servers are configured and setup to be used for testing with the remote server library.
|
test
|
need to hook up robot remote server testing with windows servers user statement as a customer of vic i will likely be using windows as a vi admin i want to be assured that vic has been tested properly with it details acceptance criteria one or more windows servers are configured and setup to be used for testing with the remote server library
| 1
|
655,441
| 21,690,867,356
|
IssuesEvent
|
2022-05-09 15:15:14
|
solgenomics/sgn
|
https://api.github.com/repos/solgenomics/sgn
|
closed
|
Germplasm: biologicalStatusOfAccessionCode field should be string
|
Type: Bug Priority: High
|
Expected Behavior <!-- Describe the desired or expected behaviour here. -->
--------------------------------------------------------------------------
According to the Brapi spec, biologicalStatusOfAccessionCode should be a string. It gives an integer.
For Bugs:
---------
### Environment
<!-- Where did you encounter the error. -->
#### Steps to Reproduce
<!-- Provide an example, or an unambiguous set of steps to reproduce -->
<!-- this bug. Include code to reproduce, if relevant. -->
|
1.0
|
Germplasm: biologicalStatusOfAccessionCode field should be string - Expected Behavior <!-- Describe the desired or expected behaviour here. -->
--------------------------------------------------------------------------
According to the Brapi spec, biologicalStatusOfAccessionCode should be a string. It gives an integer.
For Bugs:
---------
### Environment
<!-- Where did you encounter the error. -->
#### Steps to Reproduce
<!-- Provide an example, or an unambiguous set of steps to reproduce -->
<!-- this bug. Include code to reproduce, if relevant. -->
|
non_test
|
germplasm biologicalstatusofaccessioncode field should be string expected behavior according to the brapi spec biologicalstatusofaccessioncode should be a string it gives an integer for bugs environment steps to reproduce
| 0
|
103,695
| 8,933,035,080
|
IssuesEvent
|
2019-01-23 00:10:53
|
spring-cloud/spring-cloud-dataflow-acceptance-tests
|
https://api.github.com/repos/spring-cloud/spring-cloud-dataflow-acceptance-tests
|
closed
|
Update k8s AT configs
|
in progress test-coverage
|
Configs generated by AT's need to be updated to support latest changes; tests are currently broken. Sync with distro files.
End goal here is to do enough to get the AT's green. More in-depth restructuring will be done in spring-cloud/spring-cloud-dataflow#2778
|
1.0
|
Update k8s AT configs - Configs generated by AT's need to be updated to support latest changes; tests are currently broken. Sync with distro files.
End goal here is to do enough to get the AT's green. More in-depth restructuring will be done in spring-cloud/spring-cloud-dataflow#2778
|
test
|
update at configs configs generated by at s need to be updated to support latest changes tests are currently broken sync with distro files end goal here is to do enough to get the at s green more in depth restructuring will be done in spring cloud spring cloud dataflow
| 1
|
120,390
| 10,114,537,482
|
IssuesEvent
|
2019-07-30 19:26:17
|
ingresse/android-sdk
|
https://api.github.com/repos/ingresse/android-sdk
|
closed
|
[Backstage] Settings - Change of event attributes
|
1pt feature mobile tested to deploy
|
## Description
Call to modify the event attributes
Created in ```AttributesService```
### Endpoint
__URL:__ _https://api.ingresse.com/event/{eventId}/attributes
__Parameters (Query):__
- apikey
- userToken
__Parameters (Body):__
- name
- value
### Response
```json
{
"responseDetails": string,
"responseError": null,
"responseStatus": int
}
```
|
1.0
|
[Backstage] Settings - Change of event attributes - ## Description
Call to modify the event attributes
Created in ```AttributesService```
### Endpoint
__URL:__ _https://api.ingresse.com/event/{eventId}/attributes
__Parameters (Query):__
- apikey
- userToken
__Parameters (Body):__
- name
- value
### Response
```json
{
"responseDetails": string,
"responseError": null,
"responseStatus": int
}
```
|
test
|
settings change of event attributes description call to modify the event attributes created in attributesservice endpoint url parameters query apikey usertoken parameters body name value response json responsedetails string responseerror null responsestatus int
| 1
|
71,737
| 13,734,911,983
|
IssuesEvent
|
2020-10-05 09:22:38
|
Regalis11/Barotrauma
|
https://api.github.com/repos/Regalis11/Barotrauma
|
closed
|
Restrict ammo types to be used in loaders
|
Code Design Feature request
|
- [x] I have searched the issue tracker to check if the issue has already been reported.
**Description**
This is a feature request.
Allow in the editor to restrict what ammo can be loaded in the Depth Charge, Rail gun & coil gun loaders. You can create more interesting designs where you force the players to only use certain types of ammo, depending on what gun. Some DC loaders could for example only use Decoys and some railgun loaders could only use nuclear shells. It could add an extra level of difficulty to the submarine that is in use.
|
1.0
|
Restrict ammo types to be used in loaders - - [x] I have searched the issue tracker to check if the issue has already been reported.
**Description**
This is a feature request.
Allow in the editor to restrict what ammo can be loaded in the Depth Charge, Rail gun & coil gun loaders. You can create more interesting designs where you force the players to only use certain types of ammo, depending on what gun. Some DC loaders could for example only use Decoys and some railgun loaders could only use nuclear shells. It could add an extra level of difficulty to the submarine that is in use.
|
non_test
|
restrict ammo types to be used in loaders i have searched the issue tracker to check if the issue has already been reported description this is a feature request allow in the editor to restrict what ammo can be loaded in the depth charge rail gun coil gun loaders you can create more interesting designs where you force the players to only use certain types of ammo depending on what gun some dc loaders could for example only use decoys and some railgun loaders could only use nuclear shells it could add an extra level of difficulty to the submarine that is in use
| 0
|
81,221
| 7,775,996,440
|
IssuesEvent
|
2018-06-05 06:19:55
|
adobe/brackets
|
https://api.github.com/repos/adobe/brackets
|
closed
|
[Brackets auto-update Windows/Mac] The buttons in update bar should have border radius of 3px as per specs.
|
Testing
|
### Description
[Brackets auto-update Windows/Mac] The buttons in update bar should have border radius of 3px as per specs.
### Steps to Reproduce
1. Launch brackets 1.13.
2. Open another brackets window.
3. Click on Update Notification Button.
4. Click on Get it Now.
5. Update bar will be displayed along with buttons.
6. The buttons in update bar should have border radius of 3px as per specs.
<img width="240" alt="screen shot 2018-04-12 at 1 54 31 am" src="https://user-images.githubusercontent.com/25339865/38734691-258a0538-3f44-11e8-912a-10c90505233c.png">
**Expected behavior:** The buttons in update bar should have border radius of 3px as per specs.
**Actual behavior:** The buttons do not have any border radius.
### Versions
Windows 10 64 Bit
Mac 10.13
Release 1.13 build 1.13.0-17665
|
1.0
|
[Brackets auto-update Windows/Mac] The buttons in update bar should have border radius of 3px as per specs. - ### Description
[Brackets auto-update Windows/Mac] The buttons in update bar should have border radius of 3px as per specs.
### Steps to Reproduce
1. Launch brackets 1.13.
2. Open another brackets window.
3. Click on Update Notification Button.
4. Click on Get it Now.
5. Update bar will be displayed along with buttons.
6. The buttons in update bar should have border radius of 3px as per specs.
<img width="240" alt="screen shot 2018-04-12 at 1 54 31 am" src="https://user-images.githubusercontent.com/25339865/38734691-258a0538-3f44-11e8-912a-10c90505233c.png">
**Expected behavior:** The buttons in update bar should have border radius of 3px as per specs.
**Actual behavior:** The buttons do not have any border radius.
### Versions
Windows 10 64 Bit
Mac 10.13
Release 1.13 build 1.13.0-17665
|
test
|
the buttons in update bar should have border radius of as per specs description the buttons in update bar should have border radius of as per specs steps to reproduce launch brackets open another brackets window click on update notification button click on get it now update bar will be displayed along with buttons the buttons in update bar should have border radius of as per specs img width alt screen shot at am src expected behavior the buttons in update bar should have border radius of as per specs actual behavior the buttons do not have any border radius versions windows bit mac release build
| 1
|
256,508
| 8,127,751,796
|
IssuesEvent
|
2018-08-17 09:12:54
|
aowen87/BAR
|
https://api.github.com/repos/aowen87/BAR
|
closed
|
Pick option to highlight selected node/zone
|
Expected Use: 3 - Occasional Feature Impact: 4 - High Priority: High
|
Although we add pick letters, engineering folks (Mili users) have inquired regarding the possibility of highlighting a picked node or zone.
In theory, the process could be much the same as doing the pick letter except that the pick attributes wind up returning a center/radius (for a picked node to highlight) or a set of points and line segments between them, to serve as the highlighted _edges_ of the picked zone. But that is a really, really big guess on my part as to what might be involved.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1964
Status: Resolved
Project: VisIt
Tracker: Feature
Priority: High
Subject: Pick option to highlight selected node/zone
Assigned to: Matt Larsen
Category:
Target version: 2.12.0
Author: Mark Miller
Start: 08/27/2014
Due date:
% Done: 0
Estimated time:
Created: 08/27/2014 10:53 pm
Updated: 10/25/2016 01:53 pm
Likelihood:
Severity:
Found in version:
Impact: 4 - High
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
Although we add pick letters, engineering folks (Mili users) have inquired regarding the possibility of highlighting a picked node or zone.
In theory, the process could be much the same as doing the pick letter except that the pick attributes wind up returning a center/radius (for a picked node to highlight) or a set of points and line segments between them, to serve as the highlighted _edges_ of the picked zone. But that is a really, really big guess on my part as to what might be involved.
Comments:
This feature is now implemented on the trunk. It can be selected through the GUI of the PickAttributes python object.
|
1.0
|
Pick option to highlight selected node/zone - Although we add pick letters, engineering folks (Mili users) have inquired regarding the possibility of highlighting a picked node or zone.
In theory, the process could be much the same as doing the pick letter except that the pick attributes wind up returning a center/radius (for a picked node to highlight) or a set of points and line segments between them, to serve as the highlighted _edges_ of the picked zone. But that is a really, really big guess on my part as to what might be involved.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1964
Status: Resolved
Project: VisIt
Tracker: Feature
Priority: High
Subject: Pick option to highlight selected node/zone
Assigned to: Matt Larsen
Category:
Target version: 2.12.0
Author: Mark Miller
Start: 08/27/2014
Due date:
% Done: 0
Estimated time:
Created: 08/27/2014 10:53 pm
Updated: 10/25/2016 01:53 pm
Likelihood:
Severity:
Found in version:
Impact: 4 - High
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
Although we add pick letters, engineering folks (Mili users) have inquired regarding the possibility of highlighting a picked node or zone.
In theory, the process could be much the same as doing the pick letter except that the pick attributes wind up returning a center/radius (for a picked node to highlight) or a set of points and line segments between them, to serve as the highlighted _edges_ of the picked zone. But that is a really, really big guess on my part as to what might be involved.
Comments:
This feature is now implemented on the trunk. It can be selected through the GUI of the PickAttributes python object.
|
non_test
|
pick option to highlight selected node zone although we add pick letters engineering folks mili users have inquired regarding the possibility of highlighting a picked node or zone in theory the process could be much the same as doing the pick letter except that the pick attributes wind up returning a center radius for a picked node to highlight or a set of points and line segments between them to serve as the highlighted edges of the picked zone but that is a really really big guess on my part as to what might be involved redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker feature priority high subject pick option to highlight selected node zone assigned to matt larsen category target version author mark miller start due date done estimated time created pm updated pm likelihood severity found in version impact high expected use occasional os all support group any description although we add pick letters engineering folks mili users have inquired regarding the possibility of highlighting a picked node or zone in theory the process could be much the same as doing the pick letter except that the pick attributes wind up returning a center radius for a picked node to highlight or a set of points and line segments between them to serve as the highlighted edges of the picked zone but that is a really really big guess on my part as to what might be involved comments this feature is now implemented on the trunk it can be selected through the gui of the pickattributes python object
| 0
|
13,222
| 3,317,521,611
|
IssuesEvent
|
2015-11-06 22:06:29
|
marklogic/java-client-api
|
https://api.github.com/repos/marklogic/java-client-api
|
reopened
|
Missing QName value in Json Document properties
|
Bug minor test
|
A Java test case inserts QName for a JSON document and then reads back the properties. Here is the snippet of that test case.
I am seeing difference between 7.0 (b2_0) and 8.0 (b3_0) when properties are read back and checked for ns. I checked with Sam and he is not aware of this issue.
After talking with Erik, decided to assign this issue to Erik.
// put metadata
metadataHandle.getProperties().put(new QName("http://www.example.com", "foo"), "bar");
// write the doc with the metadata
writeDocumentUsingOutputStreamHandle(client, filename, "/write-json-outputstreamhandle-metadata/", metadataHandle, "JSON");
// create handle to read metadata
DocumentMetadataHandle readMetadataHandle = new DocumentMetadataHandle();
// read metadata
readMetadataHandle = readMetadataFromDocument(client, "/write-json-outputstreamhandle-metadata/" + filename, "JSON");
// get metadata values
DocumentProperties properties = readMetadataHandle.getProperties();
// Properties
String expectedProperties = "size:1|{http://www.example.com}foo:bar|";
String actualProperties = getDocumentPropertiesString(properties);
In 7.0 Query Console I see the example.com NS missing. Has something changed? The test is looking for example.com to be present.
From 7.0 Query Console:
<?xml version="1.0" encoding="UTF-8"?>
<prop:properties xmlns:prop="http://marklogic.com/xdmp/property">
<foo xsi:type="xs:string" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
bar
</foo>
</prop:properties>
From 8.0 Query Console:
<?xml version="1.0" encoding="UTF-8"?>
<prop:properties xmlns:prop="http://marklogic.com/xdmp/property">
<foo xsi:type="xs:string" xmlns="http://www.example.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">bar</foo>
</prop:properties>
This is a minor issue.
|
1.0
|
Missing QName value in Json Document properties - A Java test case inserts QName for a JSON document and then reads back the properties. Here is the snippet of that test case.
I am seeing difference between 7.0 (b2_0) and 8.0 (b3_0) when properties are read back and checked for ns. I checked with Sam and he is not aware of this issue.
After talking with Erik, decided to assign this issue to Erik.
// put metadata
metadataHandle.getProperties().put(new QName("http://www.example.com", "foo"), "bar");
// write the doc with the metadata
writeDocumentUsingOutputStreamHandle(client, filename, "/write-json-outputstreamhandle-metadata/", metadataHandle, "JSON");
// create handle to read metadata
DocumentMetadataHandle readMetadataHandle = new DocumentMetadataHandle();
// read metadata
readMetadataHandle = readMetadataFromDocument(client, "/write-json-outputstreamhandle-metadata/" + filename, "JSON");
// get metadata values
DocumentProperties properties = readMetadataHandle.getProperties();
// Properties
String expectedProperties = "size:1|{http://www.example.com}foo:bar|";
String actualProperties = getDocumentPropertiesString(properties);
In 7.0 Query Console I see the example.com NS missing. Has something changed? The test is looking for example.com to be present.
From 7.0 Query Console:
<?xml version="1.0" encoding="UTF-8"?>
<prop:properties xmlns:prop="http://marklogic.com/xdmp/property">
<foo xsi:type="xs:string" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
bar
</foo>
</prop:properties>
From 8.0 Query Console:
<?xml version="1.0" encoding="UTF-8"?>
<prop:properties xmlns:prop="http://marklogic.com/xdmp/property">
<foo xsi:type="xs:string" xmlns="http://www.example.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">bar</foo>
</prop:properties>
This is a minor issue.
|
test
|
missing qname value in json document properties a java test case inserts qname for a json document and then reads back the properties here is the snippet of that test case i am seeing difference between and when properties are read back and checked for ns i checked with sam and he is not aware of this issue after talking with erik decided to assign this issue to erik put metadata metadatahandle getproperties put new qname foo bar write the doc with the metadata writedocumentusingoutputstreamhandle client filename write json outputstreamhandle metadata metadatahandle json create handle to read metadata documentmetadatahandle readmetadatahandle new documentmetadatahandle read metadata readmetadatahandle readmetadatafromdocument client write json outputstreamhandle metadata filename json get metadata values documentproperties properties readmetadatahandle getproperties properties string expectedproperties size string actualproperties getdocumentpropertiesstring properties in query console i see the example com ns missing has something changed the test is looking for example com to be present from query console prop properties xmlns prop foo xsi type xs string xmlns xsi bar from query console prop properties xmlns prop foo xsi type xs string xmlns xmlns xsi this is a minor issue
| 1
|
327,409
| 28,060,618,718
|
IssuesEvent
|
2023-03-29 12:24:26
|
elastic/cloud-on-k8s
|
https://api.github.com/repos/elastic/cloud-on-k8s
|
closed
|
ElasticMapsServer Pod does not start on ARM
|
>flaky_test
|
`TestElasticMapsServerCrossNSAssociation/ElasticMapsServer_Pods_should_eventually_be_ready` failed 2 times in a row on ARM:
* https://devops-ci.elastic.co/blue/organizations/jenkins/cloud-on-k8s-e2e-tests-eks-arm/detail/cloud-on-k8s-e2e-tests-eks-arm/586/pipeline/
* https://devops-ci.elastic.co/blue/organizations/jenkins/cloud-on-k8s-e2e-tests-eks-arm/detail/cloud-on-k8s-e2e-tests-eks-arm/587/pipeline/
Error in the EMS Pod logs is:
```
==== START logs for e2e-pwsy4-venus/test-cross-ns-ems-es-46m4-ems-6dd454f667-cmp44 ====
exec /bin/sh: exec format error
==== END logs for e2e-pwsy4-venus/test-cross-ns-ems-es-46m4-ems-6dd454f667-cmp44 ====
```
For `587` the image pulled is `docker.elastic.co/elastic-maps-service/elastic-maps-server-ubi8@sha256:f8d8eb19d8bf8bbcb4e051a8f3b216339d131b9e7bf51bbf048921b6a7145c69`:
```json
"containerStatuses": [
{
"containerID": "docker://9bee63637904d31e9e272d4cd3a31db46c64cdeddf88d9c59e248d0a83545bd1",
"image": "docker.elastic.co/elastic-maps-service/elastic-maps-server-ubi8:8.5.0",
"imageID": "docker-pullable://docker.elastic.co/elastic-maps-service/elastic-maps-server-ubi8@sha256:f8d8eb19d8bf8bbcb4e051a8f3b216339d131b9e7bf51bbf048921b6a7145c69",
"lastState": {
"terminated": {
"containerID": "docker://9bee63637904d31e9e272d4cd3a31db46c64cdeddf88d9c59e248d0a83545bd1",
"exitCode": 1,
"finishedAt": "2023-01-10T01:58:41Z",
"reason": "Error",
"startedAt": "2023-01-10T01:58:41Z"
}
```
Which seems designed for `amd64`:
```
docker inspect docker.elastic.co/elastic-maps-service/elastic-maps-server-ubi8@sha256:f8d8eb19d8bf8bbcb4e051a8f3b216339d131b9e7bf51bbf048921b6a7145c69 | grep Architecture
"Architecture": "amd64",
```
While nodes are `arm64`:
```json
"nodeInfo": {
"architecture": "arm64",
"containerRuntimeVersion": "docker://20.10.17",
"kernelVersion": "5.4.226-129.415.amzn2.aarch64",
"kubeProxyVersion": "v1.20.15-eks-fb459a0",
"kubeletVersion": "v1.20.15-eks-fb459a0",
"operatingSystem": "linux",
"osImage": "Amazon Linux 2",
}
```
|
1.0
|
ElasticMapsServer Pod does not start on ARM - `TestElasticMapsServerCrossNSAssociation/ElasticMapsServer_Pods_should_eventually_be_ready` failed 2 times in a row on ARM:
* https://devops-ci.elastic.co/blue/organizations/jenkins/cloud-on-k8s-e2e-tests-eks-arm/detail/cloud-on-k8s-e2e-tests-eks-arm/586/pipeline/
* https://devops-ci.elastic.co/blue/organizations/jenkins/cloud-on-k8s-e2e-tests-eks-arm/detail/cloud-on-k8s-e2e-tests-eks-arm/587/pipeline/
Error in the EMS Pod logs is:
```
==== START logs for e2e-pwsy4-venus/test-cross-ns-ems-es-46m4-ems-6dd454f667-cmp44 ====
exec /bin/sh: exec format error
==== END logs for e2e-pwsy4-venus/test-cross-ns-ems-es-46m4-ems-6dd454f667-cmp44 ====
```
For `587` the image pulled is `docker.elastic.co/elastic-maps-service/elastic-maps-server-ubi8@sha256:f8d8eb19d8bf8bbcb4e051a8f3b216339d131b9e7bf51bbf048921b6a7145c69`:
```json
"containerStatuses": [
{
"containerID": "docker://9bee63637904d31e9e272d4cd3a31db46c64cdeddf88d9c59e248d0a83545bd1",
"image": "docker.elastic.co/elastic-maps-service/elastic-maps-server-ubi8:8.5.0",
"imageID": "docker-pullable://docker.elastic.co/elastic-maps-service/elastic-maps-server-ubi8@sha256:f8d8eb19d8bf8bbcb4e051a8f3b216339d131b9e7bf51bbf048921b6a7145c69",
"lastState": {
"terminated": {
"containerID": "docker://9bee63637904d31e9e272d4cd3a31db46c64cdeddf88d9c59e248d0a83545bd1",
"exitCode": 1,
"finishedAt": "2023-01-10T01:58:41Z",
"reason": "Error",
"startedAt": "2023-01-10T01:58:41Z"
}
```
Which seems designed for `amd64`:
```
docker inspect docker.elastic.co/elastic-maps-service/elastic-maps-server-ubi8@sha256:f8d8eb19d8bf8bbcb4e051a8f3b216339d131b9e7bf51bbf048921b6a7145c69 | grep Architecture
"Architecture": "amd64",
```
While nodes are `arm64`:
```json
"nodeInfo": {
"architecture": "arm64",
"containerRuntimeVersion": "docker://20.10.17",
"kernelVersion": "5.4.226-129.415.amzn2.aarch64",
"kubeProxyVersion": "v1.20.15-eks-fb459a0",
"kubeletVersion": "v1.20.15-eks-fb459a0",
"operatingSystem": "linux",
"osImage": "Amazon Linux 2",
}
```
|
test
|
elasticmapsserver pod does not start on arm testelasticmapsservercrossnsassociation elasticmapsserver pods should eventually be ready failed times in a row on arm error in the ems pod logs is start logs for venus test cross ns ems es ems exec bin sh exec format error end logs for venus test cross ns ems es ems for the image pulled is docker elastic co elastic maps service elastic maps server json containerstatuses containerid docker image docker elastic co elastic maps service elastic maps server imageid docker pullable docker elastic co elastic maps service elastic maps server laststate terminated containerid docker exitcode finishedat reason error startedat which seems designed for docker inspect docker elastic co elastic maps service elastic maps server grep architecture architecture while nodes are json nodeinfo architecture containerruntimeversion docker kernelversion kubeproxyversion eks kubeletversion eks operatingsystem linux osimage amazon linux
| 1
|
21,332
| 6,142,652,835
|
IssuesEvent
|
2017-06-27 01:38:03
|
dotnet/coreclr
|
https://api.github.com/repos/dotnet/coreclr
|
opened
|
[x86][LEGACY_BACKEND] Assertion failed 'TypeOfVN(argVN) == TYP_DOUBLE'
|
arch-x86 area-CodeGen bug
|
Tests:
```
Assert failure(PID 8656 [0x000021d0], Thread: 7044 [0x1b84]): Assertion failed 'TypeOfVN(argVN) == TYP_DOUBLE' in 'ILGEN_0x372a9ae6:Method_0xdc6ff1a4(byte,byte,int,int,char,double,long,long):int' (IL size 11052)
JIT\Regression\CLR-x86-JIT\V1-M12-Beta2\b59782\b59782\b59782.cmd
```
run with:
```
set COMPLUS_AltJit=*
set COMPLUS_AltJitNgen=*
set COMPLUS_AltJitName=legacyjit.dll
set COMPLUS_NoGuiOnAssert=1
set COMPLUS_AltJitAssertOnNYI=1
```
|
1.0
|
[x86][LEGACY_BACKEND] Assertion failed 'TypeOfVN(argVN) == TYP_DOUBLE' - Tests:
```
Assert failure(PID 8656 [0x000021d0], Thread: 7044 [0x1b84]): Assertion failed 'TypeOfVN(argVN) == TYP_DOUBLE' in 'ILGEN_0x372a9ae6:Method_0xdc6ff1a4(byte,byte,int,int,char,double,long,long):int' (IL size 11052)
JIT\Regression\CLR-x86-JIT\V1-M12-Beta2\b59782\b59782\b59782.cmd
```
run with:
```
set COMPLUS_AltJit=*
set COMPLUS_AltJitNgen=*
set COMPLUS_AltJitName=legacyjit.dll
set COMPLUS_NoGuiOnAssert=1
set COMPLUS_AltJitAssertOnNYI=1
```
|
non_test
|
assertion failed typeofvn argvn typ double tests assert failure pid thread assertion failed typeofvn argvn typ double in ilgen method byte byte int int char double long long int il size jit regression clr jit cmd run with set complus altjit set complus altjitngen set complus altjitname legacyjit dll set complus noguionassert set complus altjitassertonnyi
| 0
|
1,077
| 2,531,533,145
|
IssuesEvent
|
2015-01-23 08:10:56
|
ajency/Foodstree
|
https://api.github.com/repos/ajency/Foodstree
|
closed
|
Filters not shown for sellers on order screen
|
bug Pushed to test site
|
Steps:
1. Login as a seller.
2. Click on Orders
Current Behaviour: No filters are visible to the user
Expected Behaviour: The filters have to be visible to the user
|
1.0
|
Filters not shown for sellers on order screen - Steps:
1. Login as a seller.
2. Click on Orders
Current Behaviour: No filters are visible to the user
Expected Behaviour: The filters have to be visible to the user
|
test
|
filters not shown for sellers on order screen steps login as a seller click on orders current behaviour no filters are visible to the user expected behaviour the filters have to be visible to the user
| 1
|
323,764
| 9,878,763,382
|
IssuesEvent
|
2019-06-24 08:26:50
|
daleran/meal-folio
|
https://api.github.com/repos/daleran/meal-folio
|
closed
|
Design major components [1]
|
Priority: Hot Theme: Engineering Type: Task
|
Design what the major components in a wireframing tool.
- [x] Recipe List
- [x] Add/Edit Recipe
- [x] View Recipe
|
1.0
|
Design major components [1] - Design what the major components in a wireframing tool.
- [x] Recipe List
- [x] Add/Edit Recipe
- [x] View Recipe
|
non_test
|
design major components design what the major components in a wireframing tool recipe list add edit recipe view recipe
| 0
|
115,254
| 9,785,101,899
|
IssuesEvent
|
2019-06-09 02:50:58
|
SpongePowered/SpongeForge
|
https://api.github.com/repos/SpongePowered/SpongeForge
|
closed
|
Villagers don't drop XP if trade result is shift-clicked out of GUI
|
status: needs testing type: bug version: 1.12
|
**I am currently running**
- SpongeForge version: 1.12.2-2768-7.1.4
- Forge version: 2768
- Java version: 8
**Issue Description**
As always, title has it all. If you hold Shift and press left-click on trade result in villager's GUI - it doesn't drop any XP. However, if you just left-click the trade result - XP is dropped immediately.
Might be SpongeCommon bug. Also might have been fixed already.
|
1.0
|
Villagers don't drop XP if trade result is shift-clicked out of GUI - **I am currently running**
- SpongeForge version: 1.12.2-2768-7.1.4
- Forge version: 2768
- Java version: 8
**Issue Description**
As always, title has it all. If you hold Shift and press left-click on trade result in villager's GUI - it doesn't drop any XP. However, if you just left-click the trade result - XP is dropped immediately.
Might be SpongeCommon bug. Also might have been fixed already.
|
test
|
villagers don t drop xp if trade result is shift clicked out of gui i am currently running spongeforge version forge version java version issue description as always title has it all if you hold shift and press left click on trade result in villager s gui it doesn t drop any xp however if you just left click the trade result xp is dropped immediately might be spongecommon bug also might have been fixed already
| 1
|
165,161
| 6,264,629,170
|
IssuesEvent
|
2017-07-16 10:03:50
|
pmrukot/aion
|
https://api.github.com/repos/pmrukot/aion
|
opened
|
Refactor Elm
|
Priority: Medium Status: Blocked Type: Question
|
**Type**
Enhancement
**Current behaviour**
We need to improve our frontend code; the more we add to it, the worse it gets. We should do this as soon as we finish with #34
**Expected behaviour**
My suggestions:
- [ ] `roomId` is an `Int`, not sure why that's the case, as in most of the cases we convert it to String.
We could just make it a string and AFAIK, that's the way it should be done anyway.
- [ ] Actually in the later stages I would refrain from using a raw id as it's not so safe, I believe our frontend part should use encoded ids, we don't want to let users know how much data we actually have in our dbs.
- [ ] As for the architecture:
* Create Update.elm for Room and move all room-specific logic to this file
* Create Msgs.elm for Room resource
* I believe that we should have separate modules for `Room`, `Profile` (#35) and `DataPanel` (or however we would call the module taking care of the question, subject forms etc.)
**Motivation / use case**
Our project grows larger, we need to remodel it so that it's easier to maintain.
|
1.0
|
Refactor Elm - **Type**
Enhancement
**Current behaviour**
We need to improve our frontend code; the more we add to it, the worse it gets. We should do this as soon as we finish with #34
**Expected behaviour**
My suggestions:
- [ ] `roomId` is an `Int`, not sure why that's the case, as in most of the cases we convert it to String.
We could just make it a string and AFAIK, that's the way it should be done anyway.
- [ ] Actually in the later stages I would refrain from using a raw id as it's not so safe, I believe our frontend part should use encoded ids, we don't want to let users know how much data we actually have in our dbs.
- [ ] As for the architecture:
* Create Update.elm for Room and move all room-specific logic to this file
* Create Msgs.elm for Room resource
* I believe that we should have separate modules for `Room`, `Profile` (#35) and `DataPanel` (or however we would call the module taking care of the question, subject forms etc.)
**Motivation / use case**
Our project grows larger, we need to remodel it so that it's easier to maintain.
|
non_test
|
refactor elm type enhancement current behaviour we need to improve our frontend code the more we add to it the worse it gets we should do this as soon as we finish with expected behaviour my suggestions roomid is an int not sure why that s the case as in most of the cases we convert it to string we could just make it a string and afaik that s the way it should be done anyway actually in the later stages i would refrain from using a raw id as it s not so safe i believe our frontend part should use encoded ids we don t want to let users know how much data we actually have in our dbs as for the architecture create update elm for room and move all room specific logic to this file create msgs elm for room resource i believe that we should have separate modules for room profile and datapanel or however we would call the module taking care of the question subject forms etc motivation use case our project grows larger we need to remodel it so that it s easier to maintain
| 0
|
548,018
| 16,055,550,267
|
IssuesEvent
|
2021-04-23 04:03:55
|
bryntum/support
|
https://api.github.com/repos/bryntum/support
|
closed
|
Grid scroll not working after store.add when store is filtered
|
bug high-priority resolved
|
Steps to reproduce:
1 - Open: https://www.bryntum.com/examples/grid/filtering/
2 - run: `grid.store.add({name: 'Peter'})`
Impossible to scroll down to see the last record added to grid.
|
1.0
|
Grid scroll not working after store.add when store is filtered - Steps to reproduce:
1 - Open: https://www.bryntum.com/examples/grid/filtering/
2 - run: `grid.store.add({name: 'Peter'})`
Impossible to scroll down to see the last record added to grid.
|
non_test
|
grid scroll not working after store add when store is filtered steps to reproduce open run grid store add name peter impossible to scroll down to see the last record added to grid
| 0
|
476,098
| 13,733,629,003
|
IssuesEvent
|
2020-10-05 07:25:30
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
ders.eba.gov.tr - desktop site instead of mobile site
|
browser-firefox engine-gecko ml-needsdiagnosis-false priority-normal
|
<!-- @browser: Firefox 83.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; rv:83.0) Gecko/20100101 Firefox/83.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/59336 -->
**URL**: http://ders.eba.gov.tr/ders/verifyredirect?
**Browser / Version**: Firefox 83.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes Chrome
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/10/ba49d85c-ec54-44f5-9a0b-6a9d4a558d64.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201004093007</li><li>channel: nightly</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/10/1833e3f9-d490-4509-950a-36af3cd46671)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
ders.eba.gov.tr - desktop site instead of mobile site - <!-- @browser: Firefox 83.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; rv:83.0) Gecko/20100101 Firefox/83.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/59336 -->
**URL**: http://ders.eba.gov.tr/ders/verifyredirect?
**Browser / Version**: Firefox 83.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes Chrome
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/10/ba49d85c-ec54-44f5-9a0b-6a9d4a558d64.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201004093007</li><li>channel: nightly</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/10/1833e3f9-d490-4509-950a-36af3cd46671)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
ders eba gov tr desktop site instead of mobile site url browser version firefox operating system windows tested another browser yes chrome problem type desktop site instead of mobile site description desktop site instead of mobile site steps to reproduce view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
823,313
| 30,989,705,512
|
IssuesEvent
|
2023-08-09 02:49:36
|
Karooobar/Voyager
|
https://api.github.com/repos/Karooobar/Voyager
|
closed
|
Rewrite DropDown component to use Select and option.
|
Low Priority
|
Rewrite this dropdown component, src/components/Dropdown, in such a way that it uses the Select and option of the HTML. All the behavior and the styling should match. Only use tailwind for styling. By rewriting it we can remove the uses of `handleClickOutside`, `dropdownRef`, and other unnecessary stuff.
|
1.0
|
Rewrite DropDown component to use Select and option. - Rewrite this dropdown component, src/components/Dropdown, in such a way that it uses the Select and option of the HTML. All the behavior and the styling should match. Only use tailwind for styling. By rewriting it we can remove the uses of `handleClickOutside`, `dropdownRef`, and other unnecessary stuff.
|
non_test
|
rewrite dropdown component to use select and option rewrite this dropdown component src components dropdown in such a way that it uses the select and option of the html all the behavior and the styling should match only use tailwind for styling by rewriting it we can remove the uses of handleclickoutside dropdownref and other unnecessary stuff
| 0
|
27,503
| 6,877,703,305
|
IssuesEvent
|
2017-11-20 09:10:14
|
Appliscale/serverless-cat-detector
|
https://api.github.com/repos/Appliscale/serverless-cat-detector
|
closed
|
Lambda Implementation - Image Classification
|
source code
|
We need an _AWS Lambda_ implementation that will:
- Handle previously uploaded file,
- It will invoke _AWS Rekognition_ service,
- Results will be saved in the corresponding _DynamoDB_ record.
The same goes for _AWS API Gateway_ configuration.
|
1.0
|
Lambda Implementation - Image Classification - We need an _AWS Lambda_ implementation that will:
- Handle previously uploaded file,
- It will invoke _AWS Rekognition_ service,
- Results will be saved in the corresponding _DynamoDB_ record.
The same goes for _AWS API Gateway_ configuration.
|
non_test
|
lambda implementation image classification we need an aws lambda implementation that will handle previously uploaded file it will invoke aws rekognition service results will be saved in the corresponding dynamodb record the same goes for aws api gateway configuration
| 0
|
94,238
| 8,477,065,752
|
IssuesEvent
|
2018-10-25 00:59:12
|
Brycey92/Galaxy-Craft-Issues
|
https://api.github.com/repos/Brycey92/Galaxy-Craft-Issues
|
closed
|
Garden Cloches are insane
|
fixed - needs testing
|
**Pack version**
1.0.3
**Describe the bug**
Garden cloches are absolutely nuts; they only use ~8rf/t and can fill a gold chest with product in under a few hours with only water as an input, no replant time, and only take up 3 blocks.
|
1.0
|
Garden Cloches are insane - **Pack version**
1.0.3
**Describe the bug**
Garden cloches are absolutely nuts; they only use ~8rf/t and can fill a gold chest with product in under a few hours with only water as an input, no replant time, and only take up 3 blocks.
|
test
|
garden cloches are insane pack version describe the bug garden cloches are absolutely nuts they only use t and can fill a gold chest with product in under a few hours with only water as an input no replant time and only take up blocks
| 1
|
138,249
| 11,196,312,702
|
IssuesEvent
|
2020-01-03 09:45:13
|
ulmo-dev/ulmo
|
https://api.github.com/repos/ulmo-dev/ulmo
|
closed
|
Many tests are now failing
|
tests
|
On the latest PR, many tests are now failing on Travis-CI, that were not failing when version 0.8.5 was released. Here's [one of the reports](https://travis-ci.org/ulmo-dev/ulmo/jobs/550913422), and its summary of results:
```
39 failed, 41 passed, 30 deselected, 12 warnings
```
For comparison, the [Travis-CI report from the issuing of the 0.8.5 release](https://travis-ci.org/ulmo-dev/ulmo/jobs/510456156) 3 months ago had these results:
```
3 failed, 79 passed, 30 deselected, 19 warnings
```
|
1.0
|
Many tests are now failing - On the latest PR, many tests are now failing on Travis-CI, that were not failing when version 0.8.5 was released. Here's [one of the reports](https://travis-ci.org/ulmo-dev/ulmo/jobs/550913422), and its summary of results:
```
39 failed, 41 passed, 30 deselected, 12 warnings
```
For comparison, the [Travis-CI report from the issuing of the 0.8.5 release](https://travis-ci.org/ulmo-dev/ulmo/jobs/510456156) 3 months ago had these results:
```
3 failed, 79 passed, 30 deselected, 19 warnings
```
|
test
|
many tests are now failing on the latest pr many tests are now failing on travis ci that were not failing when version was released here s and its summary of results failed passed deselected warnings for comparison the months ago had these results failed passed deselected warnings
| 1
|
138,610
| 11,208,623,661
|
IssuesEvent
|
2020-01-06 08:24:17
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
Allow custom new platform plugin paths in integration tests
|
Feature:New Platform Team:Platform blocker test-api-integration
|
We used to create custom plugins for various integration tests that expose custom endpoints (e.g. to serve as fake IdP) and whatnot. Currently it's possible to load new platform plugins from custom paths, but only when Kibana is run in dev mode. Unfortunately this doesn't help since integration tests on CI are run against "prod" Kibana.
/cc @legrego
|
1.0
|
Allow custom new platform plugin paths in integration tests - We used to create custom plugins for various integration tests that expose custom endpoints (e.g. to serve as fake IdP) and whatnot. Currently it's possible to load new platform plugins from custom paths, but only when Kibana is run in dev mode. Unfortunately this doesn't help since integration tests on CI are run against "prod" Kibana.
/cc @legrego
|
test
|
allow custom new platform plugin paths in integration tests we used to create custom plugins for various integration tests that expose custom endpoints e g to serve as fake idp and whatnot currently it s possible to load new platform plugins from custom paths but only when kibana is run in dev mode unfortunately this doesn t help since integration tests on ci are run against prod kibana cc legrego
| 1
|
178,305
| 14,666,860,240
|
IssuesEvent
|
2020-12-29 17:13:55
|
PyFstat/PyFstat
|
https://api.github.com/repos/PyFstat/PyFstat
|
opened
|
html documentation via sphinx/autodoc & readthedocs
|
documentation
|
I've had a look at using [sphinx/autodoc](https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html) and [readthedocs](https://readthedocs.org/) to generate and host html documentation pages, and at least the generation part seems straightforward enough.
- [ ] basic sphinx setup
- [ ] publish on readthedocs
- [ ] go through warnings and obvious issues
- [ ] include examples
- [ ] extend and improve documentation where lacking
|
1.0
|
html documentation via sphinx/autodoc & readthedocs - I've had a look at using [sphinx/autodoc](https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html) and [readthedocs](https://readthedocs.org/) to generate and host html documentation pages, and at least the generation part seems straightforward enough.
- [ ] basic sphinx setup
- [ ] publish on readthedocs
- [ ] go through warnings and obvious issues
- [ ] include examples
- [ ] extend and improve documentation where lacking
|
non_test
|
html documentation via sphinx autodoc readthedocs i ve had a look at using and to generate and host html documentation pages and at least the generation part seems straightforward enough basic sphinx setup publish on readthedocs go through warnings and obvious issues include examples extend and improve documentation where lacking
| 0
|
167,488
| 13,031,742,089
|
IssuesEvent
|
2020-07-28 02:10:42
|
streamnative/pulsar
|
https://api.github.com/repos/streamnative/pulsar
|
closed
|
ISSUE-5012: [flaky test] BasicEndToEndTest.testNegativeAcks
|
component/test flaky-tests type/bug
|
Original Issue: apache/pulsar#5012
---
```bash
[69/156] BasicEndToEndTest.testNegativeAcks (167 ms)
Note: Google Test filter = BasicEndToEndTest.testNegativeAcks
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from BasicEndToEndTest
[ RUN ] BasicEndToEndTest.testNegativeAcks
2019-08-22 09:28:07.862 INFO Client:88 | Subscribing on Topic :testNegativeAcks-1566466087
2019-08-22 09:28:07.862 INFO ConnectionPool:72 | Created connection for pulsar://localhost:6650
2019-08-22 09:28:07.863 INFO ClientConnection:324 | [127.0.0.1:33678 -> 127.0.0.1:6650] Connected to broker
2019-08-22 09:28:07.866 INFO HandlerBase:52 | [persistent://public/default/testNegativeAcks-1566466087, test, 0] Getting connection from pool
2019-08-22 09:28:07.892 INFO ConsumerImpl:170 | [persistent://public/default/testNegativeAcks-1566466087, test, 0] Created consumer on broker [127.0.0.1:33678 -> 127.0.0.1:6650]
2019-08-22 09:28:07.897 INFO HandlerBase:52 | [persistent://public/default/testNegativeAcks-1566466087, ] Getting connection from pool
2019-08-22 09:28:07.901 INFO ProducerImpl:155 | [persistent://public/default/testNegativeAcks-1566466087, ] Created producer on broker [127.0.0.1:33678 -> 127.0.0.1:6650]
/pulsar/pulsar-client-cpp/tests/BasicEndToEndTest.cc:2909: Failure
Value of: res
Actual: Ok
Expected: ResultTimeout
Which is: TimeOut
2019-08-22 09:28:07.983 WARN ConsumerImpl:98 | [persistent://public/default/testNegativeAcks-1566466087, test, 0] Destroyed consumer which was not properly closed
2019-08-22 09:28:07.983 INFO ConsumerImpl:839 | [persistent://public/default/testNegativeAcks-1566466087, test, 0] Closing consumer for topic persistent://public/default/testNegativeAcks-1566466087
2019-08-22 09:28:07.983 INFO ProducerImpl:469 | Producer - [persistent://public/default/testNegativeAcks-1566466087, standalone-0-140] , [batching = off]
[ FAILED ] BasicEndToEndTest.testNegativeAcks (133 ms)
[----------] 1 test from BasicEndToEndTest (133 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (133 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] BasicEndToEndTest.testNegativeAcks
1 FAILED TEST
```
|
2.0
|
ISSUE-5012: [flaky test] BasicEndToEndTest.testNegativeAcks - Original Issue: apache/pulsar#5012
---
```bash
[69/156] BasicEndToEndTest.testNegativeAcks (167 ms)
Note: Google Test filter = BasicEndToEndTest.testNegativeAcks
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from BasicEndToEndTest
[ RUN ] BasicEndToEndTest.testNegativeAcks
2019-08-22 09:28:07.862 INFO Client:88 | Subscribing on Topic :testNegativeAcks-1566466087
2019-08-22 09:28:07.862 INFO ConnectionPool:72 | Created connection for pulsar://localhost:6650
2019-08-22 09:28:07.863 INFO ClientConnection:324 | [127.0.0.1:33678 -> 127.0.0.1:6650] Connected to broker
2019-08-22 09:28:07.866 INFO HandlerBase:52 | [persistent://public/default/testNegativeAcks-1566466087, test, 0] Getting connection from pool
2019-08-22 09:28:07.892 INFO ConsumerImpl:170 | [persistent://public/default/testNegativeAcks-1566466087, test, 0] Created consumer on broker [127.0.0.1:33678 -> 127.0.0.1:6650]
2019-08-22 09:28:07.897 INFO HandlerBase:52 | [persistent://public/default/testNegativeAcks-1566466087, ] Getting connection from pool
2019-08-22 09:28:07.901 INFO ProducerImpl:155 | [persistent://public/default/testNegativeAcks-1566466087, ] Created producer on broker [127.0.0.1:33678 -> 127.0.0.1:6650]
/pulsar/pulsar-client-cpp/tests/BasicEndToEndTest.cc:2909: Failure
Value of: res
Actual: Ok
Expected: ResultTimeout
Which is: TimeOut
2019-08-22 09:28:07.983 WARN ConsumerImpl:98 | [persistent://public/default/testNegativeAcks-1566466087, test, 0] Destroyed consumer which was not properly closed
2019-08-22 09:28:07.983 INFO ConsumerImpl:839 | [persistent://public/default/testNegativeAcks-1566466087, test, 0] Closing consumer for topic persistent://public/default/testNegativeAcks-1566466087
2019-08-22 09:28:07.983 INFO ProducerImpl:469 | Producer - [persistent://public/default/testNegativeAcks-1566466087, standalone-0-140] , [batching = off]
[ FAILED ] BasicEndToEndTest.testNegativeAcks (133 ms)
[----------] 1 test from BasicEndToEndTest (133 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (133 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] BasicEndToEndTest.testNegativeAcks
1 FAILED TEST
```
|
test
|
issue basicendtoendtest testnegativeacks original issue apache pulsar bash basicendtoendtest testnegativeacks ms note google test filter basicendtoendtest testnegativeacks running test from test case global test environment set up test from basicendtoendtest basicendtoendtest testnegativeacks info client subscribing on topic testnegativeacks info connectionpool created connection for pulsar localhost info clientconnection connected to broker info handlerbase getting connection from pool info consumerimpl created consumer on broker info handlerbase getting connection from pool info producerimpl created producer on broker pulsar pulsar client cpp tests basicendtoendtest cc failure value of res actual ok expected resulttimeout which is timeout warn consumerimpl destroyed consumer which was not properly closed info consumerimpl closing consumer for topic persistent public default testnegativeacks info producerimpl producer basicendtoendtest testnegativeacks ms test from basicendtoendtest ms total global test environment tear down test from test case ran ms total tests test listed below basicendtoendtest testnegativeacks failed test
| 1
|
83,775
| 24,142,845,626
|
IssuesEvent
|
2022-09-21 16:05:55
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
Installing conda package from pytorch-nightly fails with `libcupti.so.X.Y` not found
|
module: build module: cuda triaged module: regression
|
### 🐛 Describe the bug
Attempts to run
```
$ conda create -n py_3.7-torch-nightly -c pytorch-nightly cudatoolkit==11.3.1 pytorch python==3.7
$ conda run -n py_3.7-torch-nightly python -c "import torch"
```
will fail with
```
ERROR conda.cli.main_run:execute(33): Subprocess for 'conda run ['python', '-c', 'import torch']' command failed. (See above for error)
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/fsx/users/nshulga/conda/envs/py_3.7-torch-nightly/lib/python3.7/site-packages/torch/__init__.py", line 199, in <module>
from torch._C import * # noqa: F403
ImportError: libcupti.so.11.3: cannot open shared object file: No such file or directory
```
For more details see https://github.com/pytorch/vision/issues/5635
### Versions
nightly
cc @ezyang @gchanan @zou3519 @malfet @seemethere @ngimel
|
1.0
|
Installing conda package from pytorch-nightly fails with `libcupti.so.X.Y` not found - ### 🐛 Describe the bug
Attempts to run
```
$ conda create -n py_3.7-torch-nightly -c pytorch-nightly cudatoolkit==11.3.1 pytorch python==3.7
$ conda run -n py_3.7-torch-nightly python -c "import torch"
```
will fail with
```
ERROR conda.cli.main_run:execute(33): Subprocess for 'conda run ['python', '-c', 'import torch']' command failed. (See above for error)
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/fsx/users/nshulga/conda/envs/py_3.7-torch-nightly/lib/python3.7/site-packages/torch/__init__.py", line 199, in <module>
from torch._C import * # noqa: F403
ImportError: libcupti.so.11.3: cannot open shared object file: No such file or directory
```
For more details see https://github.com/pytorch/vision/issues/5635
### Versions
nightly
cc @ezyang @gchanan @zou3519 @malfet @seemethere @ngimel
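A quick way to sanity-check this report is to look for the library inside the environment itself. The sketch below is not from the issue; it only assumes a standard conda layout (`$CONDA_PREFIX/lib`) and uses standard-library calls.
```python
# Hypothetical diagnostic: is libcupti present in the active conda env at all?
import ctypes.util
import glob
import os

prefix = os.environ.get("CONDA_PREFIX", "")
hits = glob.glob(os.path.join(prefix, "lib", "libcupti.so*"))
print("libcupti files in env:", hits or "none found")
# The dynamic loader's own view; None means it cannot resolve the name.
print("ctypes resolution:", ctypes.util.find_library("cupti"))
```
If nothing turns up, the environment simply never received CUPTI, which is consistent with the import error above.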
|
non_test
|
installing conda package from pytorch nightly fails with libcupti so x y not found 🐛 describe the bug attempts to run conda create n py torch nightly c pytorch nightly cudatoolkit pytorch python conda run n py torch nightly python c import torch will fail with error conda cli main run execute subprocess for conda run command failed see above for error traceback most recent call last file line in file fsx users nshulga conda envs py torch nightly lib site packages torch init py line in from torch c import noqa importerror libcupti so cannot open shared object file no such file or directory for more details see versions nightly cc ezyang gchanan malfet seemethere ngimel
| 0
|
144,186
| 11,597,057,158
|
IssuesEvent
|
2020-02-24 20:05:45
|
zio/zio
|
https://api.github.com/repos/zio/zio
|
closed
|
Flaky ZIO ForkAll Test
|
help wanted tests
|
```scala
[info] ZIO Test
[info] - ZIO
[info] - forkAll
[info] - propagates defects
```
https://circleci.com/gh/zio/zio/36839?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
|
1.0
|
Flaky ZIO ForkAll Test - ```scala
[info] ZIO Test
[info] - ZIO
[info] - forkAll
[info] - propagates defects
```
https://circleci.com/gh/zio/zio/36839?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
|
test
|
flaky zio forkall test scala zio test zio forkall propagates defects
| 1
|
174,150
| 13,459,225,054
|
IssuesEvent
|
2020-09-09 11:56:13
|
ExchangeUnion/xud
|
https://api.github.com/repos/ExchangeUnion/xud
|
opened
|
Simulation Test for connext on-chain resolution
|
P1 automated tests connext
|
Progress of on-chain disputes on connext side can be monitored in: https://github.com/connext/indra/issues/1412
|
1.0
|
Simulation Test for connext on-chain resolution - Progress of on-chain disputes on connext side can be monitored in: https://github.com/connext/indra/issues/1412
|
test
|
simulation test for connext on chain resolution progress of on chain disputes on connext side can be monitored in
| 1
|
60,972
| 8,481,780,901
|
IssuesEvent
|
2018-10-25 16:37:43
|
arangodb/arangodb
|
https://api.github.com/repos/arangodb/arangodb
|
closed
|
Support-Links should link to Documents corresponding to the current version of Admin, i.e. 3.4 ...
|
1 Bug 2 Fixed 3 Documentation 3 UI
|
ArangoDb 3.4 RC1 - Admin

... instead of the latest stable version, i.e. 3.3
|
1.0
|
Support-Links should link to Documents corresponding to the current version of Admin, i.e. 3.4 ... - ArangoDb 3.4 RC1 - Admin

... instead of the latest stable version, i.e. 3.3
|
non_test
|
support links should link to documents corresponding to the current version of admin i e arangodb admin instead of latest stable version i e
| 0
|
266,826
| 23,262,684,319
|
IssuesEvent
|
2022-08-04 14:41:32
|
geocollections/eMaapou
|
https://api.github.com/repos/geocollections/eMaapou
|
closed
|
Specimen filter issues
|
Ready for testing
|
Issues with specimen detailed search:
- search by specimen number searches from specimen_full_name field, https://api.geoloogia.info/solr/specimen?q=%2A&fq=%28%28specimen_full_name%3A%2ASc%20101%2A%29%29&start=0&rows=25 and returns all records
- this index field should be renamed to specimen_number_full
- the specimen number filter should use the fields specimen_number, specimen_number_old, specimen_number_full
- the search field "fossil" is not the fossil name but the fossil group; the label should be renamed to "fossil group" / "fossiilirühm"
- filter by fossil name is needed (from fields taxon, taxon_txt, taxon_full)
- index field hierarchy_string should be renamed to taxon_hierarchy
- how to search this field hierarchically - similarly to stratigraphy filter
- filter by related reference is needed - autocomplete like for stratigraphy
- filter by rock name using rock, rock_en, rock_txt, rock_txt_en, formula
|
1.0
|
Specimen filter issues - Issues with specimen detailed search:
- search by specimen number searches from specimen_full_name field, https://api.geoloogia.info/solr/specimen?q=%2A&fq=%28%28specimen_full_name%3A%2ASc%20101%2A%29%29&start=0&rows=25 and returns all records
- this index field should be renamed to specimen_number_full
- the specimen number filter should use the fields specimen_number, specimen_number_old, specimen_number_full (see the query sketch after this list)
- the search field "fossil" is not the fossil name but the fossil group; the label should be renamed to "fossil group" / "fossiilirühm"
- filter by fossil name is needed (from fields taxon, taxon_txt, taxon_full)
- index field hierarchy_string should be renamed to taxon_hierarchy
- how to search this field hierarchically - similarly to stratigraphy filter
- filter by related reference is needed - autocomplete like for stratigraphy
- filter by rock name using rock, rock_en, rock_txt, rock_txt_en, formula
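To make the multi-field number filter concrete, here is a hedged sketch of the kind of Solr query the list above asks for. The endpoint and field names come from the issue text; the example value and the `requests` usage are assumptions.
```python
# Illustrative Solr query: OR the three specimen-number fields together.
import requests

number = "Sc 101"  # example value taken from the URL quoted above
params = {
    "q": "*",
    "fq": (f'specimen_number:"{number}" OR '
           f'specimen_number_old:"{number}" OR '
           f'specimen_number_full:"{number}"'),
    "rows": 25,
}
resp = requests.get("https://api.geoloogia.info/solr/specimen", params=params)
print(resp.json()["response"]["numFound"], "matching records")
```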
|
test
|
specimen filter issues issues with specimen detailed search search by specimen number searches from specimen full name field and returns all records this index field should be renamed to specimen number full specimen number filter should use fields specimen number specimen number old specimen number full search field fossil is not fossil name but group label should be renamed to fossil group fossiilirühm filter by fossil name is needed from fields taxon taxon txt taxon full index field hierarchy string should be renamed to taxon hierarchy how to search this field hierarchically similarly to stratigraphy filter filter by related reference is needed autocomplete like for stratigraphy filter by rock name using rock rock en rock txt rock txt en formula
| 1
|
115,612
| 9,806,518,107
|
IssuesEvent
|
2019-06-12 11:37:48
|
mono/mono
|
https://api.github.com/repos/mono/mono
|
opened
|
[netcore] Make System.Reflection.Tests.AssemblyNameTests.Version Pass
|
area-netcore: CoreLib epic: CoreFX tests
|
The test currently fails with:
```
System.Reflection.Tests.AssemblyNameTests.Version(version: 255.1, versionString: "255.1") [FAIL]
Expected
MyAssemblyName, Version=255.1, PublicKeyToken=null == MyAssemblyName, Version=255.1
or
MyAssemblyName, Version=255.1, PublicKeyToken=null == MyAssemblyName, Version=255.1, Culture=neutral, PublicKeyToken=null
Expected: True
Actual: False
Stack Trace:
at System.Reflection.Tests.AssemblyNameTests.Version(Version version, String versionString)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
System.Reflection.Tests.AssemblyNameTests.Version(version: 255.1.2, versionString: "255.1.2") [FAIL]
Expected
MyAssemblyName, Version=255.1.2, PublicKeyToken=null == MyAssemblyName, Version=255.1.2
or
MyAssemblyName, Version=255.1.2, PublicKeyToken=null == MyAssemblyName, Version=255.1.2, Culture=neutral, PublicKeyToken=null
Expected: True
Actual: False
Stack Trace:
at System.Reflection.Tests.AssemblyNameTests.Version(Version version, String versionString)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
System.Reflection.Tests.AssemblyNameTests.Version(version: 255.1.2.3, versionString: "255.1.2.3") [FAIL]
Expected
MyAssemblyName, Version=255.1.2.3, PublicKeyToken=null == MyAssemblyName, Version=255.1.2.3
or
MyAssemblyName, Version=255.1.2.3, PublicKeyToken=null == MyAssemblyName, Version=255.1.2.3, Culture=neutral, PublicKeyToken=null
Expected: True
Actual: False
Stack Trace:
at System.Reflection.Tests.AssemblyNameTests.Version(Version version, String versionString)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
System.Reflection.Tests.AssemblyNameTests.Version(version: 1.2.131071.4, versionString: "1.2") [FAIL]
Expected
MyAssemblyName, Version=1.2, PublicKeyToken=null == MyAssemblyName, Version=1.2
or
MyAssemblyName, Version=1.2, PublicKeyToken=null == MyAssemblyName, Version=1.2, Culture=neutral, PublicKeyToken=null
Expected: True
Actual: False
Stack Trace:
at System.Reflection.Tests.AssemblyNameTests.Version(Version version, String versionString)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
```
Find out why the right version isn't being returned and make the test pass.
|
1.0
|
[netcore] Make System.Reflection.Tests.AssemblyNameTests.Version Pass - The test currently fails with:
```
System.Reflection.Tests.AssemblyNameTests.Version(version: 255.1, versionString: "255.1") [FAIL]
Expected
MyAssemblyName, Version=255.1, PublicKeyToken=null == MyAssemblyName, Version=255.1
or
MyAssemblyName, Version=255.1, PublicKeyToken=null == MyAssemblyName, Version=255.1, Culture=neutral, PublicKeyToken=null
Expected: True
Actual: False
Stack Trace:
at System.Reflection.Tests.AssemblyNameTests.Version(Version version, String versionString)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
System.Reflection.Tests.AssemblyNameTests.Version(version: 255.1.2, versionString: "255.1.2") [FAIL]
Expected
MyAssemblyName, Version=255.1.2, PublicKeyToken=null == MyAssemblyName, Version=255.1.2
or
MyAssemblyName, Version=255.1.2, PublicKeyToken=null == MyAssemblyName, Version=255.1.2, Culture=neutral, PublicKeyToken=null
Expected: True
Actual: False
Stack Trace:
at System.Reflection.Tests.AssemblyNameTests.Version(Version version, String versionString)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
System.Reflection.Tests.AssemblyNameTests.Version(version: 255.1.2.3, versionString: "255.1.2.3") [FAIL]
Expected
MyAssemblyName, Version=255.1.2.3, PublicKeyToken=null == MyAssemblyName, Version=255.1.2.3
or
MyAssemblyName, Version=255.1.2.3, PublicKeyToken=null == MyAssemblyName, Version=255.1.2.3, Culture=neutral, PublicKeyToken=null
Expected: True
Actual: False
Stack Trace:
at System.Reflection.Tests.AssemblyNameTests.Version(Version version, String versionString)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
System.Reflection.Tests.AssemblyNameTests.Version(version: 1.2.131071.4, versionString: "1.2") [FAIL]
Expected
MyAssemblyName, Version=1.2, PublicKeyToken=null == MyAssemblyName, Version=1.2
or
MyAssemblyName, Version=1.2, PublicKeyToken=null == MyAssemblyName, Version=1.2, Culture=neutral, PublicKeyToken=null
Expected: True
Actual: False
Stack Trace:
at System.Reflection.Tests.AssemblyNameTests.Version(Version version, String versionString)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
```
Find out why the right version isn't being returned and make the test pass.
|
test
|
make system reflection tests assemblynametests version pass the test currently fails with system reflection tests assemblynametests version version versionstring expected myassemblyname version publickeytoken null myassemblyname version or myassemblyname version publickeytoken null myassemblyname version culture neutral publickeytoken null expected true actual false stack trace at system reflection tests assemblynametests version version version string versionstring at system reflection runtimemethodinfo invoke object obj bindingflags invokeattr binder binder object parameters cultureinfo culture system reflection tests assemblynametests version version versionstring expected myassemblyname version publickeytoken null myassemblyname version or myassemblyname version publickeytoken null myassemblyname version culture neutral publickeytoken null expected true actual false stack trace at system reflection tests assemblynametests version version version string versionstring at system reflection runtimemethodinfo invoke object obj bindingflags invokeattr binder binder object parameters cultureinfo culture system reflection tests assemblynametests version version versionstring expected myassemblyname version publickeytoken null myassemblyname version or myassemblyname version publickeytoken null myassemblyname version culture neutral publickeytoken null expected true actual false stack trace at system reflection tests assemblynametests version version version string versionstring at system reflection runtimemethodinfo invoke object obj bindingflags invokeattr binder binder object parameters cultureinfo culture system reflection tests assemblynametests version version versionstring expected myassemblyname version publickeytoken null myassemblyname version or myassemblyname version publickeytoken null myassemblyname version culture neutral publickeytoken null expected true actual false stack trace at system reflection tests assemblynametests version version version string versionstring at system reflection runtimemethodinfo invoke object obj bindingflags invokeattr binder binder object parameters cultureinfo culture find out why the right version isn t being returned and make the test pass
| 1
|
67,929
| 13,047,221,936
|
IssuesEvent
|
2020-07-29 10:19:06
|
Regalis11/Barotrauma
|
https://api.github.com/repos/Regalis11/Barotrauma
|
closed
|
Problem with copying wires.
|
Bug Code
|
When copying wires (laid out from inventory and not connected at all), you cannot interact with copies of wires in any way.
|
1.0
|
Problem with copying wires. - When copying wires (laid out from inventory and not connected at all), you cannot interact with copies of wires in any way.
|
non_test
|
problem with copying wires when copying wires laid out from inventory and not connected at all you cannot interact with copies of wires in any way
| 0
|
195,023
| 14,699,689,832
|
IssuesEvent
|
2021-01-04 09:00:04
|
filecoin-project/venus
|
https://api.github.com/repos/filecoin-project/venus
|
closed
|
Feature request: localnet has some CLI option so repos aren't deleted afterward
|
A-FAST A-tests
|
### Description
For testing and debugging purposes, I've been using localnet (awesome!) to generate small repos. Unfortunately if I want to do something with the repos it creates, I have to go through the cumbersome process of suspending localnet, copying the repo(s), removing lockfiles, etc. and then killing localnet. It would be greatly appreciated if there were an option to leave the repos in place.
### Acceptance criteria
localnet can be run with a --keep or something like that, which doesn't delete the repos it creates after it's stopped.
### Risks + pitfalls
I can't think of any.
### Where to begin
localnet :)
|
1.0
|
Feature request: localnet has some CLI option so repos aren't deleted afterward - ### Description
For testing and debugging purposes, I've been using localnet (awesome!) to generate small repos. Unfortunately if I want to do something with the repos it creates, I have to go through the cumbersome process of suspending localnet, copying the repo(s), removing lockfiles, etc. and then killing localnet. It would be greatly appreciated if there were an option to leave the repos in place.
### Acceptance criteria
localnet can be run with a --keep or something like that, which doesn't delete the repos it creates after it's stopped.
### Risks + pitfalls
I can't think of any.
### Where to begin
localnet :)
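As a sketch of what the acceptance criterion could look like in code (illustrative only: the real localnet is not written in Python, and all names here are invented):
```python
# Hypothetical flag handling: --keep gates the repo cleanup on shutdown.
import argparse
import shutil

parser = argparse.ArgumentParser(prog="localnet")
parser.add_argument("--keep", action="store_true",
                    help="leave generated node repos in place after shutdown")
args = parser.parse_args()

def shutdown(repo_dirs):
    if args.keep:
        return  # user asked to inspect the repos afterwards
    for d in repo_dirs:
        shutil.rmtree(d, ignore_errors=True)
```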
|
test
|
feature request localnet has some cli option so repos aren t deleted afterward description for testing and debugging purposes i ve been using localnet awesome to generate small repos unfortunately if i want to do something with the repos it creates i have to go through the cumbersome process of suspending localnet copying the repo s removing lockfiles etc and then killing localnet it would be greatly appreciated if there were an option to leave the repos in place acceptance criteria localnet can be run with a keep or something like that which doesn t delete the repos it creates after it s stopped risks pitfalls i can t think of any where to begin localnet
| 1
|
316,645
| 27,173,039,863
|
IssuesEvent
|
2023-02-17 21:25:37
|
godotengine/godot
|
https://api.github.com/repos/godotengine/godot
|
closed
|
Instantiating scenes that contain CPUParticles2D node causes spikes in the profiler (not just the first time)
|
bug needs testing
|
### Godot version
3.5.2.rc2.official
### System information
Windows10 / GLES3
### Issue description
Instantiating scenes containing nodes of type CPUParticles2D causes spikes in the profiler:

Oddly the profiler only shows 0.05ms for script functions, while shoot projectile took 16.06ms, maybe this is just normal behavior, but it seems to cause stutter issues.
### Steps to reproduce
Instantiated scenes containing CPUParticles2D
### Minimal reproduction project
[ParticleTest.zip](https://github.com/godotengine/godot/files/10766388/ParticleTest.zip)
|
1.0
|
Instantiating scenes that contain CPUParticles2D node causes spikes in the profiler (not just the first time) - ### Godot version
3.5.2.rc2.official
### System information
Windows10 / GLES3
### Issue description
Instantiating scenes containing nodes of type CPUParticles2D causes spikes in the profiler:

Oddly the profiler only shows 0.05ms for script functions, while shoot projectile took 16.06ms, maybe this is just normal behavior, but it seems to cause stutter issues.
### Steps to reproduce
Instantiated scenes containing CPUParticles2D
### Minimal reproduction project
[ParticleTest.zip](https://github.com/godotengine/godot/files/10766388/ParticleTest.zip)
|
test
|
instantiating scenes that contain node causes spikes in the profiler not just the first time godot version official system information issue description instantiating scenes containing nodes of type causes spikes in the profiler oddly the profiler only shows for script functions while shoot projectile took maybe this is just normal behavior but it seems to cause stutter issues steps to reproduce instantiated scenes containing minimal reproduction project
| 1
|
279,360
| 24,219,689,463
|
IssuesEvent
|
2022-09-26 09:47:21
|
apache/rocketmq
|
https://api.github.com/repos/apache/rocketmq
|
closed
|
Make tests in ACL module pass on Windows
|
module/test
|
We have a few tests that do NOT work on Windows and MacOS. This ticket is used to track the fix of Windows compatibility issues.
|
1.0
|
Make tests in ACL module pass on Windows - We have a few tests that do NOT work on Windows and MacOS. This ticket is used to track the fix of Windows compatibility issues.
|
test
|
make tests in acl module pass on windows we have a few tests that do not work on windows and macos this ticket is used to track fix of windows compatibility issues
| 1
|
216,837
| 16,820,517,836
|
IssuesEvent
|
2021-06-17 12:37:44
|
chameleon-system/chameleon-system
|
https://api.github.com/repos/chameleon-system/chameleon-system
|
closed
|
`chameleon-system/chameleon-base` Tests fail on lowest dependencies
|
Status: Test
|
**Describe the bug**
When the tests run on Travis CI with the lowest dependencies (`--prefer-lowest`), the following test fails:
```
1) TIteratorTest::testCanBeConvertedToArrayUsingIteratorMethods
Error: Call to undefined method TIteratorTest::assertIsArray()
```
It uses `phpunit/phpunit` `~7.0` and I assume that that method was added later in the 7.x lifecycle.
**Affected version(s)**
Anything in 7.1 and master after chameleon-system/chameleon-base#526 was merged.
**To Reproduce**
1. Run `composer update --prefer-lowest` in the `chameleon-base` repository
2. Run phpunit tests
3. Observe error above
**Expected behavior**
No failure in testsuite
|
1.0
|
`chameleon-system/chameleon-base` Tests fail on lowest dependencies - **Describe the bug**
When the tests run on Travis CI with the lowest dependencies (`--prefer-lowest`), the following test fails:
```
1) TIteratorTest::testCanBeConvertedToArrayUsingIteratorMethods
Error: Call to undefined method TIteratorTest::assertIsArray()
```
It uses `phpunit/phpunit` `~7.0` and I assume that that method was added later in the 7.x lifecycle.
**Affected version(s)**
Anything in 7.1 and master after chameleon-system/chameleon-base#526 was merged.
**To Reproduce**
1. Run `composer update --prefer-lowest` in the `chameleon-base` repository
2. Run phpunit tests
3. Observe error above
**Expected behavior**
No failure in testsuite
|
test
|
chameleon system chameleon base tests fail on lowest dependencies describe the bug when tests in travisci are running with lowest dependencies prefer lowest then the following test fails titeratortest testcanbeconvertedtoarrayusingiteratormethods error call to undefined method titeratortest assertisarray it uses phpunit phpunit and i assume that that method was added later in the x lifecycle affected version s anything in and master after chameleon system chameleon base was merged to reproduce run composer update prefer lowest in the chameleon base repository run phpunit tests observe error above expected behavior no failure in testsuite
| 1
|
244,929
| 20,731,284,771
|
IssuesEvent
|
2022-03-14 09:41:08
|
scikit-learn/scikit-learn
|
https://api.github.com/repos/scikit-learn/scikit-learn
|
closed
|
Use modifiable global random state in tests
|
module:test-suite
|
As mentioned by @jnothman in https://github.com/scikit-learn/scikit-learn/issues/13846#issuecomment-494175027
> Relatedly, I proposed having a random_seed fixture that was globally set to different values on different testing runs. One benefit would be that we could easily distinguish those tests that are invariant under changing random seed from those that are brittle.
I think it would be a good idea. For instance, we could,
- [ ] create a global ~~auto-use~~ pytest fixture in `scikit-learn/conftest.py`,
```py
import os
import numpy as np
import pytest

@pytest.fixture(scope="session")
def pytest_rng():
    random_seed = int(os.environ.get('SKLEARN_TEST_RNG', 42))  # env values are strings
    return np.random.RandomState(random_seed)
```
- [ ] modify tests to use it, e.g.
```diff
- def test_something():
+ def test_something(pytest_rng):
-     rng = np.random.RandomState(0)
-     est = Estimator(random_state=rng)
+     est = Estimator(random_state=pytest_rng)
```
One issue is that global auto-use fixtures are a bit magical, but I'm hoping that naming it as `pytest_rng` it would be explicit enough.
*Edit:* updated to avoid using an auto-use fixture.
|
1.0
|
Use modifiable global random state in tests - As mentioned by @jnothman in https://github.com/scikit-learn/scikit-learn/issues/13846#issuecomment-494175027
> Relatedly, I proposed having a random_seed fixture that was globally set to different values on different testing runs. One benefit would be that we could easily distinguish those tests that are invariant under changing random seed from those that are brittle.
I think it would be a good idea. For instance, we could,
- [ ] create a global ~~auto-use~~ pytest fixture in `scikit-learn/conftest.py`,
```py
import os
import numpy as np
import pytest

@pytest.fixture(scope="session")
def pytest_rng():
    random_seed = int(os.environ.get('SKLEARN_TEST_RNG', 42))  # env values are strings
    return np.random.RandomState(random_seed)
```
- [ ] modify tests to use it, e.g.
```diff
- def test_something():
+ def test_something(pytest_rng):
-     rng = np.random.RandomState(0)
-     est = Estimator(random_state=rng)
+     est = Estimator(random_state=pytest_rng)
```
One issue is that global auto-use fixtures are a bit magical, but I'm hoping that naming it as `pytest_rng` it would be explicit enough.
*Edit:* updated to avoid using an auto-use fixture.
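For illustration, a test consuming the fixture might look like the sketch below. It is not from the issue: the seed-invariant property is invented, and it assumes the `pytest_rng` fixture above lives in `conftest.py`.
```python
import numpy as np

def test_centered_data_has_zero_mean(pytest_rng):
    # This property must hold for whichever seed the CI run picks.
    x = pytest_rng.normal(size=1000)
    assert np.isclose((x - x.mean()).mean(), 0.0, atol=1e-12)
```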
|
test
|
use modifiable global random state in tests as mentioned by jnothman in relatedly i proposed having a random seed fixture that was globally set to different values on different testing runs one benefit would be that we could easily distinguish those tests that are invariant under changing random seed from those that are brittle i think it would be a good idea for instance we could create a global auto use pytest fixture in scikit learn conftest py py pytest fixture scope session def pytest rng random seed os environ get sklearn test rng return np random randomstate random seed modify tests to use it e g diff def test something def test something pytest rng rng np random randomstate est estimator random state rng est estimator random state pytest rng one issue is that global auto use fixtures are a bit magical but i m hoping that naming it as pytest rng it would be explicit enough edit updated to avoid using an auto use fixture
| 1
|
51,607
| 21,724,066,089
|
IssuesEvent
|
2022-05-11 05:27:32
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
table data confidence info
|
triaged cxp doc-enhancement Pri2 forms-recognizer/subsvc applied-ai-services/svc
|
Custom models that are trained to extract table data don't return a confidence level. Is there a way to acquire this info? It would be great to see these details added to this page if available.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 35cb4999-9110-64a0-2361-9e434d6e3d54
* Version Independent ID: d840438a-0d38-f92b-d294-316fef55a819
* Content: [Interpret and improve model accuracy and analysis confidence scores - Azure Applied AI Services](https://docs.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/concept-accuracy-confidence#feedback)
* Content Source: [articles/applied-ai-services/form-recognizer/concept-accuracy-confidence.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/applied-ai-services/form-recognizer/concept-accuracy-confidence.md)
* Service: **applied-ai-services**
* Sub-service: **forms-recognizer**
* GitHub Login: @laujan
* Microsoft Alias: **lajanuar**
|
1.0
|
table data confidence info - Custom models that are trained to extract table data don't return a confidence level. Is there a way to acquire this info? It would be great to see these details added to this page if available.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 35cb4999-9110-64a0-2361-9e434d6e3d54
* Version Independent ID: d840438a-0d38-f92b-d294-316fef55a819
* Content: [Interpret and improve model accuracy and analysis confidence scores - Azure Applied AI Services](https://docs.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/concept-accuracy-confidence#feedback)
* Content Source: [articles/applied-ai-services/form-recognizer/concept-accuracy-confidence.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/applied-ai-services/form-recognizer/concept-accuracy-confidence.md)
* Service: **applied-ai-services**
* Sub-service: **forms-recognizer**
* GitHub Login: @laujan
* Microsoft Alias: **lajanuar**
|
non_test
|
table data confidence info custom models that are trained to extract table data don t return a confidence level is there a way to acquire this info it would be great to see these details added to this page if available document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service applied ai services sub service forms recognizer github login laujan microsoft alias lajanuar
| 0
|
283,061
| 8,713,526,758
|
IssuesEvent
|
2018-12-07 03:04:37
|
aowen87/TicketTester
|
https://api.github.com/repos/aowen87/TicketTester
|
closed
|
problem with build_visit building mpich and parallel at the same time.
|
bug crash likelihood medium priority reviewed severity high wrong results
|
Rick Angelini is trying to build a parallel version of visit on a workstation without mpi installed so he is having build_visit build mpich in addition to everything else. He normally sets PAR_COMPILER to build a parallel version of VisIt, but that isn't working in this case. Here is his e-mail.
OK - let's drop the --uintah flag for a second. Let's say that I want to
build mpich. I don't have any openmpi or mpich implementation on my
system.
So, I add "--parallel --mpich" to my build_visit command line so that it
will build mpich for me. I'm also supposed to set a PAR_COMPILER flag?
Presumably, I don't have a parallel implementation on my system yet (since mpich hasn't been built) so there's no mpicc, mpic++. So, I set the environment variable PAR_COMPILER=g++, and I get the same error as described below.
My trip down the rabbit hole continues! 8-)
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1707
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: problem with build_visit building mpich and parallel at the same time.
Assigned to: Cyrus Harrison
Category:
Target version: 2.7.3
Author: Eric Brugger
Start: 01/22/2014
Due date:
% Done: 0
Estimated time:
Created: 01/22/2014 04:27 pm
Updated: 05/30/2014 07:04 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.7.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Rick Angelini is trying to build a parallel version of visit on a workstation without mpi installed so he is having build_visit build mpich in addition to everything else. He normally sets PAR_COMPILER to build a parallel version of VisIt, but that isn't working in this case. Here is his e-mail.
OK - let's drop the --uintah flag for a second. Let's say that I want to
build mpich. I don't have any openmpi or mpich implementation on my
system.
So, I add "--parallel --mpich" to my build_visit command line so that it
will build mpich for me. I'm also supposed to set a PAR_COMPILER flag?
Presumably, I don't have a parallel implementation on my system yet (since mpich hasn't been built) so there's no mpicc, mpic++. So, I set the environment variable PAR_COMPILER=g++, and I get the same error as described below.
My trip down the rabbit hole continues! 8-)
Comments:
If I ask that MPICH be included in the package I still need to set the MPICH_DIR in my cmake file. That is, when SET is done in a machine.cmake file I must also set the path: VISIT_OPTION_DEFAULT(VISIT_MPICH_DIR ${VISITHOME}/mpich/3.0.1/${VISITARCH}) so that the package gets the proper dir. This sort of makes sense and is related to thinking that if the PAR_COMPILER was set then the MPICH_DIR would be set. But that is not the case.
we are trying to resolve this for 2.7.3 ...
we are working on setting PAR_COMPILER correctly if mpich is enabled; from this, PAR_INCLUDE and PAR_LIB will also be extracted so that uintah, adios, ice-t, etc. can easily pick up mpich as the parallel solution.
this was resolved as part of #1805. When mpich is enabled, it becomes VISIT_MPI_COMPILER, even if --parallel isn't passed.
|
1.0
|
problem with build_visit building mpich and parallel at the same time. - Rick Angelini is trying to build a parallel version of visit on a workstation without mpi installed so he is having build_visit build mpich in addition to everything else. He normally sets PAR_COMPILER to build a parallel version of VisIt, but that isn't working in this case. Here is his e-mail.
OK - let's drop the --uintah flag for a second. Let's say that I want to
build mpich. I don't have any openmpi or mpich implementation on my
system.
So, I add "--parallel --mpich" to my build_visit command line so that it
will build mpich for me. I'm also supposed to set a PAR_COMPILER flag?
Presumably, I don't have a parallel implementation on my system yet (since mpich hasn't been built) so there's no mpicc, mpic++. So, I set the environment variable PAR_COMPILER=g++, and I get the same error as described below.
My trip down the rabbit hole continues! 8-)
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1707
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: problem with build_visit building mpich and parallel at the same time.
Assigned to: Cyrus Harrison
Category:
Target version: 2.7.3
Author: Eric Brugger
Start: 01/22/2014
Due date:
% Done: 0
Estimated time:
Created: 01/22/2014 04:27 pm
Updated: 05/30/2014 07:04 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.7.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Rick Angelini is trying to build a parallel version of visit on a workstation without mpi installed so he is having build_visit build mpich in addition to everything else. He normally sets PAR_COMPILER to build a parallel version of VisIt, but that isn't working in this case. Here is his e-mail.
OK - let's drop the --uintah flag for a second. Let's say that I want to
build mpich. I don't have any openmpi or mpich implementation on my
system.
So, I add "--parallel --mpich" to my build_visit command line so that it
will build mpich for me. I'm also supposed to set a PAR_COMPILER flag?
Presumably, I don't have a parallel implementation on my system yet (since mpich hasn't been built) so there's no mpicc, mpic++. So, I set the environment variable PAR_COMPILER=g++, and I get the same error as described below.
My trip down the rabbit hole continues! 8-)
Comments:
If I ask that MPICH be included in the package I still need to set the MPICH_DIR in my cmake file. That is, when SET is done in a machine.cmake file I must also set the path: VISIT_OPTION_DEFAULT(VISIT_MPICH_DIR ${VISITHOME}/mpich/3.0.1/${VISITARCH}) so that the package gets the proper dir. This sort of makes sense and is related to thinking that if the PAR_COMPILER was set then the MPICH_DIR would be set. But that is not the case.
we are trying to resolve this for 2.7.3 ...
we are working on setting PAR_COMPILER correctly if mpich is enabled; from this, PAR_INCLUDE and PAR_LIB will also be extracted so that uintah, adios, ice-t, etc. can easily pick up mpich as the parallel solution.
this was resolved as part of #1805. When mpich is enabled, it becomes VISIT_MPI_COMPILER, even if --parallel isn't passed.
|
non_test
|
problem with build visit building mpich and parallel at the same time rick angelini is trying to build a parallel version of visit on a workstation without mpi installed so he is having build visit build mpich in addition to everything else he normally sets par compiler to build a parallel version of visit but that isn t working in this case here is his e mail ok let s drop the uintah flag for a second let s say that i want to build mpich i don t have any openmpi or mpich implementation on my system so i add parallel mpich to my build visit command line so that it will build mpich for me i m also supposed to set a par compiler flag presumably i don t have a parallel implementation on my system yet since mpich hasn t been built so there s no mpicc mpic so i set the environment variable par compiler g and i get the same error as described below my trip down the rabbit hole continues redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority high subject problem with build visit building mpich and parallel at the same time assigned to cyrus harrison category target version author eric brugger start due date done estimated time created pm updated pm likelihood occasional severity crash wrong results found in version impact expected use os all support group any description rick angelini is trying to build a parallel version of visit on a workstation without mpi installed so he is having build visit build mpich in addition to everything else he normally sets par compiler to build a parallel version of visit but that isn t working in this case here is his e mail ok let s drop the uintah flag for a second let s say that i want to build mpich i don t have any openmpi or mpich implementation on my system so i add parallel mpich to my build visit command line so that it will build mpich for me i m also supposed to set a par compiler flag presumably i don t have a parallel implementation on my system yet since mpich hasn t been built so there s no mpicc mpic so i set the environment variable par compiler g and i get the same error as described below my trip down the rabbit hole continues comments if i ask that mpich be included in the package i still need to set the mpich dir in my cmake file that is when set is done in a machine cmake file i must also set the path visit option default visit mpich dir visithome mpich visitarch so that the package gets the proper dir this sort of makes sense and is related to thinking that if the par compiler was set then the mpich dir would be set but that is not the case we are trying to resolve this for we are working on setting par compiler correctly if mpich is enabled from this par include and par lib will also be extracted so that uintah adios ice t etc can easily pickup mpich as the parallel solution this was resolved as part of when mpich is enabled it becomes visit mpi compiler even if parallel isnt passed
| 0
|
71,458
| 8,657,041,522
|
IssuesEvent
|
2018-11-27 20:09:00
|
cityofaustin/techstack
|
https://api.github.com/repos/cityofaustin/techstack
|
closed
|
Janis v2 Design: breakpoint exploration
|
Janis 2.0 Resident Interface Size: M Team: Design + Research
|
Work on meeting USWDS 2.0 [breakpoint criteria](https://v2.designsystem.digital.gov/utilities/layout-grid/):
- [ ] Mobile large ≥480px
- [ ] Tablet ≥640px
- [ ] Desktop ≥ 1024px
|
1.0
|
Janis v2 Design: breakpoint exploration - Work on meeting USWDS 2.0 [breakpoint criteria](https://v2.designsystem.digital.gov/utilities/layout-grid/):
- [ ] Mobile large ≥480px
- [ ] Tablet ≥640px
- [ ] Desktop ≥ 1024px
|
non_test
|
janis design breakpoint exploration work on meeting uswds mobile large ≥ tablet ≥ desktop ≥
| 0
|
199,232
| 15,029,950,506
|
IssuesEvent
|
2021-02-02 06:35:56
|
openvinotoolkit/cvat
|
https://api.github.com/repos/openvinotoolkit/cvat
|
closed
|
Issue with point region doesn't work in Firefox
|
bug need test
|
<!---
Copyright (C) 2020 Intel Corporation
SPDX-License-Identifier: MIT
-->
### My actions before raising this issue
- [x] Read/searched [the docs](https://github.com/opencv/cvat/tree/master#documentation)
- [x] Searched [past issues](/issues)
<!--- Provide a general summary of the issue in the Title above -->
### Expected Behaviour
Can create issue with point region
### Current Behaviour
Can't create issue with point region
### Possible Solution
Investigate & fix.
### Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to
reproduce this bug. Include code to reproduce, if relevant -->
1. In firefox open a task
1. Go to review mode
1. Try to create issue using point
1. Canvas doesn't react
### Context
<!--- How has this issue affected you? What are you trying to accomplish?
Providing context helps us come up with a solution that is most useful in
the real world -->
### Your Environment
<!--- Include as many relevant details about the environment you experienced
the bug in -->
- Git hash commit (`git log -1`): a04d95d
- Docker version `docker version` (e.g. Docker 17.0.05):
- Are you using Docker Swarm or Kubernetes?
- Operating System and version (e.g. Linux, Windows, MacOS):
- Code example or link to GitHub repo or gist to reproduce problem:
- Other diagnostic information / logs:
<details>
<summary>Logs from `cvat` container</summary>
</details>
### Next steps
You may [join our Gitter](https://gitter.im/opencv-cvat/public) channel for community support.
|
1.0
|
Issue with point region doesn't work in Firefox - <!---
Copyright (C) 2020 Intel Corporation
SPDX-License-Identifier: MIT
-->
### My actions before raising this issue
- [x] Read/searched [the docs](https://github.com/opencv/cvat/tree/master#documentation)
- [x] Searched [past issues](/issues)
<!--- Provide a general summary of the issue in the Title above -->
### Expected Behaviour
Can create issue with point region
### Current Behaviour
Can't create issue with point region
### Possible Solution
Investigate & fix.
### Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to
reproduce this bug. Include code to reproduce, if relevant -->
1. In firefox open a task
1. Go to review mode
1. Try to create issue using point
1. Canvas doesn't react
### Context
<!--- How has this issue affected you? What are you trying to accomplish?
Providing context helps us come up with a solution that is most useful in
the real world -->
### Your Environment
<!--- Include as many relevant details about the environment you experienced
the bug in -->
- Git hash commit (`git log -1`): a04d95d
- Docker version `docker version` (e.g. Docker 17.0.05):
- Are you using Docker Swarm or Kubernetes?
- Operating System and version (e.g. Linux, Windows, MacOS):
- Code example or link to GitHub repo or gist to reproduce problem:
- Other diagnostic information / logs:
<details>
<summary>Logs from `cvat` container</summary>
</details>
### Next steps
You may [join our Gitter](https://gitter.im/opencv-cvat/public) channel for community support.
|
test
|
issue with point region doesn t work in firefox copyright c intel corporation spdx license identifier mit my actions before raising this issue read searched searched issues expected behaviour can create issue with point region current behaviour can t create issue with point region possible solution investigate fix steps to reproduce for bugs provide a link to a live example or an unambiguous set of steps to reproduce this bug include code to reproduce if relevant in firefox open a task go to review mode try to create issue using point canvas doesn t react context how has this issue affected you what are you trying to accomplish providing context helps us come up with a solution that is most useful in the real world your environment include as many relevant details about the environment you experienced the bug in git hash commit git log docker version docker version e g docker are you using docker swarm or kubernetes operating system and version e g linux windows macos code example or link to github repo or gist to reproduce problem other diagnostic information logs logs from cvat container next steps you may channel for community support
| 1
|
38,051
| 5,165,256,517
|
IssuesEvent
|
2017-01-17 13:11:54
|
difi/move-integrasjonspunkt
|
https://api.github.com/repos/difi/move-integrasjonspunkt
|
closed
|
Update SR according to updated requirements
|
test
|
We want to return ServiceRecords according to the rules: if ORGL, the ServiceRecord for message mediation (meldingsformidling) is used; if "privat" (private), the ServiceRecord for mail to businesses (post til virksomheter) is used.
|
1.0
|
Update SR according to updated requirements - We want to return ServiceRecords according to the rules: if ORGL, the ServiceRecord for message mediation (meldingsformidling) is used; if "privat" (private), the ServiceRecord for mail to businesses (post til virksomheter) is used.
|
test
|
update sr according to updated requirements we want to return servicerecords according to the rules if orgl the servicerecord for message mediation is used if privat the servicerecord for mail to businesses is used
| 1
|
113,889
| 17,171,053,450
|
IssuesEvent
|
2021-07-15 04:32:45
|
samq-ghdemo/JavaDemo
|
https://api.github.com/repos/samq-ghdemo/JavaDemo
|
opened
|
CVE-2016-2510 (High) detected in bsh-core-2.0b4.jar
|
security vulnerability
|
## CVE-2016-2510 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bsh-core-2.0b4.jar</b></p></summary>
<p>BeanShell core</p>
<p>Path to dependency file: JavaDemo/pom.xml</p>
<p>Path to vulnerable library: JavaDemo/target/easybuggy-1-SNAPSHOT/WEB-INF/lib/bsh-core-2.0b4.jar,/home/wss-scanner/.m2/repository/org/beanshell/bsh-core/2.0b4/bsh-core-2.0b4.jar</p>
<p>
Dependency Hierarchy:
- :x: **bsh-core-2.0b4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samq-ghdemo/JavaDemo/commit/1c7518fe49deed7c168634e0cf0ce1754e1c2c6e">1c7518fe49deed7c168634e0cf0ce1754e1c2c6e</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
BeanShell (bsh) before 2.0b6, when included on the classpath by an application that uses Java serialization or XStream, allows remote attackers to execute arbitrary code via crafted serialized data, related to XThis.Handler.
<p>Publish Date: 2016-04-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-2510>CVE-2016-2510</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-2510">https://nvd.nist.gov/vuln/detail/CVE-2016-2510</a></p>
<p>Release Date: 2016-04-07</p>
<p>Fix Resolution: 2.0b6</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.beanshell","packageName":"bsh-core","packageVersion":"2.0b4","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.beanshell:bsh-core:2.0b4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.0b6"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2016-2510","vulnerabilityDetails":"BeanShell (bsh) before 2.0b6, when included on the classpath by an application that uses Java serialization or XStream, allows remote attackers to execute arbitrary code via crafted serialized data, related to XThis.Handler.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-2510","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2016-2510 (High) detected in bsh-core-2.0b4.jar - ## CVE-2016-2510 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bsh-core-2.0b4.jar</b></p></summary>
<p>BeanShell core</p>
<p>Path to dependency file: JavaDemo/pom.xml</p>
<p>Path to vulnerable library: JavaDemo/target/easybuggy-1-SNAPSHOT/WEB-INF/lib/bsh-core-2.0b4.jar,/home/wss-scanner/.m2/repository/org/beanshell/bsh-core/2.0b4/bsh-core-2.0b4.jar</p>
<p>
Dependency Hierarchy:
- :x: **bsh-core-2.0b4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samq-ghdemo/JavaDemo/commit/1c7518fe49deed7c168634e0cf0ce1754e1c2c6e">1c7518fe49deed7c168634e0cf0ce1754e1c2c6e</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
BeanShell (bsh) before 2.0b6, when included on the classpath by an application that uses Java serialization or XStream, allows remote attackers to execute arbitrary code via crafted serialized data, related to XThis.Handler.
<p>Publish Date: 2016-04-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-2510>CVE-2016-2510</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-2510">https://nvd.nist.gov/vuln/detail/CVE-2016-2510</a></p>
<p>Release Date: 2016-04-07</p>
<p>Fix Resolution: 2.0b6</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.beanshell","packageName":"bsh-core","packageVersion":"2.0b4","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.beanshell:bsh-core:2.0b4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.0b6"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2016-2510","vulnerabilityDetails":"BeanShell (bsh) before 2.0b6, when included on the classpath by an application that uses Java serialization or XStream, allows remote attackers to execute arbitrary code via crafted serialized data, related to XThis.Handler.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-2510","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve high detected in bsh core jar cve high severity vulnerability vulnerable library bsh core jar beanshell core path to dependency file javademo pom xml path to vulnerable library javademo target easybuggy snapshot web inf lib bsh core jar home wss scanner repository org beanshell bsh core bsh core jar dependency hierarchy x bsh core jar vulnerable library found in head commit a href found in base branch main vulnerability details beanshell bsh before when included on the classpath by an application that uses java serialization or xstream allows remote attackers to execute arbitrary code via crafted serialized data related to xthis handler publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org beanshell bsh core isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails beanshell bsh before when included on the classpath by an application that uses java serialization or xstream allows remote attackers to execute arbitrary code via crafted serialized data related to xthis handler vulnerabilityurl
| 0
|
491
| 2,502,241,901
|
IssuesEvent
|
2015-01-09 05:55:01
|
fossology/fossology
|
https://api.github.com/repos/fossology/fossology
|
opened
|
Copyright agent display limit
|
Category: Copyright Component: Rank Component: Tester Priority: High Status: New Tracker: Enhancement
|
---
Author Name: **Bob Gobeille**
Original Redmine Issue: 6159, http://www.fossology.org/issues/6159
Original Date: 2013/12/03
---
The copyright agent (hist.php) limits the number of rows that can be displayed. The user will see
```
Too many rows to display
```
This limit is set by $MaxTreeRecs in hist.php. A better solution would be to page the output.
|
1.0
|
Copyright agent display limit - ---
Author Name: **Bob Gobeille**
Original Redmine Issue: 6159, http://www.fossology.org/issues/6159
Original Date: 2013/12/03
---
The copyright agent (hist.php) limits the number of rows that can be displayed. The user will see
```
Too many rows to display
```
This limit is set by $MaxTreeRecs in hist.php. A better solution would be to page the output.
|
test
|
copyright agent display limit author name bob gobeille original redmine issue original date copyright hist php limits the number of rows that can be displayed the user will see too many rows to display this limit is set by maxtreerecs in hist php a better solution would be to page the output
| 1
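Editor's note: the record above suggests paging the copyright hits instead of capping them at $MaxTreeRecs. hist.php itself is PHP; as a language-neutral illustration, here is a minimal JDBC sketch of offset-based paging, where the table and column names (`copyright`, `content`, `upload_fk`) are assumptions rather than FOSSology's actual schema.
```
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CopyrightPager {
    private static final int PAGE_SIZE = 100; // rows per page instead of a hard cap

    /** Prints one page of copyright hits; pageNo is zero-based. */
    public static void printPage(Connection conn, long uploadId, int pageNo) throws Exception {
        String sql = "SELECT content FROM copyright WHERE upload_fk = ? "
                   + "ORDER BY content LIMIT ? OFFSET ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, uploadId);
            ps.setInt(2, PAGE_SIZE);
            ps.setInt(3, pageNo * PAGE_SIZE);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("content"));
                }
            }
        }
    }
}
```
With paging, every row stays reachable and the "Too many rows to display" dead end disappears; keyset pagination would serve deep pages more efficiently than OFFSET.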
|
78,567
| 7,654,447,598
|
IssuesEvent
|
2018-05-10 09:17:42
|
citusdata/citus
|
https://api.github.com/repos/citusdata/citus
|
closed
|
Parallel regression tests might lead travis to fail
|
regression tests
|
We've had a few suspicious regression test failures on Travis. It looks like we're running some `INSERT`s concurrently with `SELECT`s in different tests (e.g., `with_basics.sql` and `with_prepare.sql`).
We should simply not run such tests in parallel.
An example failure we've seen:
```
cat regression.diffs
--- /home/travis/build/citusdata/citus-enterprise/src/test/regress/expected/with_basics.out 2018-03-23 18:30:04.105135408 +0000
+++ /home/travis/build/citusdata/citus-enterprise/src/test/regress/results/with_basics.out 2018-03-23 18:33:06.083393816 +0000
@@ -737,26 +737,26 @@
WITH cte AS (
SELECT * FROM users_table
)
SELECT user_id, max(value_1) as value_1 FROM cte GROUP BY 1;
WITH cte_user AS (
SELECT basic_view.user_id,events_table.value_2 FROM basic_view join events_table on (basic_view.user_id = events_table.user_id)
)
SELECT user_id, sum(value_2) FROM cte_user GROUP BY 1 ORDER BY 1, 2;
user_id | sum
---------+------
- 1 | 294
- 2 | 1026
- 3 | 782
- 4 | 943
+ 1 | 336
+ 2 | 1083
+ 3 | 828
+ 4 | 984
5 | 806
- 6 | 220
+ 6 | 242
(6 rows)
```
|
1.0
|
Parallel regression tests might lead travis to fail - We've had a few suspicious regression test failures on Travis. It looks like we're running some `INSERT`s concurrently with `SELECT`s in different tests (e.g., `with_basics.sql` and `with_prepare.sql`).
We should simply not run such tests in parallel.
An example failure we've seen:
```
cat regression.diffs
--- /home/travis/build/citusdata/citus-enterprise/src/test/regress/expected/with_basics.out 2018-03-23 18:30:04.105135408 +0000
+++ /home/travis/build/citusdata/citus-enterprise/src/test/regress/results/with_basics.out 2018-03-23 18:33:06.083393816 +0000
@@ -737,26 +737,26 @@
WITH cte AS (
SELECT * FROM users_table
)
SELECT user_id, max(value_1) as value_1 FROM cte GROUP BY 1;
WITH cte_user AS (
SELECT basic_view.user_id,events_table.value_2 FROM basic_view join events_table on (basic_view.user_id = events_table.user_id)
)
SELECT user_id, sum(value_2) FROM cte_user GROUP BY 1 ORDER BY 1, 2;
user_id | sum
---------+------
- 1 | 294
- 2 | 1026
- 3 | 782
- 4 | 943
+ 1 | 336
+ 2 | 1083
+ 3 | 828
+ 4 | 984
5 | 806
- 6 | 220
+ 6 | 242
(6 rows)
```
|
test
|
parallel regression tests might lead travis to fail we ve had a few suspicious regression test failures on travis it looks like we re running some insert s concurrently with select s in different tests e g with basics sql and with prepare sql we should simply not run such tests in parallel an example failure we ve seen cat regression diffs home travis build citusdata citus enterprise src test regress expected with basics out home travis build citusdata citus enterprise src test regress results with basics out with cte as select from users table select user id max value as value from cte group by with cte user as select basic view user id events table value from basic view join events table on basic view user id events table user id select user id sum value from cte user group by order by user id sum rows
| 1
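Editor's note: the flake above boils down to one test's `INSERT`s landing while another test's aggregate `SELECT` is running, so the "expected" sums shift between runs. The following is a minimal two-thread JDBC sketch of that race, not citus or pg_regress code; the connection URL, row counts, and the reuse of the `users_table` name are illustrative assumptions.
```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConcurrentAggregateFlake {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/regress"; // hypothetical DSN

        // Writer: keeps inserting rows while the "test" below aggregates.
        Thread writer = new Thread(() -> {
            try (Connection c = DriverManager.getConnection(url);
                 Statement s = c.createStatement()) {
                for (int i = 0; i < 1000; i++) {
                    s.executeUpdate("INSERT INTO users_table(user_id, value_2) VALUES (1, 1)");
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
        writer.start();

        // Reader: the aggregate depends on how many inserts landed first,
        // so the output differs from run to run -- exactly the diff in the issue.
        try (Connection c = DriverManager.getConnection(url);
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery(
                 "SELECT user_id, sum(value_2) FROM users_table GROUP BY 1 ORDER BY 1")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1) + " | " + rs.getLong(2));
            }
        }
        writer.join();
    }
}
```
Serializing the two tests, as the issue proposes, removes the race entirely, which is cheaper than trying to make the expected output order-independent.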
|
60,543
| 6,705,570,431
|
IssuesEvent
|
2017-10-12 01:10:33
|
MyersResearchGroup/iBioSim
|
https://api.github.com/repos/MyersResearchGroup/iBioSim
|
closed
|
Duplicate Species Ids
|
BUG Needs Testing
|
iBioSim Version 2.9.6
Operating system: Mac OS X
Bug reported by: myers@ece.utah.edu
Description:
If you annotate two species and/or promoters with the same component, you can get duplicate ids. Therefore, you need to first check that the id is not in use, and then set the id. If it is in use, you need to make the id unique, perhaps by appending an underscore (see the sketch after this record).
Stack trace:
java.lang.IllegalArgumentException: org.sbml.jsbml.IdentifierException: Cannot set duplicate meta identifier BetI_protein for species.
at org.sbml.jsbml.AbstractSBase.setId(AbstractSBase.java:3093)
at edu.utah.ece.async.ibiosim.dataModels.biomodel.parser.BioModel.changeSpeciesId(BioModel.java:2245)
at edu.utah.ece.async.ibiosim.gui.modelEditor.sbmlcore.SpeciesPanel.handlePanelData(SpeciesPanel.java:929)
at edu.utah.ece.async.ibiosim.gui.modelEditor.sbmlcore.SpeciesPanel.constructor(SpeciesPanel.java:630)
at edu.utah.ece.async.ibiosim.gui.modelEditor.sbmlcore.SpeciesPanel.<init>(SpeciesPanel.java:132)
at edu.utah.ece.async.ibiosim.gui.modelEditor.schematic.ModelEditor.launchSpeciesPanel(ModelEditor.java:2546)
at edu.utah.ece.async.ibiosim.gui.modelEditor.schematic.Schematic.bringUpEditorForCell(Schematic.java:2355)
at edu.utah.ece.async.ibiosim.gui.modelEditor.schematic.Schematic$9.mouseReleased(Schematic.java:1165)
at java.awt.AWTEventMulticaster.mouseReleased(AWTEventMulticaster.java:290)
at java.awt.AWTEventMulticaster.mouseReleased(AWTEventMulticaster.java:289)
at java.awt.Component.processMouseEvent(Component.java:6533)
at javax.swing.JComponent.processMouseEvent(JComponent.java:3324)
at java.awt.Component.processEvent(Component.java:6298)
at java.awt.Container.processEvent(Container.java:2236)
at java.awt.Component.dispatchEventImpl(Component.java:4889)
at java.awt.Container.dispatchEventImpl(Container.java:2294)
at java.awt.Component.dispatchEvent(Component.java:4711)
at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4888)
at java.awt.LightweightDispatcher.processMouseEvent(Container.java:4525)
at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4466)
at java.awt.Container.dispatchEventImpl(Container.java:2280)
at java.awt.Window.dispatchEventImpl(Window.java:2746)
at java.awt.Component.dispatchEvent(Component.java:4711)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:758)
at java.awt.EventQueue.access$500(EventQueue.java:97)
at java.awt.EventQueue$3.run(EventQueue.java:709)
at java.awt.EventQueue$3.run(EventQueue.java:703)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:80)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:90)
at java.awt.EventQueue$4.run(EventQueue.java:731)
at java.awt.EventQueue$4.run(EventQueue.java:729)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:80)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:728)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:201)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:116)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:105)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:82)
Caused by: org.sbml.jsbml.IdentifierException: Cannot set duplicate meta identifier BetI_protein for species.
at org.sbml.jsbml.AbstractSBase.setId(AbstractSBase.java:3091)
... 40 more
|
1.0
|
Duplicate Species Ids - iBioSim Version 2.9.6
Operating system: Mac OS X
Bug reported by: myers@ece.utah.edu
Description:
If you annotate two species and/or promoters with the same component, you can get duplicate ids. Therefore, you need to first check that the id is not in use, and then set the id. If it is in use, you need to make the id unique, perhaps by appending an underscore (see the sketch after this record).
Stack trace:
java.lang.IllegalArgumentException: org.sbml.jsbml.IdentifierException: Cannot set duplicate meta identifier BetI_protein for species.
at org.sbml.jsbml.AbstractSBase.setId(AbstractSBase.java:3093)
at edu.utah.ece.async.ibiosim.dataModels.biomodel.parser.BioModel.changeSpeciesId(BioModel.java:2245)
at edu.utah.ece.async.ibiosim.gui.modelEditor.sbmlcore.SpeciesPanel.handlePanelData(SpeciesPanel.java:929)
at edu.utah.ece.async.ibiosim.gui.modelEditor.sbmlcore.SpeciesPanel.constructor(SpeciesPanel.java:630)
at edu.utah.ece.async.ibiosim.gui.modelEditor.sbmlcore.SpeciesPanel.<init>(SpeciesPanel.java:132)
at edu.utah.ece.async.ibiosim.gui.modelEditor.schematic.ModelEditor.launchSpeciesPanel(ModelEditor.java:2546)
at edu.utah.ece.async.ibiosim.gui.modelEditor.schematic.Schematic.bringUpEditorForCell(Schematic.java:2355)
at edu.utah.ece.async.ibiosim.gui.modelEditor.schematic.Schematic$9.mouseReleased(Schematic.java:1165)
at java.awt.AWTEventMulticaster.mouseReleased(AWTEventMulticaster.java:290)
at java.awt.AWTEventMulticaster.mouseReleased(AWTEventMulticaster.java:289)
at java.awt.Component.processMouseEvent(Component.java:6533)
at javax.swing.JComponent.processMouseEvent(JComponent.java:3324)
at java.awt.Component.processEvent(Component.java:6298)
at java.awt.Container.processEvent(Container.java:2236)
at java.awt.Component.dispatchEventImpl(Component.java:4889)
at java.awt.Container.dispatchEventImpl(Container.java:2294)
at java.awt.Component.dispatchEvent(Component.java:4711)
at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4888)
at java.awt.LightweightDispatcher.processMouseEvent(Container.java:4525)
at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4466)
at java.awt.Container.dispatchEventImpl(Container.java:2280)
at java.awt.Window.dispatchEventImpl(Window.java:2746)
at java.awt.Component.dispatchEvent(Component.java:4711)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:758)
at java.awt.EventQueue.access$500(EventQueue.java:97)
at java.awt.EventQueue$3.run(EventQueue.java:709)
at java.awt.EventQueue$3.run(EventQueue.java:703)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:80)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:90)
at java.awt.EventQueue$4.run(EventQueue.java:731)
at java.awt.EventQueue$4.run(EventQueue.java:729)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:80)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:728)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:201)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:116)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:105)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:82)
Caused by: org.sbml.jsbml.IdentifierException: Cannot set duplicate meta identifier BetI_protein for species.
at org.sbml.jsbml.AbstractSBase.setId(AbstractSBase.java:3091)
... 40 more
|
test
|
duplicate species ids ibiosim version operating system mac os x bug reported by myers ece utah edu description if you annotate two species and or promoters with the same component you can get duplicate ids therefore you need to first check that the id is not in use and then set the id if it is in use you need to make the id unique perhaps adding underscore stack trace java lang illegalargumentexception org sbml jsbml identifierexception cannot set duplicate meta identifier beti protein for species at org sbml jsbml abstractsbase setid abstractsbase java at edu utah ece async ibiosim datamodels biomodel parser biomodel changespeciesid biomodel java at edu utah ece async ibiosim gui modeleditor sbmlcore speciespanel handlepaneldata speciespanel java at edu utah ece async ibiosim gui modeleditor sbmlcore speciespanel constructor speciespanel java at edu utah ece async ibiosim gui modeleditor sbmlcore speciespanel speciespanel java at edu utah ece async ibiosim gui modeleditor schematic modeleditor launchspeciespanel modeleditor java at edu utah ece async ibiosim gui modeleditor schematic schematic bringupeditorforcell schematic java at edu utah ece async ibiosim gui modeleditor schematic schematic mousereleased schematic java at java awt awteventmulticaster mousereleased awteventmulticaster java at java awt awteventmulticaster mousereleased awteventmulticaster java at java awt component processmouseevent component java at javax swing jcomponent processmouseevent jcomponent java at java awt component processevent component java at java awt container processevent container java at java awt component dispatcheventimpl component java at java awt container dispatcheventimpl container java at java awt component dispatchevent component java at java awt lightweightdispatcher retargetmouseevent container java at java awt lightweightdispatcher processmouseevent container java at java awt lightweightdispatcher dispatchevent container java at java awt container dispatcheventimpl container java at java awt window dispatcheventimpl window java at java awt component dispatchevent component java at java awt eventqueue dispatcheventimpl eventqueue java at java awt eventqueue access eventqueue java at java awt eventqueue run eventqueue java at java awt eventqueue run eventqueue java at java security accesscontroller doprivileged native method at java security protectiondomain javasecurityaccessimpl dointersectionprivilege protectiondomain java at java security protectiondomain javasecurityaccessimpl dointersectionprivilege protectiondomain java at java awt eventqueue run eventqueue java at java awt eventqueue run eventqueue java at java security accesscontroller doprivileged native method at java security protectiondomain javasecurityaccessimpl dointersectionprivilege protectiondomain java at java awt eventqueue dispatchevent eventqueue java at java awt eventdispatchthread pumponeeventforfilters eventdispatchthread java at java awt eventdispatchthread pumpeventsforfilter eventdispatchthread java at java awt eventdispatchthread pumpeventsforhierarchy eventdispatchthread java at java awt eventdispatchthread pumpevents eventdispatchthread java at java awt eventdispatchthread pumpevents eventdispatchthread java at java awt eventdispatchthread run eventdispatchthread java caused by org sbml jsbml identifierexception cannot set duplicate meta identifier beti protein for species at org sbml jsbml abstractsbase setid abstractsbase java more
| 1
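Editor's note: the fix described in the record above is check-then-suffix before calling setId. Below is a minimal sketch of that helper; the `Set<String>` of ids in use stands in for whatever lookup the SBML model provides, and the method name is hypothetical rather than iBioSim's API.
```
import java.util.Set;

public class IdUtil {
    /**
     * Returns candidate unchanged if it is unused; otherwise appends
     * underscores until it no longer collides,
     * e.g. BetI_protein -> BetI_protein_.
     */
    public static String uniqueId(String candidate, Set<String> idsInUse) {
        String id = candidate;
        while (idsInUse.contains(id)) {
            id = id + "_"; // keep suffixing until the id is free
        }
        return id;
    }
}
```
Calling something like this before `setId` would avoid the `IdentifierException` in the stack trace above, since the duplicate meta identifier is rejected only at set time.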
|
91,420
| 15,856,407,686
|
IssuesEvent
|
2021-04-08 02:16:17
|
alonro/ASOS
|
https://api.github.com/repos/alonro/ASOS
|
opened
|
WS-2020-0218 (High) detected in merge-1.2.1.tgz
|
security vulnerability
|
## WS-2020-0218 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>merge-1.2.1.tgz</b></p></summary>
<p>Merge multiple objects into one, optionally creating a new cloned object. Similar to the jQuery.extend but more flexible. Works in Node.js and the browser.</p>
<p>Library home page: <a href="https://registry.npmjs.org/merge/-/merge-1.2.1.tgz">https://registry.npmjs.org/merge/-/merge-1.2.1.tgz</a></p>
<p>Path to dependency file: /ASOS/package.json</p>
<p>Path to vulnerable library: ASOS/node_modules/merge/package.json</p>
<p>
Dependency Hierarchy:
- react-native-0.57.1.tgz (Root Library)
- metro-0.45.6.tgz
- jest-haste-map-23.5.0.tgz
- sane-2.5.2.tgz
- exec-sh-0.2.2.tgz
- :x: **merge-1.2.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Prototype Pollution vulnerability was found in merge before 2.1.0 via the merge.recursive function. It can be tricked into adding or modifying properties of the Object prototype. These properties will be present on all objects.
<p>Publish Date: 2020-10-09
<p>URL: <a href=https://github.com/yeikos/js.merge/pull/38>WS-2020-0218</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/yeikos/js.merge/pull/38">https://github.com/yeikos/js.merge/pull/38</a></p>
<p>Release Date: 2020-10-09</p>
<p>Fix Resolution: merge - 2.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2020-0218 (High) detected in merge-1.2.1.tgz - ## WS-2020-0218 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>merge-1.2.1.tgz</b></p></summary>
<p>Merge multiple objects into one, optionally creating a new cloned object. Similar to the jQuery.extend but more flexible. Works in Node.js and the browser.</p>
<p>Library home page: <a href="https://registry.npmjs.org/merge/-/merge-1.2.1.tgz">https://registry.npmjs.org/merge/-/merge-1.2.1.tgz</a></p>
<p>Path to dependency file: /ASOS/package.json</p>
<p>Path to vulnerable library: ASOS/node_modules/merge/package.json</p>
<p>
Dependency Hierarchy:
- react-native-0.57.1.tgz (Root Library)
- metro-0.45.6.tgz
- jest-haste-map-23.5.0.tgz
- sane-2.5.2.tgz
- exec-sh-0.2.2.tgz
- :x: **merge-1.2.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Prototype Pollution vulnerability was found in merge before 2.1.0 via the merge.recursive function. It can be tricked into adding or modifying properties of the Object prototype. These properties will be present on all objects.
<p>Publish Date: 2020-10-09
<p>URL: <a href=https://github.com/yeikos/js.merge/pull/38>WS-2020-0218</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/yeikos/js.merge/pull/38">https://github.com/yeikos/js.merge/pull/38</a></p>
<p>Release Date: 2020-10-09</p>
<p>Fix Resolution: merge - 2.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
ws high detected in merge tgz ws high severity vulnerability vulnerable library merge tgz merge multiple objects into one optionally creating a new cloned object similar to the jquery extend but more flexible works in node js and the browser library home page a href path to dependency file asos package json path to vulnerable library asos node modules merge package json dependency hierarchy react native tgz root library metro tgz jest haste map tgz sane tgz exec sh tgz x merge tgz vulnerable library vulnerability details a prototype pollution vulnerability was found in merge before via the merge recursive function it can be tricked into adding or modifying properties of the object prototype these properties will be present on all objects publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution merge step up your open source security game with whitesource
| 0
|
328,704
| 28,131,326,049
|
IssuesEvent
|
2023-03-31 23:43:52
|
FTC7393/FtcRobotController
|
https://api.github.com/repos/FTC7393/FtcRobotController
|
closed
|
Resolve Lever Wait statements
|
Auto Needs testing implemented
|
- [x] Near the end of fetchAndScore, uncomment the Wait For Lever and make it use lever.isDone() and include giveUp timing logic (see the wait-with-timeout sketch after this record). _This will make sure the lever is at the STACK before grabbing the cone._
- [x] Near the beginning of fetchAndScore, right after the Wait For Fetcher, copy in the Wait For Lever from the previous step. _This will make sure the lever is INSIDE before attempting to transfer the cone._
|
1.0
|
Resolve Lever Wait statements - - [x] Near the end of fetchAndScore, uncomment the Wait For Lever and make it use lever.isDone() and include giveUp timing logic (see the wait-with-timeout sketch after this record). _This will make sure the lever is at the STACK before grabbing the cone._
- [x] Near the beginning of fetchAndScore, right after the Wait For Fetcher, copy in the Wait For Lever from the previous step. _This will make sure the lever is INSIDE before attempting to transfer the cone._
|
test
|
resolve lever wait statements near the end of fetchandscore uncomment the wait for lever and make it use lever isdone and include giveup timing logic this will make sure the lever is at the stack before grabbing the cone near the beginning of fetchandscore right after the wait for fetcher copy in the wait for lever from the previous step this will make sure the lever is inside before attempting to transfer the cone
| 1
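Editor's note: the checklist in the record above asks for a Wait For Lever step that polls lever.isDone() but gives up after a timeout. Below is a minimal sketch of such a step; the `Lever` interface, constructor, and `act()` shape are hypothetical stand-ins rather than team 7393's actual state-machine API, and only isDone() and the giveUp idea come from the issue.
```
public class WaitForLever {
    private final long giveUpMillis;
    private long startMillis = -1;

    public WaitForLever(long giveUpMillis) {
        this.giveUpMillis = giveUpMillis;
    }

    /** Returns true once the lever reports done or the giveUp window elapses. */
    public boolean act(Lever lever) {
        long now = System.currentTimeMillis();
        if (startMillis < 0) {
            startMillis = now; // first call: start the giveUp clock
        }
        return lever.isDone() || (now - startMillis) >= giveUpMillis;
    }

    /** Minimal stand-in for the real lever subsystem. */
    public interface Lever {
        boolean isDone();
    }
}
```
Returning true on either condition lets the autonomous routine proceed instead of stalling forever on a lever that never reports done.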
|
4,968
| 3,487,524,387
|
IssuesEvent
|
2016-01-02 00:33:15
|
MNoya/DotaCraft
|
https://api.github.com/repos/MNoya/DotaCraft
|
closed
|
Tinker gets stuck under the factory
|
abilities bug buildings
|
When you place the pocket factory too close to Tinker, he will get stuck there until he teleports out or until the factory despawns.
|
1.0
|
Tinker gets stuck under the factory - When you place the pocket factory too close to Tinker, he will get stuck there until he teleports out or until the factory despawns.
|
non_test
|
tinker gets stuck under the factory when you place the pocket factory too close to tinker he will get stuck there until he teleports out or until the factory despawns
| 0
|
227,669
| 18,091,500,885
|
IssuesEvent
|
2021-09-22 02:29:57
|
etcd-io/etcd
|
https://api.github.com/repos/etcd-io/etcd
|
closed
|
integration/clientv3/examples test flakes frequently
|
stale important Release-Backport/v3.5 area/testing/flake
|
ExampleCluster_memberAddAsLearner
I managed to repro this with:
```
for i in `seq 1 100`; do (cd tests && 'env' 'go' 'test' '-timeout=15m' '--race=false' '--cpu=4' './integration/clientv3/examples' --count=1 -v -run ExampleCluster_memberAddAsLearner| tee log.log); done
```
but it does not always flake.
Flake on Actions:
https://github.com/etcd-io/etcd/pull/12981/checks?check_run_id=2595306923
Uploaded log: [logs_1206.zip](https://github.com/etcd-io/etcd/files/6491927/logs_1206.zip)
```
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m2 started stream writer with remote peer {"member": "m2", "local-member-id": "685c5c71ce9a9328", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m2 started stream writer with remote peer {"member": "m2", "local-member-id": "685c5c71ce9a9328", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m2 started stream reader with remote peer {"member": "m2", "stream-reader-type": "stream MsgApp v2", "local-member-id": "685c5c71ce9a9328", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m0 started remote peer {"member": "m0", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m0 added remote peer {"member": "m0", "local-member-id": "c5c5a20ca4073d6a", "remote-peer-id": "3b13bbeaeef551eb", "remote-peer-urls": ["http://localhost:32381"]}
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m0 started stream writer with remote peer {"member": "m0", "local-member-id": "c5c5a20ca4073d6a", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m0 started stream writer with remote peer {"member": "m0", "local-member-id": "c5c5a20ca4073d6a", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m0 started stream reader with remote peer {"member": "m0", "stream-reader-type": "stream MsgApp v2", "local-member-id": "c5c5a20ca4073d6a", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m0 started stream reader with remote peer {"member": "m0", "stream-reader-type": "stream Message", "local-member-id": "c5c5a20ca4073d6a", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m2 started stream reader with remote peer {"member": "m2", "stream-reader-type": "stream Message", "local-member-id": "685c5c71ce9a9328", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.401Z INFO m0 applied a configuration change through raft {"member": "m0", "local-member-id": "c5c5a20ca4073d6a", "raft-conf-change": "ConfChangeAddLearnerNode", "raft-conf-change-node-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.401Z INFO m2.raft 685c5c71ce9a9328 ignoring conf change {ConfChangeRemoveNode 4256952690501046763 [] 4425483133757641240} at config voters=(7519987121669182248 13966091899056808041 14250974771059047786) learners=(4256952690501046763): possible unapplied conf change at index 25 (applied to 24) {"member": "m2"}
2021/05/16 18:49:34 2021-05-16T18:49:34.401Z INFO m1.raft c1d186962fbde069 switched to configuration voters=(7519987121669182248 13966091899056808041 14250974771059047786) learners=(4256952690501046763) {"member": "m1"}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z INFO m1 added member {"member": "m1", "cluster-id": "1cfa2497ce0563d4", "local-member-id": "c1d186962fbde069", "added-peer-id": "3b13bbeaeef551eb", "added-peer-peer-urls": ["http://localhost:32381"]}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z INFO m1 starting remote peer {"member": "m1", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z INFO m1 started HTTP pipelining with remote peer {"member": "m1", "local-member-id": "c1d186962fbde069", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z WARN m2 failed to reach the peer URL {"member": "m2", "address": "http://localhost:32381/version", "remote-member-id": "3b13bbeaeef551eb", "error": "Get \"http://localhost:32381/version\": dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z WARN m2 failed to get version {"member": "m2", "remote-member-id": "3b13bbeaeef551eb", "error": "Get \"http://localhost:32381/version\": dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z INFO m1 started stream writer with remote peer {"member": "m1", "local-member-id": "c1d186962fbde069", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z INFO m1 started stream writer with remote peer {"member": "m1", "local-member-id": "c1d186962fbde069", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z INFO m1 started remote peer {"member": "m1", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z INFO m1 added remote peer {"member": "m1", "local-member-id": "c1d186962fbde069", "remote-peer-id": "3b13bbeaeef551eb", "remote-peer-urls": ["http://localhost:32381"]}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z INFO m1 started stream reader with remote peer {"member": "m1", "stream-reader-type": "stream MsgApp v2", "local-member-id": "c1d186962fbde069", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.403Z INFO m1 started stream reader with remote peer {"member": "m1", "stream-reader-type": "stream Message", "local-member-id": "c1d186962fbde069", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:38 2021-05-16T18:49:38.404Z WARN m2 failed to reach the peer URL {"member": "m2", "address": "http://localhost:32381/version", "remote-member-id": "3b13bbeaeef551eb", "error": "Get \"http://localhost:32381/version\": dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 18:49:38 2021-05-16T18:49:38.404Z WARN m2 failed to get version {"member": "m2", "remote-member-id": "3b13bbeaeef551eb", "error": "Get \"http://localhost:32381/version\": dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 18:49:39 2021-05-16T18:49:39.400Z WARN m2 prober detected unhealthy status {"member": "m2", "round-tripper-name": "ROUND_TRIPPER_RAFT_MESSAGE", "remote-peer-id": "3b13bbeaeef551eb", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 18:49:39 2021-05-16T18:49:39.401Z WARN m0 prober detected unhealthy status {"member": "m0", "round-tripper-name": "ROUND_TRIPPER_RAFT_MESSAGE", "remote-peer-id": "3b13bbeaeef551eb", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 18:49:39 2021-05-16T18:49:39.401Z WARN m0 prober detected unhealthy status {"member": "m0", "round-tripper-name": "ROUND_TRIPPER_SNAPSHOT", "remote-peer-id": "3b13bbeaeef551eb", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 18:49:39 2021-05-16T18:49:39.401Z WARN m2 prober detected unhealthy status {"member": "m2", "round-tripper-name": "ROUND_TRIPPER_SNAPSHOT", "remote-peer-id": "3b13bbeaeef551eb", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
```
This line looks suspicious:
```
2021/05/16 18:49:34 2021-05-16T18:49:34.401Z INFO m2.raft 685c5c71ce9a9328 ignoring conf change {ConfChangeRemoveNode 4256952690501046763 [] 4425483133757641240} at config voters=(7519987121669182248 13966091899056808041 14250974771059047786) learners=(4256952690501046763): possible unapplied conf change at index 25 (applied to 24) {"member": "m2"}
```
```
goroutine 1 [select, 14 minutes]:
google.golang.org/grpc/internal/transport.(*Stream).waitOnHeader(0xc0003d70e0)
/home/runner/go/pkg/mod/google.golang.org/grpc@v1.37.0/internal/transport/transport.go:322 +0x99
google.golang.org/grpc/internal/transport.(*Stream).RecvCompress(...)
/home/runner/go/pkg/mod/google.golang.org/grpc@v1.37.0/internal/transport/transport.go:337
google.golang.org/grpc.(*csAttempt).recvMsg(0xc003a3ef00, 0x1025000, 0xc000284440, 0x0, 0x0, 0x0)
/home/runner/go/pkg/mod/google.golang.org/grpc@v1.37.0/stream.go:937 +0x731
google.golang.org/grpc.(*clientStream).RecvMsg.func1(0xc003a3ef00, 0xc0037fcbe0, 0xa)
/home/runner/go/pkg/mod/google.golang.org/grpc@v1.37.0/stream.go:802 +0x46
google.golang.org/grpc.(*clientStream).withRetry(0xc0003d6ea0, 0xc000379298, 0xc000379268, 0xc0037fcbea, 0x9c7a49)
/home/runner/go/pkg/mod/google.golang.org/grpc@v1.37.0/stream.go:660 +0x9f
google.golang.org/grpc.(*clientStream).RecvMsg(0xc0003d6ea0, 0x1025000, 0xc000284440, 0x0, 0x0)
/home/runner/go/pkg/mod/google.golang.org/grpc@v1.37.0/stream.go:801 +0x105
google.golang.org/grpc.invoke(0x11d7738, 0xc0038c1cb0, 0x109b415, 0x22, 0x101aea0, 0xc0038c1c50, 0x1025000, 0xc000284440, 0xc002d5e000, 0xc0002844c0, ...)
/home/runner/go/pkg/mod/google.golang.org/grpc@v1.37.0/call.go:73 +0x142
go.etcd.io/etcd/client/v3.(*Client).unaryClientInterceptor.func1(0x11d76c8, 0xc0038c1cb0, 0x109b415, 0x22, 0x101aea0, 0xc0038c1c50, 0x1025000, 0xc000284440, 0xc002d5e000, 0x10c6d98, ...)
/home/runner/work/etcd/etcd/client/v3/retry_interceptor.go:58 +0x46a
google.golang.org/grpc.(*ClientConn).Invoke(0xc002d5e000, 0x11d76c8, 0xc00011e010, 0x109b415, 0x22, 0x101aea0, 0xc0038c1c50, 0x1025000, 0xc000284440, 0x1823240, ...)
/home/runner/go/pkg/mod/google.golang.org/grpc@v1.37.0/call.go:35 +0x109
go.etcd.io/etcd/api/v3/etcdserverpb.(*clusterClient).MemberRemove(0xc00018e108, 0x11d76c8, 0xc00011e010, 0xc0038c1c50, 0x1823240, 0x3, 0x3, 0xfc4be0, 0x1, 0xc0038c1c50)
/home/runner/work/etcd/etcd/api/etcdserverpb/rpc.pb.go:7083 +0xcf
go.etcd.io/etcd/client/v3.(*retryClusterClient).MemberRemove(0xc0039fd310, 0x11d76c8, 0xc00011e010, 0xc0038c1c50, 0x1823240, 0x3, 0x3, 0xc0032fc4b0, 0x0, 0x0)
/home/runner/work/etcd/etcd/client/v3/retry.go:175 +0x7c
go.etcd.io/etcd/client/v3.(*cluster).MemberRemove(0xc003a42db0, 0x11d76c8, 0xc00011e010, 0x3b13bbeaeef551eb, 0x1, 0x1, 0xc0032fc4b0)
/home/runner/work/etcd/etcd/client/v3/cluster.go:103 +0x88
go.etcd.io/etcd/tests/v3/integration/clientv3/examples_test.ExampleCluster_memberAddAsLearner.func1()
/home/runner/work/etcd/etcd/tests/integration/clientv3/examples/example_cluster_test.go:106 +0x247
go.etcd.io/etcd/tests/v3/integration/clientv3/examples_test.forUnitTestsRunInMockedContext(...)
/home/runner/work/etcd/etcd/tests/integration/clientv3/examples/main_test.go:40
go.etcd.io/etcd/tests/v3/integration/clientv3/examples_test.ExampleCluster_memberAddAsLearner()
/home/runner/work/etcd/etcd/tests/integration/clientv3/examples/example_cluster_test.go:90 +0x2b
testing.runExample(0x1099ea7, 0x21, 0x10c6868, 0x10a96f3, 0x2e, 0x0, 0x0)
/opt/hostedtoolcache/go/1.16.4/x64/src/testing/run_example.go:63 +0x222
testing.runExamples(0xc000379e58, 0x1816a80, 0x18, 0x18, 0xc02079108fcf076d)
/opt/hostedtoolcache/go/1.16.4/x64/src/testing/example.go:44 +0x17a
testing.(*M).Run(0xc0001f0380, 0x0)
/opt/hostedtoolcache/go/1.16.4/x64/src/testing/testing.go:1418 +0x273
go.etcd.io/etcd/tests/v3/integration/clientv3/examples_test.TestMain(0xc0001f0380)
/home/runner/work/etcd/etcd/tests/integration/clientv3/examples/main_test.go:46 +0x48
main.main()
_testmain.go:91 +0x165
```
----------------------------------
```
ptab@ptab ~/corp/etcd% (cd tests && 'env' 'go' 'test' '-timeout=15m' '--race=false' '--cpu=4' './integration/clientv3/examples' --count=1 -v -run ExampleCluster_memberAddAsLearner| tee log.log)
=== RUN ExampleCluster_memberAddAsLearner
2021/05/16 22:58:56 Working directory '/home/ptab/corp/etcd/tests/integration/clientv3/examples' expected to be in temp-dir ('/tmp').Have you executed integration.BeforeTest(t) ?
2021/05/16 22:58:56 2021-05-16T22:58:56.587+0200 INFO m0 LISTEN GRPC {"member": "m0", "m.grpcAddr": "localhost:m0", "m.Name": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.587+0200 INFO m1 LISTEN GRPC {"member": "m1", "m.grpcAddr": "localhost:m1", "m.Name": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.587+0200 INFO m2 LISTEN GRPC {"member": "m2", "m.grpcAddr": "localhost:m2", "m.Name": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.587+0200 INFO m2 launching a member {"member": "m2", "name": "m2", "advertise-peer-urls": ["unix://127.0.0.1:210053101996"], "listen-client-urls": ["unix://127.0.0.1:210063101996"], "grpc-address": "unix://localhost:m20"}
2021/05/16 22:58:56 2021-05-16T22:58:56.587+0200 INFO m1 launching a member {"member": "m1", "name": "m1", "advertise-peer-urls": ["unix://127.0.0.1:210033101996"], "listen-client-urls": ["unix://127.0.0.1:210043101996"], "grpc-address": "unix://localhost:m10"}
2021/05/16 22:58:56 2021-05-16T22:58:56.587+0200 INFO m0 launching a member {"member": "m0", "name": "m0", "advertise-peer-urls": ["unix://127.0.0.1:210013101996"], "listen-client-urls": ["unix://127.0.0.1:210023101996"], "grpc-address": "unix://localhost:m00"}
2021/05/16 22:58:56 2021-05-16T22:58:56.605+0200 INFO m1 opened backend db {"member": "m1", "path": "/tmp/lazy_cluster813018228/etcd819471171/member/snap/db", "took": "17.295908ms"}
2021/05/16 22:58:56 2021-05-16T22:58:56.605+0200 INFO m0 opened backend db {"member": "m0", "path": "/tmp/lazy_cluster668773490/etcd777733929/member/snap/db", "took": "17.274012ms"}
2021/05/16 22:58:56 2021-05-16T22:58:56.605+0200 INFO m2 opened backend db {"member": "m2", "path": "/tmp/lazy_cluster025711558/etcd404414061/member/snap/db", "took": "17.357534ms"}
2021/05/16 22:58:56 2021-05-16T22:58:56.637+0200 INFO m2 starting local member {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "cluster-id": "3c5b384f03b0f420"}
2021/05/16 22:58:56 2021-05-16T22:58:56.637+0200 INFO m2.raft f22d0a0f9f1b431b switched to configuration voters=() {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.637+0200 INFO m2.raft f22d0a0f9f1b431b became follower at term 0 {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.637+0200 INFO m2.raft newRaft f22d0a0f9f1b431b [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.637+0200 INFO m2.raft f22d0a0f9f1b431b became follower at term 1 {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.637+0200 INFO m2.raft f22d0a0f9f1b431b switched to configuration voters=(11609101907503085289) {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.637+0200 INFO m2.raft f22d0a0f9f1b431b switched to configuration voters=(11609101907503085289 16575215055615236278) {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.637+0200 INFO m2.raft f22d0a0f9f1b431b switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m1 starting local member {"member": "m1", "local-member-id": "e606fe4619a718b6", "cluster-id": "3c5b384f03b0f420"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m1.raft e606fe4619a718b6 switched to configuration voters=() {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m1.raft e606fe4619a718b6 became follower at term 0 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m1.raft newRaft e606fe4619a718b6 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m1.raft e606fe4619a718b6 became follower at term 1 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m1.raft e606fe4619a718b6 switched to configuration voters=(11609101907503085289) {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m1.raft e606fe4619a718b6 switched to configuration voters=(11609101907503085289 16575215055615236278) {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m1.raft e606fe4619a718b6 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m0 starting local member {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "cluster-id": "3c5b384f03b0f420"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m0.raft a11bd0ad5e07f6e9 switched to configuration voters=() {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m0.raft a11bd0ad5e07f6e9 became follower at term 0 {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.641+0200 INFO m0.raft newRaft a11bd0ad5e07f6e9 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.641+0200 INFO m0.raft a11bd0ad5e07f6e9 became follower at term 1 {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.641+0200 INFO m0.raft a11bd0ad5e07f6e9 switched to configuration voters=(11609101907503085289) {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.641+0200 INFO m0.raft a11bd0ad5e07f6e9 switched to configuration voters=(11609101907503085289 16575215055615236278) {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.641+0200 INFO m0.raft a11bd0ad5e07f6e9 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.648+0200 WARN m2 simple token is not cryptographically signed {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.651+0200 WARN m1 simple token is not cryptographically signed {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.651+0200 WARN m0 simple token is not cryptographically signed {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.652+0200 INFO m2 kvstore restored {"member": "m2", "current-rev": 1}
2021/05/16 22:58:56 2021-05-16T22:58:56.655+0200 INFO m1 kvstore restored {"member": "m1", "current-rev": 1}
2021/05/16 22:58:56 2021-05-16T22:58:56.658+0200 INFO m0 kvstore restored {"member": "m0", "current-rev": 1}
2021/05/16 22:58:56 2021-05-16T22:58:56.658+0200 INFO m2 enabled backend quota with default value {"member": "m2", "quota-name": "v3-applier", "quota-size-bytes": 2147483648, "quota-size": "2.1 GB"}
2021/05/16 22:58:56 2021-05-16T22:58:56.661+0200 INFO m2 starting remote peer {"member": "m2", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.661+0200 INFO m2 started HTTP pipelining with remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.662+0200 INFO m2 started stream writer with remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.663+0200 INFO m2 started stream writer with remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.663+0200 INFO m1 starting remote peer {"member": "m1", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.663+0200 INFO m1 started HTTP pipelining with remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.664+0200 INFO m1 started stream writer with remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.665+0200 INFO m2 started remote peer {"member": "m2", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.665+0200 INFO m2 added remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9", "remote-peer-urls": ["unix://127.0.0.1:210013101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.665+0200 INFO m2 starting remote peer {"member": "m2", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.665+0200 INFO m2 started HTTP pipelining with remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.665+0200 INFO m2 started stream reader with remote peer {"member": "m2", "stream-reader-type": "stream Message", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.666+0200 INFO m2 started stream reader with remote peer {"member": "m2", "stream-reader-type": "stream MsgApp v2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.666+0200 INFO m0 starting remote peer {"member": "m0", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.666+0200 INFO m0 started HTTP pipelining with remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.669+0200 INFO m2 started stream writer with remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.670+0200 INFO m1 started remote peer {"member": "m1", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.670+0200 INFO m1 added remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9", "remote-peer-urls": ["unix://127.0.0.1:210013101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.670+0200 INFO m1 starting remote peer {"member": "m1", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.670+0200 INFO m1 started HTTP pipelining with remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.671+0200 INFO m0 started remote peer {"member": "m0", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.672+0200 INFO m0 added remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6", "remote-peer-urls": ["unix://127.0.0.1:210033101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.672+0200 INFO m0 starting remote peer {"member": "m0", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.672+0200 INFO m0 started HTTP pipelining with remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.673+0200 INFO m1 started stream writer with remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.673+0200 INFO m1 started stream writer with remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.673+0200 INFO m0 started stream reader with remote peer {"member": "m0", "stream-reader-type": "stream MsgApp v2", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.673+0200 INFO m1 started stream reader with remote peer {"member": "m1", "stream-reader-type": "stream Message", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.673+0200 INFO m0 started stream reader with remote peer {"member": "m0", "stream-reader-type": "stream Message", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.672+0200 INFO m1 started stream reader with remote peer {"member": "m1", "stream-reader-type": "stream MsgApp v2", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.673+0200 INFO m2 started stream writer with remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.673+0200 INFO m1 started stream writer with remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.673+0200 INFO m0 started stream writer with remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m0 started stream writer with remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m2 started remote peer {"member": "m2", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m2 added remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6", "remote-peer-urls": ["unix://127.0.0.1:210033101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m2 starting etcd server {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "local-server-version": "3.5.0-alpha.0", "cluster-version": "to_be_decided"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m1 started remote peer {"member": "m1", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m1 added remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b", "remote-peer-urls": ["unix://127.0.0.1:210053101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m1 starting etcd server {"member": "m1", "local-member-id": "e606fe4619a718b6", "local-server-version": "3.5.0-alpha.0", "cluster-version": "to_be_decided"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m1 started stream reader with remote peer {"member": "m1", "stream-reader-type": "stream MsgApp v2", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m1 started stream reader with remote peer {"member": "m1", "stream-reader-type": "stream Message", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m0 started remote peer {"member": "m0", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m0 added remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b", "remote-peer-urls": ["unix://127.0.0.1:210053101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m0 starting etcd server {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "local-server-version": "3.5.0-alpha.0", "cluster-version": "to_be_decided"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m0 started stream writer with remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.675+0200 INFO m0 started stream writer with remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.675+0200 INFO m0 started stream reader with remote peer {"member": "m0", "stream-reader-type": "stream MsgApp v2", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.675+0200 INFO m0 started stream reader with remote peer {"member": "m0", "stream-reader-type": "stream Message", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.675+0200 INFO m1 starting initial election tick advance {"member": "m1", "election-ticks": 10}
2021/05/16 22:58:56 2021-05-16T22:58:56.676+0200 INFO m1.raft e606fe4619a718b6 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.676+0200 INFO m1 added member {"member": "m1", "cluster-id": "3c5b384f03b0f420", "local-member-id": "e606fe4619a718b6", "added-peer-id": "a11bd0ad5e07f6e9", "added-peer-peer-urls": ["unix://127.0.0.1:210013101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.676+0200 INFO m0 starting initial election tick advance {"member": "m0", "election-ticks": 10}
2021/05/16 22:58:56 2021-05-16T22:58:56.676+0200 INFO m1.raft e606fe4619a718b6 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.677+0200 INFO m1 added member {"member": "m1", "cluster-id": "3c5b384f03b0f420", "local-member-id": "e606fe4619a718b6", "added-peer-id": "e606fe4619a718b6", "added-peer-peer-urls": ["unix://127.0.0.1:210033101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.677+0200 INFO m1.raft e606fe4619a718b6 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.677+0200 INFO m1 added member {"member": "m1", "cluster-id": "3c5b384f03b0f420", "local-member-id": "e606fe4619a718b6", "added-peer-id": "f22d0a0f9f1b431b", "added-peer-peer-urls": ["unix://127.0.0.1:210053101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.677+0200 INFO m0.raft a11bd0ad5e07f6e9 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.677+0200 INFO m2 starting initial election tick advance {"member": "m2", "election-ticks": 10}
2021/05/16 22:58:56 2021-05-16T22:58:56.677+0200 INFO m2 started stream reader with remote peer {"member": "m2", "stream-reader-type": "stream Message", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.677+0200 INFO m2.raft f22d0a0f9f1b431b switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.677+0200 INFO m0 added member {"member": "m0", "cluster-id": "3c5b384f03b0f420", "local-member-id": "a11bd0ad5e07f6e9", "added-peer-id": "a11bd0ad5e07f6e9", "added-peer-peer-urls": ["unix://127.0.0.1:210013101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.678+0200 INFO m2 added member {"member": "m2", "cluster-id": "3c5b384f03b0f420", "local-member-id": "f22d0a0f9f1b431b", "added-peer-id": "a11bd0ad5e07f6e9", "added-peer-peer-urls": ["unix://127.0.0.1:210013101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.678+0200 INFO m0.raft a11bd0ad5e07f6e9 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.678+0200 INFO m2.raft f22d0a0f9f1b431b switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.678+0200 INFO m0 added member {"member": "m0", "cluster-id": "3c5b384f03b0f420", "local-member-id": "a11bd0ad5e07f6e9", "added-peer-id": "e606fe4619a718b6", "added-peer-peer-urls": ["unix://127.0.0.1:210033101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.678+0200 INFO m2 added member {"member": "m2", "cluster-id": "3c5b384f03b0f420", "local-member-id": "f22d0a0f9f1b431b", "added-peer-id": "e606fe4619a718b6", "added-peer-peer-urls": ["unix://127.0.0.1:210033101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.678+0200 INFO m0.raft a11bd0ad5e07f6e9 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.678+0200 INFO m2.raft f22d0a0f9f1b431b switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.678+0200 INFO m0 added member {"member": "m0", "cluster-id": "3c5b384f03b0f420", "local-member-id": "a11bd0ad5e07f6e9", "added-peer-id": "f22d0a0f9f1b431b", "added-peer-peer-urls": ["unix://127.0.0.1:210053101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.678+0200 INFO m2 added member {"member": "m2", "cluster-id": "3c5b384f03b0f420", "local-member-id": "f22d0a0f9f1b431b", "added-peer-id": "f22d0a0f9f1b431b", "added-peer-peer-urls": ["unix://127.0.0.1:210053101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.677+0200 INFO m2 started stream reader with remote peer {"member": "m2", "stream-reader-type": "stream MsgApp v2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.681+0200 INFO m2 set message encoder {"member": "m2", "from": "f22d0a0f9f1b431b", "to": "a11bd0ad5e07f6e9", "stream-type": "stream Message"}
2021/05/16 22:58:56 2021-05-16T22:58:56.681+0200 INFO m0 set message encoder {"member": "m0", "from": "a11bd0ad5e07f6e9", "to": "f22d0a0f9f1b431b", "stream-type": "stream Message"}
2021/05/16 22:58:56 2021-05-16T22:58:56.681+0200 INFO m0 peer became active {"member": "m0", "peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.681+0200 INFO m0 established TCP streaming connection with remote peer {"member": "m0", "stream-writer-type": "stream Message", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.681+0200 INFO m2 peer became active {"member": "m2", "peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.681+0200 INFO m2 established TCP streaming connection with remote peer {"member": "m2", "stream-writer-type": "stream Message", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.682+0200 INFO m2 launched a member {"member": "m2", "name": "m2", "advertise-peer-urls": ["unix://127.0.0.1:210053101996"], "listen-client-urls": ["unix://127.0.0.1:210063101996"], "grpc-address": "unix://localhost:m20"}
2021/05/16 22:58:56 2021-05-16T22:58:56.682+0200 INFO m1 launched a member {"member": "m1", "name": "m1", "advertise-peer-urls": ["unix://127.0.0.1:210033101996"], "listen-client-urls": ["unix://127.0.0.1:210043101996"], "grpc-address": "unix://localhost:m10"}
2021/05/16 22:58:56 2021-05-16T22:58:56.682+0200 INFO m0 established TCP streaming connection with remote peer {"member": "m0", "stream-reader-type": "stream Message", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.682+0200 INFO m2 established TCP streaming connection with remote peer {"member": "m2", "stream-reader-type": "stream Message", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.682+0200 INFO m2 set message encoder {"member": "m2", "from": "f22d0a0f9f1b431b", "to": "e606fe4619a718b6", "stream-type": "stream MsgApp v2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.683+0200 INFO m2 peer became active {"member": "m2", "peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.683+0200 INFO m2 established TCP streaming connection with remote peer {"member": "m2", "stream-writer-type": "stream MsgApp v2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.683+0200 INFO m2 set message encoder {"member": "m2", "from": "f22d0a0f9f1b431b", "to": "e606fe4619a718b6", "stream-type": "stream Message"}
2021/05/16 22:58:56 2021-05-16T22:58:56.683+0200 INFO m2 established TCP streaming connection with remote peer {"member": "m2", "stream-writer-type": "stream Message", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.685+0200 INFO m2 set message encoder {"member": "m2", "from": "f22d0a0f9f1b431b", "to": "a11bd0ad5e07f6e9", "stream-type": "stream MsgApp v2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.685+0200 INFO m2 established TCP streaming connection with remote peer {"member": "m2", "stream-writer-type": "stream MsgApp v2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.685+0200 INFO m0 launched a member {"member": "m0", "name": "m0", "advertise-peer-urls": ["unix://127.0.0.1:210013101996"], "listen-client-urls": ["unix://127.0.0.1:210023101996"], "grpc-address": "unix://localhost:m00"}
2021/05/16 22:58:56 2021-05-16T22:58:56.686+0200 INFO m2 established TCP streaming connection with remote peer {"member": "m2", "stream-reader-type": "stream MsgApp v2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.686+0200 INFO m1 peer became active {"member": "m1", "peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.686+0200 INFO m1 established TCP streaming connection with remote peer {"member": "m1", "stream-reader-type": "stream MsgApp v2", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.686+0200 INFO m2 established TCP streaming connection with remote peer {"member": "m2", "stream-reader-type": "stream MsgApp v2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 set message encoder {"member": "m1", "from": "e606fe4619a718b6", "to": "a11bd0ad5e07f6e9", "stream-type": "stream MsgApp v2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 peer became active {"member": "m1", "peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 established TCP streaming connection with remote peer {"member": "m1", "stream-writer-type": "stream MsgApp v2", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m0 set message encoder {"member": "m0", "from": "a11bd0ad5e07f6e9", "to": "f22d0a0f9f1b431b", "stream-type": "stream MsgApp v2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m0 established TCP streaming connection with remote peer {"member": "m0", "stream-writer-type": "stream MsgApp v2", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 set message encoder {"member": "m1", "from": "e606fe4619a718b6", "to": "f22d0a0f9f1b431b", "stream-type": "stream MsgApp v2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 established TCP streaming connection with remote peer {"member": "m1", "stream-writer-type": "stream MsgApp v2", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 set message encoder {"member": "m1", "from": "e606fe4619a718b6", "to": "a11bd0ad5e07f6e9", "stream-type": "stream Message"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 established TCP streaming connection with remote peer {"member": "m1", "stream-writer-type": "stream Message", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 set message encoder {"member": "m1", "from": "e606fe4619a718b6", "to": "f22d0a0f9f1b431b", "stream-type": "stream Message"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 established TCP streaming connection with remote peer {"member": "m1", "stream-writer-type": "stream Message", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m2 established TCP streaming connection with remote peer {"member": "m2", "stream-reader-type": "stream Message", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 established TCP streaming connection with remote peer {"member": "m1", "stream-reader-type": "stream MsgApp v2", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.688+0200 INFO m0 set message encoder {"member": "m0", "from": "a11bd0ad5e07f6e9", "to": "e606fe4619a718b6", "stream-type": "stream MsgApp v2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.688+0200 INFO m0 peer became active {"member": "m0", "peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.688+0200 INFO m0 established TCP streaming connection with remote peer {"member": "m0", "stream-writer-type": "stream MsgApp v2", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.688+0200 INFO m0 established TCP streaming connection with remote peer {"member": "m0", "stream-reader-type": "stream MsgApp v2", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.688+0200 INFO m0 established TCP streaming connection with remote peer {"member": "m0", "stream-reader-type": "stream Message", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.689+0200 INFO m0 established TCP streaming connection with remote peer {"member": "m0", "stream-reader-type": "stream MsgApp v2", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.690+0200 INFO m0 set message encoder {"member": "m0", "from": "a11bd0ad5e07f6e9", "to": "e606fe4619a718b6", "stream-type": "stream Message"}
2021/05/16 22:58:56 2021-05-16T22:58:56.690+0200 INFO m0 established TCP streaming connection with remote peer {"member": "m0", "stream-writer-type": "stream Message", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.690+0200 INFO m1 established TCP streaming connection with remote peer {"member": "m1", "stream-reader-type": "stream Message", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.690+0200 INFO m1 established TCP streaming connection with remote peer {"member": "m1", "stream-reader-type": "stream Message", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.726+0200 INFO m1 initialized peer connections; fast-forwarding election ticks {"member": "m1", "local-member-id": "e606fe4619a718b6", "forward-ticks": 8, "forward-duration": "80ms", "election-ticks": 10, "election-timeout": "100ms", "active-remote-members": 2}
2021/05/16 22:58:56 2021-05-16T22:58:56.726+0200 INFO m1.raft e606fe4619a718b6 is starting a new election at term 1 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.726+0200 INFO m1.raft e606fe4619a718b6 became candidate at term 2 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.726+0200 INFO m1.raft e606fe4619a718b6 received MsgVoteResp from e606fe4619a718b6 at term 2 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.726+0200 INFO m1.raft e606fe4619a718b6 [logterm: 1, index: 3] sent MsgVote request to a11bd0ad5e07f6e9 at term 2 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.726+0200 INFO m1.raft e606fe4619a718b6 [logterm: 1, index: 3] sent MsgVote request to f22d0a0f9f1b431b at term 2 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.726+0200 INFO m0 initialized peer connections; fast-forwarding election ticks {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "forward-ticks": 8, "forward-duration": "80ms", "election-ticks": 10, "election-timeout": "100ms", "active-remote-members": 2}
2021/05/16 22:58:56 2021-05-16T22:58:56.728+0200 INFO m2 initialized peer connections; fast-forwarding election ticks {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "forward-ticks": 8, "forward-duration": "80ms", "election-ticks": 10, "election-timeout": "100ms", "active-remote-members": 2}
2021/05/16 22:58:56 2021-05-16T22:58:56.728+0200 INFO m2.raft f22d0a0f9f1b431b [term: 1] received a MsgVote message with higher term from e606fe4619a718b6 [term: 2] {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.728+0200 INFO m2.raft f22d0a0f9f1b431b became follower at term 2 {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.728+0200 INFO m2.raft f22d0a0f9f1b431b [logterm: 1, index: 3, vote: 0] cast MsgVote for e606fe4619a718b6 [logterm: 1, index: 3] at term 2 {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.728+0200 INFO m0.raft a11bd0ad5e07f6e9 [term: 1] received a MsgVote message with higher term from e606fe4619a718b6 [term: 2] {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.728+0200 INFO m0.raft a11bd0ad5e07f6e9 became follower at term 2 {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.728+0200 INFO m0.raft a11bd0ad5e07f6e9 [logterm: 1, index: 3, vote: 0] cast MsgVote for e606fe4619a718b6 [logterm: 1, index: 3] at term 2 {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.730+0200 INFO m1.raft e606fe4619a718b6 received MsgVoteResp from a11bd0ad5e07f6e9 at term 2 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.730+0200 INFO m1.raft e606fe4619a718b6 has received 2 MsgVoteResp votes and 0 vote rejections {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.730+0200 INFO m1.raft e606fe4619a718b6 became leader at term 2 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.730+0200 INFO m1.raft raft.node: e606fe4619a718b6 elected leader e606fe4619a718b6 at term 2 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.730+0200 INFO m2.raft raft.node: f22d0a0f9f1b431b elected leader e606fe4619a718b6 at term 2 {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.730+0200 INFO m0.raft raft.node: a11bd0ad5e07f6e9 elected leader e606fe4619a718b6 at term 2 {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.735+0200 INFO m1 setting up initial cluster version {"member": "m1", "cluster-version": "3.5"}
2021/05/16 22:58:56 2021-05-16T22:58:56.738+0200 INFO m1 set initial cluster version {"member": "m1", "cluster-id": "3c5b384f03b0f420", "local-member-id": "e606fe4619a718b6", "cluster-version": "3.5"}
2021/05/16 22:58:56 2021-05-16T22:58:56.739+0200 INFO m1 enabled capabilities for version {"member": "m1", "cluster-version": "3.5"}
2021/05/16 22:58:56 2021-05-16T22:58:56.739+0200 INFO m0 set initial cluster version {"member": "m0", "cluster-id": "3c5b384f03b0f420", "local-member-id": "a11bd0ad5e07f6e9", "cluster-version": "3.5"}
2021/05/16 22:58:56 2021-05-16T22:58:56.739+0200 INFO m2 set initial cluster version {"member": "m2", "cluster-id": "3c5b384f03b0f420", "local-member-id": "f22d0a0f9f1b431b", "cluster-version": "3.5"}
2021/05/16 22:58:56 2021-05-16T22:58:56.739+0200 INFO m0 published local member to cluster through raft {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "local-member-attributes": "{Name:m0 ClientURLs:[unix://127.0.0.1:210023101996]}", "request-path": "/0/members/a11bd0ad5e07f6e9/attributes", "cluster-id": "3c5b384f03b0f420", "publish-timeout": "5.2s"}
2021/05/16 22:58:56 2021-05-16T22:58:56.739+0200 INFO m1 published local member to cluster through raft {"member": "m1", "local-member-id": "e606fe4619a718b6", "local-member-attributes": "{Name:m1 ClientURLs:[unix://127.0.0.1:210043101996]}", "request-path": "/0/members/e606fe4619a718b6/attributes", "cluster-id": "3c5b384f03b0f420", "publish-timeout": "5.2s"}
2021/05/16 22:58:56 2021-05-16T22:58:56.739+0200 INFO m2 published local member to cluster through raft {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "local-member-attributes": "{Name:m2 ClientURLs:[unix://127.0.0.1:210063101996]}", "request-path": "/0/members/f22d0a0f9f1b431b/attributes", "cluster-id": "3c5b384f03b0f420", "publish-timeout": "5.2s"}
2021/05/16 22:58:56 - m0 -> a11bd0ad5e07f6e9 (unix://localhost:m00)
2021/05/16 22:58:56 - m1 -> e606fe4619a718b6 (unix://localhost:m10)
2021/05/16 22:58:56 - m2 -> f22d0a0f9f1b431b (unix://localhost:m20)
2021/05/16 22:58:56 2021-05-16T22:58:56.782+0200 INFO m1.raft e606fe4619a718b6 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) learners=(9777699696455160727) {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.782+0200 INFO m2.raft f22d0a0f9f1b431b switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) learners=(9777699696455160727) {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.782+0200 INFO m1 added member {"member": "m1", "cluster-id": "3c5b384f03b0f420", "local-member-id": "e606fe4619a718b6", "added-peer-id": "87b15e11c7a79397", "added-peer-peer-urls": ["http://localhost:32381"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.782+0200 INFO m1 starting remote peer {"member": "m1", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.782+0200 INFO m1 started HTTP pipelining with remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.782+0200 INFO m2 added member {"member": "m2", "cluster-id": "3c5b384f03b0f420", "local-member-id": "f22d0a0f9f1b431b", "added-peer-id": "87b15e11c7a79397", "added-peer-peer-urls": ["http://localhost:32381"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.782+0200 INFO m2 starting remote peer {"member": "m2", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.782+0200 INFO m2 started HTTP pipelining with remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.783+0200 INFO m1 started remote peer {"member": "m1", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.783+0200 INFO m1 added remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "87b15e11c7a79397", "remote-peer-urls": ["http://localhost:32381"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.783+0200 INFO m0.raft a11bd0ad5e07f6e9 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) learners=(9777699696455160727) {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.784+0200 INFO m1 started stream writer with remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.784+0200 INFO m1 started stream writer with remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.784+0200 INFO m1 started stream reader with remote peer {"member": "m1", "stream-reader-type": "stream MsgApp v2", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.785+0200 INFO m0 added member {"member": "m0", "cluster-id": "3c5b384f03b0f420", "local-member-id": "a11bd0ad5e07f6e9", "added-peer-id": "87b15e11c7a79397", "added-peer-peer-urls": ["http://localhost:32381"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.785+0200 INFO m0 starting remote peer {"member": "m0", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.786+0200 INFO m0 started HTTP pipelining with remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.786+0200 INFO m2 started remote peer {"member": "m2", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.786+0200 INFO m2 added remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "87b15e11c7a79397", "remote-peer-urls": ["http://localhost:32381"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.786+0200 INFO m2 started stream writer with remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.787+0200 INFO m2 started stream writer with remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.787+0200 INFO m1 started stream reader with remote peer {"member": "m1", "stream-reader-type": "stream Message", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.787+0200 INFO m0 started remote peer {"member": "m0", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.787+0200 INFO m2 started stream reader with remote peer {"member": "m2", "stream-reader-type": "stream MsgApp v2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.787+0200 INFO m2 started stream reader with remote peer {"member": "m2", "stream-reader-type": "stream Message", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.788+0200 INFO m0 added remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "87b15e11c7a79397", "remote-peer-urls": ["http://localhost:32381"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.788+0200 INFO m0 started stream writer with remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.789+0200 INFO m0 started stream writer with remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.789+0200 INFO m0 started stream reader with remote peer {"member": "m0", "stream-reader-type": "stream MsgApp v2", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.789+0200 INFO m0 started stream reader with remote peer {"member": "m0", "stream-reader-type": "stream Message", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.789+0200 INFO m0 applied a configuration change through raft {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "raft-conf-change": "ConfChangeAddLearnerNode", "raft-conf-change-node-id": "87b15e11c7a79397"}
!!! 2021/05/16 22:58:56 2021-05-16T22:58:56.790+0200 INFO m1.raft e606fe4619a718b6 ignoring conf change {ConfChangeRemoveNode 9777699696455160727 [] 17791885354803724549} at config voters=(11609101907503085289 16575215055615236278 17450615193340691227) learners=(9777699696455160727): possible unapplied conf change at index 9 (applied to 8) {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.794+0200 WARN m1 failed to reach the peer URL {"member": "m1", "address": "http://localhost:32381/version", "remote-member-id": "87b15e11c7a79397", "error": "Get \"http://localhost:32381/version\": dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 22:58:56 2021-05-16T22:58:56.794+0200 WARN m1 failed to get version {"member": "m1", "remote-member-id": "87b15e11c7a79397", "error": "Get \"http://localhost:32381/version\": dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 22:59:00 2021-05-16T22:59:00.797+0200 WARN m1 failed to reach the peer URL {"member": "m1", "address": "http://localhost:32381/version", "remote-member-id": "87b15e11c7a79397", "error": "Get \"http://localhost:32381/version\": dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 22:59:00 2021-05-16T22:59:00.798+0200 WARN m1 failed to get version {"member": "m1", "remote-member-id": "87b15e11c7a79397", "error": "Get \"http://localhost:32381/version\": dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 22:59:01 2021-05-16T22:59:01.784+0200 WARN m1 prober detected unhealthy status {"member": "m1", "round-tripper-name": "ROUND_TRIPPER_RAFT_MESSAGE", "remote-peer-id": "87b15e11c7a79397", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 22:59:01 2021-05-16T22:59:01.787+0200 WARN m1 prober detected unhealthy status {"member": "m1", "round-tripper-name": "ROUND_TRIPPER_SNAPSHOT", "remote-peer-id": "87b15e11c7a79397", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 22:59:01 2021-05-16T22:59:01.788+0200 WARN m2 prober detected unhealthy status {"member": "m2", "round-tripper-name": "ROUND_TRIPPER_SNAPSHOT", "remote-peer-id": "87b15e11c7a79397", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 22:59:01 2021-05-16T22:59:01.788+0200 WARN m2 prober detected unhealthy status {"member": "m2", "round-tripper-name": "ROUND_TRIPPER_RAFT_MESSAGE", "remote-peer-id": "87b15e11c7a79397", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 22:59:01 2021-05-16T22:59:01.789+0200 WARN m0 prober detected unhealthy status {"member": "m0", "round-tripper-name": "ROUND_TRIPPER_SNAPSHOT", "remote-peer-id": "87b15e11c7a79397", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 22:59:01 2021-05-16T22:59:01.790+0200 WARN m0 prober detected unhealthy status {"member": "m0", "round-tripper-name": "ROUND_TRIPPER_RAFT_MESSAGE", "remote-peer-id": "87b15e11c7a79397", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
```
integration/clientv3/examples test flakes frequently - ExampleCluster_memberAddAsLearner
I managed to repro this with:
```
for i in `seq 1 100`; do (cd tests && 'env' 'go' 'test' '-timeout=15m' '--race=false' '--cpu=4' './integration/clientv3/examples' --count=1 -v -run ExampleCluster_memberAddAsLearner| tee log.log); done
```
but it does not always flake.
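(A tighter variant — my suggestion, not something from the original runs: `go test --count=100` reruns the test inside a single process instead of the outer shell loop, which can change the timing enough to make the race easier or harder to hit.)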
Flake on Actions:
https://github.com/etcd-io/etcd/pull/12981/checks?check_run_id=2595306923
Uploaded log: [logs_1206.zip](https://github.com/etcd-io/etcd/files/6491927/logs_1206.zip)
```
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m2 started stream writer with remote peer {"member": "m2", "local-member-id": "685c5c71ce9a9328", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m2 started stream writer with remote peer {"member": "m2", "local-member-id": "685c5c71ce9a9328", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m2 started stream reader with remote peer {"member": "m2", "stream-reader-type": "stream MsgApp v2", "local-member-id": "685c5c71ce9a9328", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m0 started remote peer {"member": "m0", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m0 added remote peer {"member": "m0", "local-member-id": "c5c5a20ca4073d6a", "remote-peer-id": "3b13bbeaeef551eb", "remote-peer-urls": ["http://localhost:32381"]}
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m0 started stream writer with remote peer {"member": "m0", "local-member-id": "c5c5a20ca4073d6a", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m0 started stream writer with remote peer {"member": "m0", "local-member-id": "c5c5a20ca4073d6a", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m0 started stream reader with remote peer {"member": "m0", "stream-reader-type": "stream MsgApp v2", "local-member-id": "c5c5a20ca4073d6a", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m0 started stream reader with remote peer {"member": "m0", "stream-reader-type": "stream Message", "local-member-id": "c5c5a20ca4073d6a", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.400Z INFO m2 started stream reader with remote peer {"member": "m2", "stream-reader-type": "stream Message", "local-member-id": "685c5c71ce9a9328", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.401Z INFO m0 applied a configuration change through raft {"member": "m0", "local-member-id": "c5c5a20ca4073d6a", "raft-conf-change": "ConfChangeAddLearnerNode", "raft-conf-change-node-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.401Z INFO m2.raft 685c5c71ce9a9328 ignoring conf change {ConfChangeRemoveNode 4256952690501046763 [] 4425483133757641240} at config voters=(7519987121669182248 13966091899056808041 14250974771059047786) learners=(4256952690501046763): possible unapplied conf change at index 25 (applied to 24) {"member": "m2"}
2021/05/16 18:49:34 2021-05-16T18:49:34.401Z INFO m1.raft c1d186962fbde069 switched to configuration voters=(7519987121669182248 13966091899056808041 14250974771059047786) learners=(4256952690501046763) {"member": "m1"}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z INFO m1 added member {"member": "m1", "cluster-id": "1cfa2497ce0563d4", "local-member-id": "c1d186962fbde069", "added-peer-id": "3b13bbeaeef551eb", "added-peer-peer-urls": ["http://localhost:32381"]}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z INFO m1 starting remote peer {"member": "m1", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z INFO m1 started HTTP pipelining with remote peer {"member": "m1", "local-member-id": "c1d186962fbde069", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z WARN m2 failed to reach the peer URL {"member": "m2", "address": "http://localhost:32381/version", "remote-member-id": "3b13bbeaeef551eb", "error": "Get \"http://localhost:32381/version\": dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z WARN m2 failed to get version {"member": "m2", "remote-member-id": "3b13bbeaeef551eb", "error": "Get \"http://localhost:32381/version\": dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z INFO m1 started stream writer with remote peer {"member": "m1", "local-member-id": "c1d186962fbde069", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z INFO m1 started stream writer with remote peer {"member": "m1", "local-member-id": "c1d186962fbde069", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z INFO m1 started remote peer {"member": "m1", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z INFO m1 added remote peer {"member": "m1", "local-member-id": "c1d186962fbde069", "remote-peer-id": "3b13bbeaeef551eb", "remote-peer-urls": ["http://localhost:32381"]}
2021/05/16 18:49:34 2021-05-16T18:49:34.402Z INFO m1 started stream reader with remote peer {"member": "m1", "stream-reader-type": "stream MsgApp v2", "local-member-id": "c1d186962fbde069", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:34 2021-05-16T18:49:34.403Z INFO m1 started stream reader with remote peer {"member": "m1", "stream-reader-type": "stream Message", "local-member-id": "c1d186962fbde069", "remote-peer-id": "3b13bbeaeef551eb"}
2021/05/16 18:49:38 2021-05-16T18:49:38.404Z WARN m2 failed to reach the peer URL {"member": "m2", "address": "http://localhost:32381/version", "remote-member-id": "3b13bbeaeef551eb", "error": "Get \"http://localhost:32381/version\": dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 18:49:38 2021-05-16T18:49:38.404Z WARN m2 failed to get version {"member": "m2", "remote-member-id": "3b13bbeaeef551eb", "error": "Get \"http://localhost:32381/version\": dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 18:49:39 2021-05-16T18:49:39.400Z WARN m2 prober detected unhealthy status {"member": "m2", "round-tripper-name": "ROUND_TRIPPER_RAFT_MESSAGE", "remote-peer-id": "3b13bbeaeef551eb", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 18:49:39 2021-05-16T18:49:39.401Z WARN m0 prober detected unhealthy status {"member": "m0", "round-tripper-name": "ROUND_TRIPPER_RAFT_MESSAGE", "remote-peer-id": "3b13bbeaeef551eb", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 18:49:39 2021-05-16T18:49:39.401Z WARN m0 prober detected unhealthy status {"member": "m0", "round-tripper-name": "ROUND_TRIPPER_SNAPSHOT", "remote-peer-id": "3b13bbeaeef551eb", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 18:49:39 2021-05-16T18:49:39.401Z WARN m2 prober detected unhealthy status {"member": "m2", "round-tripper-name": "ROUND_TRIPPER_SNAPSHOT", "remote-peer-id": "3b13bbeaeef551eb", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
```
This line looks suspicious:
```
2021/05/16 18:49:34 2021-05-16T18:49:34.401Z INFO m2.raft 685c5c71ce9a9328 ignoring conf change {ConfChangeRemoveNode 4256952690501046763 [] 4425483133757641240} at config voters=(7519987121669182248 13966091899056808041 14250974771059047786) learners=(4256952690501046763): possible unapplied conf change at index 25 (applied to 24) {"member": "m2"}
```
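For context on what that message means (my reading of the raft package, not something stated in the logs): raft allows at most one conf change in flight, tracked through a pending index. If a new conf-change proposal arrives while the pending index is still above the applied index, the entry is silently downgraded to an empty normal entry and only this INFO line is emitted — the client never sees an error. A minimal self-contained sketch of that guard; the field and function names here are assumptions for illustration, not raft's real API:
```
package main

import "fmt"

// raftNode sketches raft's "one conf change in flight" bookkeeping; the
// real logic lives in raft.Step's MsgProp handling, and these field names
// are assumptions for illustration.
type raftNode struct {
	pendingConfIndex uint64 // log index of the last proposed conf change
	applied          uint64 // highest log index applied so far
}

// proposeConfChange refuses a conf change while an earlier one may still
// be unapplied; raft then replaces the entry with an empty normal entry,
// so the proposer gets no error -- only the INFO line quoted above.
func (r *raftNode) proposeConfChange(lastIndex uint64) bool {
	if r.pendingConfIndex > r.applied {
		fmt.Printf("ignoring conf change: possible unapplied conf change at index %d (applied to %d)\n",
			r.pendingConfIndex, r.applied)
		return false
	}
	r.pendingConfIndex = lastIndex + 1 // the new conf-change entry's index
	return true
}

func main() {
	r := &raftNode{applied: 24}
	r.proposeConfChange(24) // MemberAdd(learner): accepted, pending at index 25
	r.proposeConfChange(25) // MemberRemove races in: applied is still 24 -> dropped
}
```
Applied to this run: the learner add landed at index 25, and the example's MemberRemove proposal raced in before index 25 was applied (applied still 24), so the removal was dropped without any error reaching the client.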
```
goroutine 1 [select, 14 minutes]:
google.golang.org/grpc/internal/transport.(*Stream).waitOnHeader(0xc0003d70e0)
/home/runner/go/pkg/mod/google.golang.org/grpc@v1.37.0/internal/transport/transport.go:322 +0x99
google.golang.org/grpc/internal/transport.(*Stream).RecvCompress(...)
/home/runner/go/pkg/mod/google.golang.org/grpc@v1.37.0/internal/transport/transport.go:337
google.golang.org/grpc.(*csAttempt).recvMsg(0xc003a3ef00, 0x1025000, 0xc000284440, 0x0, 0x0, 0x0)
/home/runner/go/pkg/mod/google.golang.org/grpc@v1.37.0/stream.go:937 +0x731
google.golang.org/grpc.(*clientStream).RecvMsg.func1(0xc003a3ef00, 0xc0037fcbe0, 0xa)
/home/runner/go/pkg/mod/google.golang.org/grpc@v1.37.0/stream.go:802 +0x46
google.golang.org/grpc.(*clientStream).withRetry(0xc0003d6ea0, 0xc000379298, 0xc000379268, 0xc0037fcbea, 0x9c7a49)
/home/runner/go/pkg/mod/google.golang.org/grpc@v1.37.0/stream.go:660 +0x9f
google.golang.org/grpc.(*clientStream).RecvMsg(0xc0003d6ea0, 0x1025000, 0xc000284440, 0x0, 0x0)
/home/runner/go/pkg/mod/google.golang.org/grpc@v1.37.0/stream.go:801 +0x105
google.golang.org/grpc.invoke(0x11d7738, 0xc0038c1cb0, 0x109b415, 0x22, 0x101aea0, 0xc0038c1c50, 0x1025000, 0xc000284440, 0xc002d5e000, 0xc0002844c0, ...)
/home/runner/go/pkg/mod/google.golang.org/grpc@v1.37.0/call.go:73 +0x142
go.etcd.io/etcd/client/v3.(*Client).unaryClientInterceptor.func1(0x11d76c8, 0xc0038c1cb0, 0x109b415, 0x22, 0x101aea0, 0xc0038c1c50, 0x1025000, 0xc000284440, 0xc002d5e000, 0x10c6d98, ...)
/home/runner/work/etcd/etcd/client/v3/retry_interceptor.go:58 +0x46a
google.golang.org/grpc.(*ClientConn).Invoke(0xc002d5e000, 0x11d76c8, 0xc00011e010, 0x109b415, 0x22, 0x101aea0, 0xc0038c1c50, 0x1025000, 0xc000284440, 0x1823240, ...)
/home/runner/go/pkg/mod/google.golang.org/grpc@v1.37.0/call.go:35 +0x109
go.etcd.io/etcd/api/v3/etcdserverpb.(*clusterClient).MemberRemove(0xc00018e108, 0x11d76c8, 0xc00011e010, 0xc0038c1c50, 0x1823240, 0x3, 0x3, 0xfc4be0, 0x1, 0xc0038c1c50)
/home/runner/work/etcd/etcd/api/etcdserverpb/rpc.pb.go:7083 +0xcf
go.etcd.io/etcd/client/v3.(*retryClusterClient).MemberRemove(0xc0039fd310, 0x11d76c8, 0xc00011e010, 0xc0038c1c50, 0x1823240, 0x3, 0x3, 0xc0032fc4b0, 0x0, 0x0)
/home/runner/work/etcd/etcd/client/v3/retry.go:175 +0x7c
go.etcd.io/etcd/client/v3.(*cluster).MemberRemove(0xc003a42db0, 0x11d76c8, 0xc00011e010, 0x3b13bbeaeef551eb, 0x1, 0x1, 0xc0032fc4b0)
/home/runner/work/etcd/etcd/client/v3/cluster.go:103 +0x88
go.etcd.io/etcd/tests/v3/integration/clientv3/examples_test.ExampleCluster_memberAddAsLearner.func1()
/home/runner/work/etcd/etcd/tests/integration/clientv3/examples/example_cluster_test.go:106 +0x247
go.etcd.io/etcd/tests/v3/integration/clientv3/examples_test.forUnitTestsRunInMockedContext(...)
/home/runner/work/etcd/etcd/tests/integration/clientv3/examples/main_test.go:40
go.etcd.io/etcd/tests/v3/integration/clientv3/examples_test.ExampleCluster_memberAddAsLearner()
/home/runner/work/etcd/etcd/tests/integration/clientv3/examples/example_cluster_test.go:90 +0x2b
testing.runExample(0x1099ea7, 0x21, 0x10c6868, 0x10a96f3, 0x2e, 0x0, 0x0)
/opt/hostedtoolcache/go/1.16.4/x64/src/testing/run_example.go:63 +0x222
testing.runExamples(0xc000379e58, 0x1816a80, 0x18, 0x18, 0xc02079108fcf076d)
/opt/hostedtoolcache/go/1.16.4/x64/src/testing/example.go:44 +0x17a
testing.(*M).Run(0xc0001f0380, 0x0)
/opt/hostedtoolcache/go/1.16.4/x64/src/testing/testing.go:1418 +0x273
go.etcd.io/etcd/tests/v3/integration/clientv3/examples_test.TestMain(0xc0001f0380)
/home/runner/work/etcd/etcd/tests/integration/clientv3/examples/main_test.go:46 +0x48
main.main()
_testmain.go:91 +0x165
```
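The goroutine dump ties the two together: the test has been parked inside `Cluster.MemberRemove` for 14 minutes, consistent with the removal proposal having been downgraded to a no-op — no response ever comes back, so the client blocks until the 15m test timeout. A hedged sketch of how the call could be bounded so the drop surfaces as an error instead of a hang (the timeout value, helper name, and retry comment are my assumptions, not the test's actual code):
```
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// removeMemberBounded bounds MemberRemove with a deadline so a conf-change
// proposal that raft silently drops surfaces as context.DeadlineExceeded
// instead of hanging the whole test binary. (Sketch only, not etcd's code.)
func removeMemberBounded(cli *clientv3.Client, memberID uint64) error {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	_, err := cli.MemberRemove(ctx, memberID)
	return err
}

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// 0x3b13bbeaeef551eb is the learner ID from the log above.
	if err := removeMemberBounded(cli, 0x3b13bbeaeef551eb); err != nil {
		// A deadline error here points at the dropped proposal; the caller
		// could retry once the learner-add conf change has been applied.
		log.Printf("MemberRemove: %v", err)
	}
}
```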
----------------------------------
```
ptab@ptab ~/corp/etcd% (cd tests && 'env' 'go' 'test' '-timeout=15m' '--race=false' '--cpu=4' './integration/clientv3/examples' --count=1 -v -run ExampleCluster_memberAddAsLearner| tee log.log)
=== RUN ExampleCluster_memberAddAsLearner
2021/05/16 22:58:56 Working directory '/home/ptab/corp/etcd/tests/integration/clientv3/examples' expected to be in temp-dir ('/tmp').Have you executed integration.BeforeTest(t) ?
2021/05/16 22:58:56 2021-05-16T22:58:56.587+0200 INFO m0 LISTEN GRPC {"member": "m0", "m.grpcAddr": "localhost:m0", "m.Name": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.587+0200 INFO m1 LISTEN GRPC {"member": "m1", "m.grpcAddr": "localhost:m1", "m.Name": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.587+0200 INFO m2 LISTEN GRPC {"member": "m2", "m.grpcAddr": "localhost:m2", "m.Name": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.587+0200 INFO m2 launching a member {"member": "m2", "name": "m2", "advertise-peer-urls": ["unix://127.0.0.1:210053101996"], "listen-client-urls": ["unix://127.0.0.1:210063101996"], "grpc-address": "unix://localhost:m20"}
2021/05/16 22:58:56 2021-05-16T22:58:56.587+0200 INFO m1 launching a member {"member": "m1", "name": "m1", "advertise-peer-urls": ["unix://127.0.0.1:210033101996"], "listen-client-urls": ["unix://127.0.0.1:210043101996"], "grpc-address": "unix://localhost:m10"}
2021/05/16 22:58:56 2021-05-16T22:58:56.587+0200 INFO m0 launching a member {"member": "m0", "name": "m0", "advertise-peer-urls": ["unix://127.0.0.1:210013101996"], "listen-client-urls": ["unix://127.0.0.1:210023101996"], "grpc-address": "unix://localhost:m00"}
2021/05/16 22:58:56 2021-05-16T22:58:56.605+0200 INFO m1 opened backend db {"member": "m1", "path": "/tmp/lazy_cluster813018228/etcd819471171/member/snap/db", "took": "17.295908ms"}
2021/05/16 22:58:56 2021-05-16T22:58:56.605+0200 INFO m0 opened backend db {"member": "m0", "path": "/tmp/lazy_cluster668773490/etcd777733929/member/snap/db", "took": "17.274012ms"}
2021/05/16 22:58:56 2021-05-16T22:58:56.605+0200 INFO m2 opened backend db {"member": "m2", "path": "/tmp/lazy_cluster025711558/etcd404414061/member/snap/db", "took": "17.357534ms"}
2021/05/16 22:58:56 2021-05-16T22:58:56.637+0200 INFO m2 starting local member {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "cluster-id": "3c5b384f03b0f420"}
2021/05/16 22:58:56 2021-05-16T22:58:56.637+0200 INFO m2.raft f22d0a0f9f1b431b switched to configuration voters=() {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.637+0200 INFO m2.raft f22d0a0f9f1b431b became follower at term 0 {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.637+0200 INFO m2.raft newRaft f22d0a0f9f1b431b [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.637+0200 INFO m2.raft f22d0a0f9f1b431b became follower at term 1 {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.637+0200 INFO m2.raft f22d0a0f9f1b431b switched to configuration voters=(11609101907503085289) {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.637+0200 INFO m2.raft f22d0a0f9f1b431b switched to configuration voters=(11609101907503085289 16575215055615236278) {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.637+0200 INFO m2.raft f22d0a0f9f1b431b switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m1 starting local member {"member": "m1", "local-member-id": "e606fe4619a718b6", "cluster-id": "3c5b384f03b0f420"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m1.raft e606fe4619a718b6 switched to configuration voters=() {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m1.raft e606fe4619a718b6 became follower at term 0 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m1.raft newRaft e606fe4619a718b6 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m1.raft e606fe4619a718b6 became follower at term 1 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m1.raft e606fe4619a718b6 switched to configuration voters=(11609101907503085289) {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m1.raft e606fe4619a718b6 switched to configuration voters=(11609101907503085289 16575215055615236278) {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m1.raft e606fe4619a718b6 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m0 starting local member {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "cluster-id": "3c5b384f03b0f420"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m0.raft a11bd0ad5e07f6e9 switched to configuration voters=() {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.640+0200 INFO m0.raft a11bd0ad5e07f6e9 became follower at term 0 {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.641+0200 INFO m0.raft newRaft a11bd0ad5e07f6e9 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.641+0200 INFO m0.raft a11bd0ad5e07f6e9 became follower at term 1 {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.641+0200 INFO m0.raft a11bd0ad5e07f6e9 switched to configuration voters=(11609101907503085289) {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.641+0200 INFO m0.raft a11bd0ad5e07f6e9 switched to configuration voters=(11609101907503085289 16575215055615236278) {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.641+0200 INFO m0.raft a11bd0ad5e07f6e9 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.648+0200 WARN m2 simple token is not cryptographically signed {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.651+0200 WARN m1 simple token is not cryptographically signed {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.651+0200 WARN m0 simple token is not cryptographically signed {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.652+0200 INFO m2 kvstore restored {"member": "m2", "current-rev": 1}
2021/05/16 22:58:56 2021-05-16T22:58:56.655+0200 INFO m1 kvstore restored {"member": "m1", "current-rev": 1}
2021/05/16 22:58:56 2021-05-16T22:58:56.658+0200 INFO m0 kvstore restored {"member": "m0", "current-rev": 1}
2021/05/16 22:58:56 2021-05-16T22:58:56.658+0200 INFO m2 enabled backend quota with default value {"member": "m2", "quota-name": "v3-applier", "quota-size-bytes": 2147483648, "quota-size": "2.1 GB"}
2021/05/16 22:58:56 2021-05-16T22:58:56.661+0200 INFO m2 starting remote peer {"member": "m2", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.661+0200 INFO m2 started HTTP pipelining with remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.662+0200 INFO m2 started stream writer with remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.663+0200 INFO m2 started stream writer with remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.663+0200 INFO m1 starting remote peer {"member": "m1", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.663+0200 INFO m1 started HTTP pipelining with remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.664+0200 INFO m1 started stream writer with remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.665+0200 INFO m2 started remote peer {"member": "m2", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.665+0200 INFO m2 added remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9", "remote-peer-urls": ["unix://127.0.0.1:210013101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.665+0200 INFO m2 starting remote peer {"member": "m2", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.665+0200 INFO m2 started HTTP pipelining with remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.665+0200 INFO m2 started stream reader with remote peer {"member": "m2", "stream-reader-type": "stream Message", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.666+0200 INFO m2 started stream reader with remote peer {"member": "m2", "stream-reader-type": "stream MsgApp v2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.666+0200 INFO m0 starting remote peer {"member": "m0", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.666+0200 INFO m0 started HTTP pipelining with remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.669+0200 INFO m2 started stream writer with remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.670+0200 INFO m1 started remote peer {"member": "m1", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.670+0200 INFO m1 added remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9", "remote-peer-urls": ["unix://127.0.0.1:210013101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.670+0200 INFO m1 starting remote peer {"member": "m1", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.670+0200 INFO m1 started HTTP pipelining with remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.671+0200 INFO m0 started remote peer {"member": "m0", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.672+0200 INFO m0 added remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6", "remote-peer-urls": ["unix://127.0.0.1:210033101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.672+0200 INFO m0 starting remote peer {"member": "m0", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.672+0200 INFO m0 started HTTP pipelining with remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.673+0200 INFO m1 started stream writer with remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.673+0200 INFO m1 started stream writer with remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.673+0200 INFO m0 started stream reader with remote peer {"member": "m0", "stream-reader-type": "stream MsgApp v2", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.673+0200 INFO m1 started stream reader with remote peer {"member": "m1", "stream-reader-type": "stream Message", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.673+0200 INFO m0 started stream reader with remote peer {"member": "m0", "stream-reader-type": "stream Message", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.672+0200 INFO m1 started stream reader with remote peer {"member": "m1", "stream-reader-type": "stream MsgApp v2", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.673+0200 INFO m2 started stream writer with remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.673+0200 INFO m1 started stream writer with remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.673+0200 INFO m0 started stream writer with remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m0 started stream writer with remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m2 started remote peer {"member": "m2", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m2 added remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6", "remote-peer-urls": ["unix://127.0.0.1:210033101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m2 starting etcd server {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "local-server-version": "3.5.0-alpha.0", "cluster-version": "to_be_decided"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m1 started remote peer {"member": "m1", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m1 added remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b", "remote-peer-urls": ["unix://127.0.0.1:210053101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m1 starting etcd server {"member": "m1", "local-member-id": "e606fe4619a718b6", "local-server-version": "3.5.0-alpha.0", "cluster-version": "to_be_decided"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m1 started stream reader with remote peer {"member": "m1", "stream-reader-type": "stream MsgApp v2", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m1 started stream reader with remote peer {"member": "m1", "stream-reader-type": "stream Message", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m0 started remote peer {"member": "m0", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m0 added remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b", "remote-peer-urls": ["unix://127.0.0.1:210053101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m0 starting etcd server {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "local-server-version": "3.5.0-alpha.0", "cluster-version": "to_be_decided"}
2021/05/16 22:58:56 2021-05-16T22:58:56.674+0200 INFO m0 started stream writer with remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.675+0200 INFO m0 started stream writer with remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.675+0200 INFO m0 started stream reader with remote peer {"member": "m0", "stream-reader-type": "stream MsgApp v2", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.675+0200 INFO m0 started stream reader with remote peer {"member": "m0", "stream-reader-type": "stream Message", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.675+0200 INFO m1 starting initial election tick advance {"member": "m1", "election-ticks": 10}
2021/05/16 22:58:56 2021-05-16T22:58:56.676+0200 INFO m1.raft e606fe4619a718b6 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.676+0200 INFO m1 added member {"member": "m1", "cluster-id": "3c5b384f03b0f420", "local-member-id": "e606fe4619a718b6", "added-peer-id": "a11bd0ad5e07f6e9", "added-peer-peer-urls": ["unix://127.0.0.1:210013101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.676+0200 INFO m0 starting initial election tick advance {"member": "m0", "election-ticks": 10}
2021/05/16 22:58:56 2021-05-16T22:58:56.676+0200 INFO m1.raft e606fe4619a718b6 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.677+0200 INFO m1 added member {"member": "m1", "cluster-id": "3c5b384f03b0f420", "local-member-id": "e606fe4619a718b6", "added-peer-id": "e606fe4619a718b6", "added-peer-peer-urls": ["unix://127.0.0.1:210033101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.677+0200 INFO m1.raft e606fe4619a718b6 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.677+0200 INFO m1 added member {"member": "m1", "cluster-id": "3c5b384f03b0f420", "local-member-id": "e606fe4619a718b6", "added-peer-id": "f22d0a0f9f1b431b", "added-peer-peer-urls": ["unix://127.0.0.1:210053101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.677+0200 INFO m0.raft a11bd0ad5e07f6e9 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.677+0200 INFO m2 starting initial election tick advance {"member": "m2", "election-ticks": 10}
2021/05/16 22:58:56 2021-05-16T22:58:56.677+0200 INFO m2 started stream reader with remote peer {"member": "m2", "stream-reader-type": "stream Message", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.677+0200 INFO m2.raft f22d0a0f9f1b431b switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.677+0200 INFO m0 added member {"member": "m0", "cluster-id": "3c5b384f03b0f420", "local-member-id": "a11bd0ad5e07f6e9", "added-peer-id": "a11bd0ad5e07f6e9", "added-peer-peer-urls": ["unix://127.0.0.1:210013101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.678+0200 INFO m2 added member {"member": "m2", "cluster-id": "3c5b384f03b0f420", "local-member-id": "f22d0a0f9f1b431b", "added-peer-id": "a11bd0ad5e07f6e9", "added-peer-peer-urls": ["unix://127.0.0.1:210013101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.678+0200 INFO m0.raft a11bd0ad5e07f6e9 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.678+0200 INFO m2.raft f22d0a0f9f1b431b switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.678+0200 INFO m0 added member {"member": "m0", "cluster-id": "3c5b384f03b0f420", "local-member-id": "a11bd0ad5e07f6e9", "added-peer-id": "e606fe4619a718b6", "added-peer-peer-urls": ["unix://127.0.0.1:210033101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.678+0200 INFO m2 added member {"member": "m2", "cluster-id": "3c5b384f03b0f420", "local-member-id": "f22d0a0f9f1b431b", "added-peer-id": "e606fe4619a718b6", "added-peer-peer-urls": ["unix://127.0.0.1:210033101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.678+0200 INFO m0.raft a11bd0ad5e07f6e9 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.678+0200 INFO m2.raft f22d0a0f9f1b431b switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.678+0200 INFO m0 added member {"member": "m0", "cluster-id": "3c5b384f03b0f420", "local-member-id": "a11bd0ad5e07f6e9", "added-peer-id": "f22d0a0f9f1b431b", "added-peer-peer-urls": ["unix://127.0.0.1:210053101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.678+0200 INFO m2 added member {"member": "m2", "cluster-id": "3c5b384f03b0f420", "local-member-id": "f22d0a0f9f1b431b", "added-peer-id": "f22d0a0f9f1b431b", "added-peer-peer-urls": ["unix://127.0.0.1:210053101996"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.677+0200 INFO m2 started stream reader with remote peer {"member": "m2", "stream-reader-type": "stream MsgApp v2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.681+0200 INFO m2 set message encoder {"member": "m2", "from": "f22d0a0f9f1b431b", "to": "a11bd0ad5e07f6e9", "stream-type": "stream Message"}
2021/05/16 22:58:56 2021-05-16T22:58:56.681+0200 INFO m0 set message encoder {"member": "m0", "from": "a11bd0ad5e07f6e9", "to": "f22d0a0f9f1b431b", "stream-type": "stream Message"}
2021/05/16 22:58:56 2021-05-16T22:58:56.681+0200 INFO m0 peer became active {"member": "m0", "peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.681+0200 INFO m0 established TCP streaming connection with remote peer {"member": "m0", "stream-writer-type": "stream Message", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.681+0200 INFO m2 peer became active {"member": "m2", "peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.681+0200 INFO m2 established TCP streaming connection with remote peer {"member": "m2", "stream-writer-type": "stream Message", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.682+0200 INFO m2 launched a member {"member": "m2", "name": "m2", "advertise-peer-urls": ["unix://127.0.0.1:210053101996"], "listen-client-urls": ["unix://127.0.0.1:210063101996"], "grpc-address": "unix://localhost:m20"}
2021/05/16 22:58:56 2021-05-16T22:58:56.682+0200 INFO m1 launched a member {"member": "m1", "name": "m1", "advertise-peer-urls": ["unix://127.0.0.1:210033101996"], "listen-client-urls": ["unix://127.0.0.1:210043101996"], "grpc-address": "unix://localhost:m10"}
2021/05/16 22:58:56 2021-05-16T22:58:56.682+0200 INFO m0 established TCP streaming connection with remote peer {"member": "m0", "stream-reader-type": "stream Message", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.682+0200 INFO m2 established TCP streaming connection with remote peer {"member": "m2", "stream-reader-type": "stream Message", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.682+0200 INFO m2 set message encoder {"member": "m2", "from": "f22d0a0f9f1b431b", "to": "e606fe4619a718b6", "stream-type": "stream MsgApp v2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.683+0200 INFO m2 peer became active {"member": "m2", "peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.683+0200 INFO m2 established TCP streaming connection with remote peer {"member": "m2", "stream-writer-type": "stream MsgApp v2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.683+0200 INFO m2 set message encoder {"member": "m2", "from": "f22d0a0f9f1b431b", "to": "e606fe4619a718b6", "stream-type": "stream Message"}
2021/05/16 22:58:56 2021-05-16T22:58:56.683+0200 INFO m2 established TCP streaming connection with remote peer {"member": "m2", "stream-writer-type": "stream Message", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.685+0200 INFO m2 set message encoder {"member": "m2", "from": "f22d0a0f9f1b431b", "to": "a11bd0ad5e07f6e9", "stream-type": "stream MsgApp v2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.685+0200 INFO m2 established TCP streaming connection with remote peer {"member": "m2", "stream-writer-type": "stream MsgApp v2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.685+0200 INFO m0 launched a member {"member": "m0", "name": "m0", "advertise-peer-urls": ["unix://127.0.0.1:210013101996"], "listen-client-urls": ["unix://127.0.0.1:210023101996"], "grpc-address": "unix://localhost:m00"}
2021/05/16 22:58:56 2021-05-16T22:58:56.686+0200 INFO m2 established TCP streaming connection with remote peer {"member": "m2", "stream-reader-type": "stream MsgApp v2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.686+0200 INFO m1 peer became active {"member": "m1", "peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.686+0200 INFO m1 established TCP streaming connection with remote peer {"member": "m1", "stream-reader-type": "stream MsgApp v2", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.686+0200 INFO m2 established TCP streaming connection with remote peer {"member": "m2", "stream-reader-type": "stream MsgApp v2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 set message encoder {"member": "m1", "from": "e606fe4619a718b6", "to": "a11bd0ad5e07f6e9", "stream-type": "stream MsgApp v2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 peer became active {"member": "m1", "peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 established TCP streaming connection with remote peer {"member": "m1", "stream-writer-type": "stream MsgApp v2", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m0 set message encoder {"member": "m0", "from": "a11bd0ad5e07f6e9", "to": "f22d0a0f9f1b431b", "stream-type": "stream MsgApp v2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m0 established TCP streaming connection with remote peer {"member": "m0", "stream-writer-type": "stream MsgApp v2", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 set message encoder {"member": "m1", "from": "e606fe4619a718b6", "to": "f22d0a0f9f1b431b", "stream-type": "stream MsgApp v2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 established TCP streaming connection with remote peer {"member": "m1", "stream-writer-type": "stream MsgApp v2", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 set message encoder {"member": "m1", "from": "e606fe4619a718b6", "to": "a11bd0ad5e07f6e9", "stream-type": "stream Message"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 established TCP streaming connection with remote peer {"member": "m1", "stream-writer-type": "stream Message", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 set message encoder {"member": "m1", "from": "e606fe4619a718b6", "to": "f22d0a0f9f1b431b", "stream-type": "stream Message"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 established TCP streaming connection with remote peer {"member": "m1", "stream-writer-type": "stream Message", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m2 established TCP streaming connection with remote peer {"member": "m2", "stream-reader-type": "stream Message", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.687+0200 INFO m1 established TCP streaming connection with remote peer {"member": "m1", "stream-reader-type": "stream MsgApp v2", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.688+0200 INFO m0 set message encoder {"member": "m0", "from": "a11bd0ad5e07f6e9", "to": "e606fe4619a718b6", "stream-type": "stream MsgApp v2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.688+0200 INFO m0 peer became active {"member": "m0", "peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.688+0200 INFO m0 established TCP streaming connection with remote peer {"member": "m0", "stream-writer-type": "stream MsgApp v2", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.688+0200 INFO m0 established TCP streaming connection with remote peer {"member": "m0", "stream-reader-type": "stream MsgApp v2", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.688+0200 INFO m0 established TCP streaming connection with remote peer {"member": "m0", "stream-reader-type": "stream Message", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.689+0200 INFO m0 established TCP streaming connection with remote peer {"member": "m0", "stream-reader-type": "stream MsgApp v2", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.690+0200 INFO m0 set message encoder {"member": "m0", "from": "a11bd0ad5e07f6e9", "to": "e606fe4619a718b6", "stream-type": "stream Message"}
2021/05/16 22:58:56 2021-05-16T22:58:56.690+0200 INFO m0 established TCP streaming connection with remote peer {"member": "m0", "stream-writer-type": "stream Message", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "e606fe4619a718b6"}
2021/05/16 22:58:56 2021-05-16T22:58:56.690+0200 INFO m1 established TCP streaming connection with remote peer {"member": "m1", "stream-reader-type": "stream Message", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "a11bd0ad5e07f6e9"}
2021/05/16 22:58:56 2021-05-16T22:58:56.690+0200 INFO m1 established TCP streaming connection with remote peer {"member": "m1", "stream-reader-type": "stream Message", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "f22d0a0f9f1b431b"}
2021/05/16 22:58:56 2021-05-16T22:58:56.726+0200 INFO m1 initialized peer connections; fast-forwarding election ticks {"member": "m1", "local-member-id": "e606fe4619a718b6", "forward-ticks": 8, "forward-duration": "80ms", "election-ticks": 10, "election-timeout": "100ms", "active-remote-members": 2}
2021/05/16 22:58:56 2021-05-16T22:58:56.726+0200 INFO m1.raft e606fe4619a718b6 is starting a new election at term 1 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.726+0200 INFO m1.raft e606fe4619a718b6 became candidate at term 2 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.726+0200 INFO m1.raft e606fe4619a718b6 received MsgVoteResp from e606fe4619a718b6 at term 2 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.726+0200 INFO m1.raft e606fe4619a718b6 [logterm: 1, index: 3] sent MsgVote request to a11bd0ad5e07f6e9 at term 2 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.726+0200 INFO m1.raft e606fe4619a718b6 [logterm: 1, index: 3] sent MsgVote request to f22d0a0f9f1b431b at term 2 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.726+0200 INFO m0 initialized peer connections; fast-forwarding election ticks {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "forward-ticks": 8, "forward-duration": "80ms", "election-ticks": 10, "election-timeout": "100ms", "active-remote-members": 2}
2021/05/16 22:58:56 2021-05-16T22:58:56.728+0200 INFO m2 initialized peer connections; fast-forwarding election ticks {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "forward-ticks": 8, "forward-duration": "80ms", "election-ticks": 10, "election-timeout": "100ms", "active-remote-members": 2}
2021/05/16 22:58:56 2021-05-16T22:58:56.728+0200 INFO m2.raft f22d0a0f9f1b431b [term: 1] received a MsgVote message with higher term from e606fe4619a718b6 [term: 2] {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.728+0200 INFO m2.raft f22d0a0f9f1b431b became follower at term 2 {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.728+0200 INFO m2.raft f22d0a0f9f1b431b [logterm: 1, index: 3, vote: 0] cast MsgVote for e606fe4619a718b6 [logterm: 1, index: 3] at term 2 {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.728+0200 INFO m0.raft a11bd0ad5e07f6e9 [term: 1] received a MsgVote message with higher term from e606fe4619a718b6 [term: 2] {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.728+0200 INFO m0.raft a11bd0ad5e07f6e9 became follower at term 2 {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.728+0200 INFO m0.raft a11bd0ad5e07f6e9 [logterm: 1, index: 3, vote: 0] cast MsgVote for e606fe4619a718b6 [logterm: 1, index: 3] at term 2 {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.730+0200 INFO m1.raft e606fe4619a718b6 received MsgVoteResp from a11bd0ad5e07f6e9 at term 2 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.730+0200 INFO m1.raft e606fe4619a718b6 has received 2 MsgVoteResp votes and 0 vote rejections {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.730+0200 INFO m1.raft e606fe4619a718b6 became leader at term 2 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.730+0200 INFO m1.raft raft.node: e606fe4619a718b6 elected leader e606fe4619a718b6 at term 2 {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.730+0200 INFO m2.raft raft.node: f22d0a0f9f1b431b elected leader e606fe4619a718b6 at term 2 {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.730+0200 INFO m0.raft raft.node: a11bd0ad5e07f6e9 elected leader e606fe4619a718b6 at term 2 {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.735+0200 INFO m1 setting up initial cluster version {"member": "m1", "cluster-version": "3.5"}
2021/05/16 22:58:56 2021-05-16T22:58:56.738+0200 INFO m1 set initial cluster version {"member": "m1", "cluster-id": "3c5b384f03b0f420", "local-member-id": "e606fe4619a718b6", "cluster-version": "3.5"}
2021/05/16 22:58:56 2021-05-16T22:58:56.739+0200 INFO m1 enabled capabilities for version {"member": "m1", "cluster-version": "3.5"}
2021/05/16 22:58:56 2021-05-16T22:58:56.739+0200 INFO m0 set initial cluster version {"member": "m0", "cluster-id": "3c5b384f03b0f420", "local-member-id": "a11bd0ad5e07f6e9", "cluster-version": "3.5"}
2021/05/16 22:58:56 2021-05-16T22:58:56.739+0200 INFO m2 set initial cluster version {"member": "m2", "cluster-id": "3c5b384f03b0f420", "local-member-id": "f22d0a0f9f1b431b", "cluster-version": "3.5"}
2021/05/16 22:58:56 2021-05-16T22:58:56.739+0200 INFO m0 published local member to cluster through raft {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "local-member-attributes": "{Name:m0 ClientURLs:[unix://127.0.0.1:210023101996]}", "request-path": "/0/members/a11bd0ad5e07f6e9/attributes", "cluster-id": "3c5b384f03b0f420", "publish-timeout": "5.2s"}
2021/05/16 22:58:56 2021-05-16T22:58:56.739+0200 INFO m1 published local member to cluster through raft {"member": "m1", "local-member-id": "e606fe4619a718b6", "local-member-attributes": "{Name:m1 ClientURLs:[unix://127.0.0.1:210043101996]}", "request-path": "/0/members/e606fe4619a718b6/attributes", "cluster-id": "3c5b384f03b0f420", "publish-timeout": "5.2s"}
2021/05/16 22:58:56 2021-05-16T22:58:56.739+0200 INFO m2 published local member to cluster through raft {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "local-member-attributes": "{Name:m2 ClientURLs:[unix://127.0.0.1:210063101996]}", "request-path": "/0/members/f22d0a0f9f1b431b/attributes", "cluster-id": "3c5b384f03b0f420", "publish-timeout": "5.2s"}
2021/05/16 22:58:56 - m0 -> a11bd0ad5e07f6e9 (unix://localhost:m00)
2021/05/16 22:58:56 - m1 -> e606fe4619a718b6 (unix://localhost:m10)
2021/05/16 22:58:56 - m2 -> f22d0a0f9f1b431b (unix://localhost:m20)
2021/05/16 22:58:56 2021-05-16T22:58:56.782+0200 INFO m1.raft e606fe4619a718b6 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) learners=(9777699696455160727) {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.782+0200 INFO m2.raft f22d0a0f9f1b431b switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) learners=(9777699696455160727) {"member": "m2"}
2021/05/16 22:58:56 2021-05-16T22:58:56.782+0200 INFO m1 added member {"member": "m1", "cluster-id": "3c5b384f03b0f420", "local-member-id": "e606fe4619a718b6", "added-peer-id": "87b15e11c7a79397", "added-peer-peer-urls": ["http://localhost:32381"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.782+0200 INFO m1 starting remote peer {"member": "m1", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.782+0200 INFO m1 started HTTP pipelining with remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.782+0200 INFO m2 added member {"member": "m2", "cluster-id": "3c5b384f03b0f420", "local-member-id": "f22d0a0f9f1b431b", "added-peer-id": "87b15e11c7a79397", "added-peer-peer-urls": ["http://localhost:32381"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.782+0200 INFO m2 starting remote peer {"member": "m2", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.782+0200 INFO m2 started HTTP pipelining with remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.783+0200 INFO m1 started remote peer {"member": "m1", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.783+0200 INFO m1 added remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "87b15e11c7a79397", "remote-peer-urls": ["http://localhost:32381"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.783+0200 INFO m0.raft a11bd0ad5e07f6e9 switched to configuration voters=(11609101907503085289 16575215055615236278 17450615193340691227) learners=(9777699696455160727) {"member": "m0"}
2021/05/16 22:58:56 2021-05-16T22:58:56.784+0200 INFO m1 started stream writer with remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.784+0200 INFO m1 started stream writer with remote peer {"member": "m1", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.784+0200 INFO m1 started stream reader with remote peer {"member": "m1", "stream-reader-type": "stream MsgApp v2", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.785+0200 INFO m0 added member {"member": "m0", "cluster-id": "3c5b384f03b0f420", "local-member-id": "a11bd0ad5e07f6e9", "added-peer-id": "87b15e11c7a79397", "added-peer-peer-urls": ["http://localhost:32381"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.785+0200 INFO m0 starting remote peer {"member": "m0", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.786+0200 INFO m0 started HTTP pipelining with remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.786+0200 INFO m2 started remote peer {"member": "m2", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.786+0200 INFO m2 added remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "87b15e11c7a79397", "remote-peer-urls": ["http://localhost:32381"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.786+0200 INFO m2 started stream writer with remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.787+0200 INFO m2 started stream writer with remote peer {"member": "m2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.787+0200 INFO m1 started stream reader with remote peer {"member": "m1", "stream-reader-type": "stream Message", "local-member-id": "e606fe4619a718b6", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.787+0200 INFO m0 started remote peer {"member": "m0", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.787+0200 INFO m2 started stream reader with remote peer {"member": "m2", "stream-reader-type": "stream MsgApp v2", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.787+0200 INFO m2 started stream reader with remote peer {"member": "m2", "stream-reader-type": "stream Message", "local-member-id": "f22d0a0f9f1b431b", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.788+0200 INFO m0 added remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "87b15e11c7a79397", "remote-peer-urls": ["http://localhost:32381"]}
2021/05/16 22:58:56 2021-05-16T22:58:56.788+0200 INFO m0 started stream writer with remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.789+0200 INFO m0 started stream writer with remote peer {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.789+0200 INFO m0 started stream reader with remote peer {"member": "m0", "stream-reader-type": "stream MsgApp v2", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.789+0200 INFO m0 started stream reader with remote peer {"member": "m0", "stream-reader-type": "stream Message", "local-member-id": "a11bd0ad5e07f6e9", "remote-peer-id": "87b15e11c7a79397"}
2021/05/16 22:58:56 2021-05-16T22:58:56.789+0200 INFO m0 applied a configuration change through raft {"member": "m0", "local-member-id": "a11bd0ad5e07f6e9", "raft-conf-change": "ConfChangeAddLearnerNode", "raft-conf-change-node-id": "87b15e11c7a79397"}
!!! 2021/05/16 22:58:56 2021-05-16T22:58:56.790+0200 INFO m1.raft e606fe4619a718b6 ignoring conf change {ConfChangeRemoveNode 9777699696455160727 [] 17791885354803724549} at config voters=(11609101907503085289 16575215055615236278 17450615193340691227) learners=(9777699696455160727): possible unapplied conf change at index 9 (applied to 8) {"member": "m1"}
2021/05/16 22:58:56 2021-05-16T22:58:56.794+0200 WARN m1 failed to reach the peer URL {"member": "m1", "address": "http://localhost:32381/version", "remote-member-id": "87b15e11c7a79397", "error": "Get \"http://localhost:32381/version\": dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 22:58:56 2021-05-16T22:58:56.794+0200 WARN m1 failed to get version {"member": "m1", "remote-member-id": "87b15e11c7a79397", "error": "Get \"http://localhost:32381/version\": dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 22:59:00 2021-05-16T22:59:00.797+0200 WARN m1 failed to reach the peer URL {"member": "m1", "address": "http://localhost:32381/version", "remote-member-id": "87b15e11c7a79397", "error": "Get \"http://localhost:32381/version\": dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 22:59:00 2021-05-16T22:59:00.798+0200 WARN m1 failed to get version {"member": "m1", "remote-member-id": "87b15e11c7a79397", "error": "Get \"http://localhost:32381/version\": dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 22:59:01 2021-05-16T22:59:01.784+0200 WARN m1 prober detected unhealthy status {"member": "m1", "round-tripper-name": "ROUND_TRIPPER_RAFT_MESSAGE", "remote-peer-id": "87b15e11c7a79397", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 22:59:01 2021-05-16T22:59:01.787+0200 WARN m1 prober detected unhealthy status {"member": "m1", "round-tripper-name": "ROUND_TRIPPER_SNAPSHOT", "remote-peer-id": "87b15e11c7a79397", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 22:59:01 2021-05-16T22:59:01.788+0200 WARN m2 prober detected unhealthy status {"member": "m2", "round-tripper-name": "ROUND_TRIPPER_SNAPSHOT", "remote-peer-id": "87b15e11c7a79397", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 22:59:01 2021-05-16T22:59:01.788+0200 WARN m2 prober detected unhealthy status {"member": "m2", "round-tripper-name": "ROUND_TRIPPER_RAFT_MESSAGE", "remote-peer-id": "87b15e11c7a79397", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 22:59:01 2021-05-16T22:59:01.789+0200 WARN m0 prober detected unhealthy status {"member": "m0", "round-tripper-name": "ROUND_TRIPPER_SNAPSHOT", "remote-peer-id": "87b15e11c7a79397", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
2021/05/16 22:59:01 2021-05-16T22:59:01.790+0200 WARN m0 prober detected unhealthy status {"member": "m0", "round-tripper-name": "ROUND_TRIPPER_RAFT_MESSAGE", "remote-peer-id": "87b15e11c7a79397", "rtt": "0s", "error": "dial tcp [::1]:32381: connect: connection refused"}
```
|
test
| 1
|
103,966
| 22,534,119,974
|
IssuesEvent
|
2022-06-25 01:19:36
|
macder/medusa-fulfillment-shippo
|
https://api.github.com/repos/macder/medusa-fulfillment-shippo
|
closed
|
Dev - automate config source references
|
code improvement chore
|
Eliminate the manual step of changing the config reference (standalone localhost vs. the medusa package).
|
1.0
|
non_test
| 0
|
446,168
| 12,840,311,695
|
IssuesEvent
|
2020-07-07 20:49:28
|
ClangBuiltLinux/linux
|
https://api.github.com/repos/ClangBuiltLinux/linux
|
opened
|
-Wsometimes-uninitialized in drivers/cpufreq/intel_pstate.c
|
-Wsometimes-uninitialized [ARCH] x86_64 [BUG] linux-next good first issue low priority
|
linux-next x86 defconfig
```
drivers/cpufreq/intel_pstate.c:720:6: warning: variable 'epp' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]
if (ret < 0) {
^~~~~~~
drivers/cpufreq/intel_pstate.c:731:63: note: uninitialized use occurs here
ret = intel_pstate_set_energy_pref_index(cpu_data, ret, raw, epp);
^~~
drivers/cpufreq/intel_pstate.c:720:2: note: remove the 'if' if its condition is always true
if (ret < 0) {
^~~~~~~~~~~~~
drivers/cpufreq/intel_pstate.c:712:9: note: initialize the variable 'epp' to silence this warning
u32 epp;
^
= 0
```
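For illustration, a minimal userspace sketch of the control-flow shape clang flags here; the helper names `read_epp_msr` and `set_energy_pref_index` below are hypothetical stand-ins, not the driver's actual functions. The point is that `epp` is only assigned on the `ret < 0` path, so whenever the call succeeds it reaches the later use uninitialized; clang's suggested initializer silences the warning, though whether zero is the semantically correct value is for the driver authors to decide.
```c
#include <stdio.h>

typedef unsigned int u32; /* kernel-style alias, just for this sketch */

/* Hypothetical stand-ins for the driver's helpers. */
static int read_epp_msr(void)
{
	return 3; /* pretend the read succeeded and returned a raw EPP value */
}

static int set_energy_pref_index(int index, u32 epp)
{
	return printf("index=%d epp=%u\n", index, epp);
}

int main(void)
{
	u32 epp = 0; /* clang's suggested fix; drop the "= 0" to reproduce the warning */
	int ret = read_epp_msr();

	if (ret < 0) {
		epp = 0; /* the only branch that wrote 'epp' in the warned-about code */
		ret = 0;
	}
	/* When ret >= 0 we fall through without writing 'epp': exactly the
	 * path -Wsometimes-uninitialized reports. */
	return set_energy_pref_index(ret, epp);
}
```
An alternative to the blanket initializer is assigning `epp` a meaningful value on the success path, which is why warnings like this deserve a human look rather than a mechanical `= 0`.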
|
1.0
|
non_test
| 0
|
27,773
| 4,328,887,218
|
IssuesEvent
|
2016-07-26 15:16:00
|
mautic/mautic
|
https://api.github.com/repos/mautic/mautic
|
closed
|
Release 2 / Contacts: can't save and close
|
Ready To Test
|
## Description
I recently installed Mautic 2.0. When I try to save a new contact, it fails.
It seems to be tied to the Attributions fields. This is the error I get from the log.
Any idea?
[2016-07-05 22:30:14] mautic.NOTICE: PHP Notice: Undefined index: roundmode - in file /home/bluebamboocomar/public_html/tellme2/app/bundles/LeadBundle/Form/Type/LeadType.php - at line 148 [] []
[2016-07-05 22:30:14] mautic.ERROR: Deprecation: Accessing type "submit" by its string name is deprecated since version 2.8 and will be removed in 3.0. Use the fully-qualified type class name "Symfony\Component\Form\Extension\Core\Type\SubmitType" instead. - in file /home/bluebamboocomar/public_html/tellme2/vendor/symfony/form/FormRegistry.php - at line 100 [] []
PHP Version 5.6.23
Thanks in advance.
Federico
|
1.0
|
test
| 1
|
257,649
| 22,198,917,624
|
IssuesEvent
|
2022-06-07 09:25:15
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
opened
|
[CI] StableMasterDisruptionIT testFollowerCheckerDetectsUnresponsiveNodeAfterMasterReelection failing
|
>test-failure :Distributed/Cluster Coordination
|
Similar to #84172, but opening a new issue, since that failure was triaged some time ago.
**Build scan:**
https://gradle-enterprise.elastic.co/s/t5qajyr32lims/tests/:server:internalClusterTest/org.elasticsearch.discovery.StableMasterDisruptionIT/testFollowerCheckerDetectsUnresponsiveNodeAfterMasterReelection
**Reproduction line:**
`./gradlew ':server:internalClusterTest' --tests "org.elasticsearch.discovery.StableMasterDisruptionIT.testFollowerCheckerDetectsUnresponsiveNodeAfterMasterReelection" -Dtests.seed=307D64547B8478A7 -Dtests.locale=mk -Dtests.timezone=Australia/Brisbane -Druntime.java=17`
**Applicable branches:**
master
**Reproduces locally?:**
No
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.discovery.StableMasterDisruptionIT&tests.test=testFollowerCheckerDetectsUnresponsiveNodeAfterMasterReelection
**Failure excerpt:**
```
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=1668, name=Thread-9, state=RUNNABLE, group=TGRP-StableMasterDisruptionIT]
at __randomizedtesting.SeedInfo.seed([307D64547B8478A7:B6009E0B0638C111]:0)
Caused by: java.lang.AssertionError: java.lang.IllegalArgumentException: remote node [{node_s1}{q-h1mCKKR3KEscL6GIgLHQ}{q5G89eA6Q-6vK79v3fK9gg}{node_s1}{127.0.0.1}{127.0.0.1:35677}{cdfhilrstw}] is build [30406baabfbafad6603e68177b7500f551715738] of version [8.4.0] but this node is build [unknown] of version [8.4.0] which has an incompatible wire format
at __randomizedtesting.SeedInfo.seed([307D64547B8478A7]:0)
at org.elasticsearch.transport.InboundHandler.handleResponse(InboundHandler.java:347)
at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:143)
at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:95)
at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:790)
at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:149)
at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:121)
at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:86)
at org.elasticsearch.transport.netty4.Netty4MessageInboundHandler.channelRead(Netty4MessageInboundHandler.java:63)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:623)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:586)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.IllegalArgumentException: remote node [{node_s1}{q-h1mCKKR3KEscL6GIgLHQ}{q5G89eA6Q-6vK79v3fK9gg}{node_s1}{127.0.0.1}{127.0.0.1:35677}{cdfhilrstw}] is build [30406baabfbafad6603e68177b7500f551715738] of version [8.4.0] but this node is build [unknown] of version [8.4.0] which has an incompatible wire format
at org.elasticsearch.transport.TransportService$HandshakeResponse.throwOnIncompatibleBuild(TransportService.java:591)
at org.elasticsearch.transport.TransportService$HandshakeResponse.maybeThrowOnIncompatibleBuild(TransportService.java:578)
at org.elasticsearch.transport.TransportService$HandshakeResponse.<init>(TransportService.java:572)
at org.elasticsearch.action.ActionListenerResponseHandler.read(ActionListenerResponseHandler.java:58)
at org.elasticsearch.action.ActionListenerResponseHandler.read(ActionListenerResponseHandler.java:25)
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.read(TransportService.java:1320)
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.read(TransportService.java:1307)
at org.elasticsearch.transport.InboundHandler.handleResponse(InboundHandler.java:339)
at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:143)
at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:95)
at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:790)
at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:149)
at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:121)
at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:86)
at org.elasticsearch.transport.netty4.Netty4MessageInboundHandler.channelRead(Netty4MessageInboundHandler.java:63)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:623)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:586)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Thread.java:833)
```
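The root `Caused by` points at `TransportService$HandshakeResponse.throwOnIncompatibleBuild`: both nodes report version 8.4.0, but the local node reports its build hash as `unknown`, so the handshake rejects the connection as wire-incompatible. As a rough illustration only (a minimal standalone sketch, not the actual Elasticsearch implementation; the real check also conditions on version and snapshot status), the guard behaves like this:
```
// Minimal standalone sketch of the handshake guard named in the stack trace.
// NOT the real Elasticsearch code: the method name and message shape are taken
// from the failure excerpt above, but the real check applies extra conditions.
final class BuildCompatibilitySketch {

    static void throwOnIncompatibleBuild(String version, String localHash, String remoteHash) {
        // Nodes on the same version must still agree on the build hash,
        // otherwise the wire format is treated as incompatible.
        if (!localHash.equals(remoteHash)) {
            throw new IllegalArgumentException(
                "remote node is build [" + remoteHash + "] of version [" + version
                    + "] but this node is build [" + localHash + "] of version [" + version
                    + "] which has an incompatible wire format");
        }
    }

    public static void main(String[] args) {
        // Same shape as the failure: identical versions, mismatched build hashes,
        // with the local hash resolving to "unknown" as in this test run.
        throwOnIncompatibleBuild("8.4.0", "unknown",
            "30406baabfbafad6603e68177b7500f551715738");
    }
}
```
Running `main` throws the same shaped exception as the excerpt, so the interesting question is likely why the test node's build hash resolves to `unknown` in this environment rather than the checked-in hash.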
|
1.0
|
[CI] StableMasterDisruptionIT testFollowerCheckerDetectsUnresponsiveNodeAfterMasterReelection failing - Similar to #84172, but opening a new issue, since that failure was triaged some time ago.
**Build scan:**
https://gradle-enterprise.elastic.co/s/t5qajyr32lims/tests/:server:internalClusterTest/org.elasticsearch.discovery.StableMasterDisruptionIT/testFollowerCheckerDetectsUnresponsiveNodeAfterMasterReelection
**Reproduction line:**
`./gradlew ':server:internalClusterTest' --tests "org.elasticsearch.discovery.StableMasterDisruptionIT.testFollowerCheckerDetectsUnresponsiveNodeAfterMasterReelection" -Dtests.seed=307D64547B8478A7 -Dtests.locale=mk -Dtests.timezone=Australia/Brisbane -Druntime.java=17`
**Applicable branches:**
master
**Reproduces locally?:**
No
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.discovery.StableMasterDisruptionIT&tests.test=testFollowerCheckerDetectsUnresponsiveNodeAfterMasterReelection
**Failure excerpt:**
```
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=1668, name=Thread-9, state=RUNNABLE, group=TGRP-StableMasterDisruptionIT]
at __randomizedtesting.SeedInfo.seed([307D64547B8478A7:B6009E0B0638C111]:0)
Caused by: java.lang.AssertionError: java.lang.IllegalArgumentException: remote node [{node_s1}{q-h1mCKKR3KEscL6GIgLHQ}{q5G89eA6Q-6vK79v3fK9gg}{node_s1}{127.0.0.1}{127.0.0.1:35677}{cdfhilrstw}] is build [30406baabfbafad6603e68177b7500f551715738] of version [8.4.0] but this node is build [unknown] of version [8.4.0] which has an incompatible wire format
at __randomizedtesting.SeedInfo.seed([307D64547B8478A7]:0)
at org.elasticsearch.transport.InboundHandler.handleResponse(InboundHandler.java:347)
at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:143)
at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:95)
at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:790)
at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:149)
at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:121)
at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:86)
at org.elasticsearch.transport.netty4.Netty4MessageInboundHandler.channelRead(Netty4MessageInboundHandler.java:63)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:623)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:586)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.IllegalArgumentException: remote node [{node_s1}{q-h1mCKKR3KEscL6GIgLHQ}{q5G89eA6Q-6vK79v3fK9gg}{node_s1}{127.0.0.1}{127.0.0.1:35677}{cdfhilrstw}] is build [30406baabfbafad6603e68177b7500f551715738] of version [8.4.0] but this node is build [unknown] of version [8.4.0] which has an incompatible wire format
at org.elasticsearch.transport.TransportService$HandshakeResponse.throwOnIncompatibleBuild(TransportService.java:591)
at org.elasticsearch.transport.TransportService$HandshakeResponse.maybeThrowOnIncompatibleBuild(TransportService.java:578)
at org.elasticsearch.transport.TransportService$HandshakeResponse.<init>(TransportService.java:572)
at org.elasticsearch.action.ActionListenerResponseHandler.read(ActionListenerResponseHandler.java:58)
at org.elasticsearch.action.ActionListenerResponseHandler.read(ActionListenerResponseHandler.java:25)
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.read(TransportService.java:1320)
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.read(TransportService.java:1307)
at org.elasticsearch.transport.InboundHandler.handleResponse(InboundHandler.java:339)
at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:143)
at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:95)
at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:790)
at org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:149)
at org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:121)
at org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:86)
at org.elasticsearch.transport.netty4.Netty4MessageInboundHandler.channelRead(Netty4MessageInboundHandler.java:63)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:623)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:586)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Thread.java:833)
```
|
test
|
stablemasterdisruptionit testfollowercheckerdetectsunresponsivenodeaftermasterreelection failing similar to but opening a new issue since that failure was triaged some time ago build scan reproduction line gradlew server internalclustertest tests org elasticsearch discovery stablemasterdisruptionit testfollowercheckerdetectsunresponsivenodeaftermasterreelection dtests seed dtests locale mk dtests timezone australia brisbane druntime java applicable branches master reproduces locally no failure history failure excerpt com carrotsearch randomizedtesting uncaughtexceptionerror captured an uncaught exception in thread thread at randomizedtesting seedinfo seed caused by java lang assertionerror java lang illegalargumentexception remote node is build of version but this node is build of version which has an incompatible wire format at randomizedtesting seedinfo seed at org elasticsearch transport inboundhandler handleresponse inboundhandler java at org elasticsearch transport inboundhandler messagereceived inboundhandler java at org elasticsearch transport inboundhandler inboundmessage inboundhandler java at org elasticsearch transport tcptransport inboundmessage tcptransport java at org elasticsearch transport inboundpipeline forwardfragments inboundpipeline java at org elasticsearch transport inboundpipeline dohandlebytes inboundpipeline java at org elasticsearch transport inboundpipeline handlebytes inboundpipeline java at org elasticsearch transport channelread java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty handler logging logginghandler channelread logginghandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty handler codec messagetomessagedecoder channelread messagetomessagedecoder java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java at io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java at io netty channel nio nioeventloop processselectedkey nioeventloop java at io netty channel nio nioeventloop processselectedkeysplain nioeventloop java at io netty channel nio nioeventloop processselectedkeys nioeventloop java at io netty channel nio nioeventloop run nioeventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at io netty util internal threadexecutormap run threadexecutormap java at java lang thread run thread java caused by 
java lang illegalargumentexception remote node is build of version but this node is build of version which has an incompatible wire format at org elasticsearch transport transportservice handshakeresponse throwonincompatiblebuild transportservice java at org elasticsearch transport transportservice handshakeresponse maybethrowonincompatiblebuild transportservice java at org elasticsearch transport transportservice handshakeresponse transportservice java at org elasticsearch action actionlistenerresponsehandler read actionlistenerresponsehandler java at org elasticsearch action actionlistenerresponsehandler read actionlistenerresponsehandler java at org elasticsearch transport transportservice contextrestoreresponsehandler read transportservice java at org elasticsearch transport transportservice contextrestoreresponsehandler read transportservice java at org elasticsearch transport inboundhandler handleresponse inboundhandler java at org elasticsearch transport inboundhandler messagereceived inboundhandler java at org elasticsearch transport inboundhandler inboundmessage inboundhandler java at org elasticsearch transport tcptransport inboundmessage tcptransport java at org elasticsearch transport inboundpipeline forwardfragments inboundpipeline java at org elasticsearch transport inboundpipeline dohandlebytes inboundpipeline java at org elasticsearch transport inboundpipeline handlebytes inboundpipeline java at org elasticsearch transport channelread java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty handler logging logginghandler channelread logginghandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty handler codec messagetomessagedecoder channelread messagetomessagedecoder java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java at io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java at io netty channel nio nioeventloop processselectedkey nioeventloop java at io netty channel nio nioeventloop processselectedkeysplain nioeventloop java at io netty channel nio nioeventloop processselectedkeys nioeventloop java at io netty channel nio nioeventloop run nioeventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at io netty util internal threadexecutormap run threadexecutormap java at java lang thread run thread java
| 1
|
242,860
| 18,672,824,512
|
IssuesEvent
|
2021-10-31 02:23:10
|
raspberrypi/documentation
|
https://api.github.com/repos/raspberrypi/documentation
|
closed
|
Hardware/software information for UARTs
|
needs discussion documentation stale issue
|
A section similar to https://www.raspberrypi.org/documentation/computers/raspberry-pi.html#spi-hardware is needed to cover UART hardware.
Similarly, a section modeled on https://www.raspberrypi.org/documentation/computers/raspberry-pi.html#spi-software is needed to cover UART software.
|
1.0
|
Hardware/software information for UARTs - A section similar to https://www.raspberrypi.org/documentation/computers/raspberry-pi.html#spi-hardware is needed to cover UART hardware.
Similarly, a section modeled on https://www.raspberrypi.org/documentation/computers/raspberry-pi.html#spi-software is needed to cover UART software.
|
non_test
|
hardware software information for uarts a section similar to is needed to cover uart hardware and a section similar to to cover uart software
| 0
|
206,811
| 15,776,306,831
|
IssuesEvent
|
2021-04-01 04:26:54
|
camunda-cloud/zeebe
|
https://api.github.com/repos/camunda-cloud/zeebe
|
closed
|
ReplayStatePropertyTest is flaky (maybe?)
|
Impact: Testing Scope: broker Status: Planned Type: Unstable Test
|
**Summary**
- How often does the test fail? Once so far
- Does it block your work? Not really, but could in the future
- Do we suspect that it is a real failure? Seems like a configuration issue with the test
**Failures**
[Failing build](https://ci.zeebe.camunda.cloud/blue/organizations/jenkins/zeebe-io%2Fzeebe/detail/staging/2820/tests/)
<details><summary>Example assertion failure</summary>
<pre>
java.lang.Exception: No tests found matching Unique ID [engine:junit-vintage]/[runner:io.zeebe.engine.processing.streamprocessor.ReplayStatePropertyTest]/[test:%5BTestDataRecord{workFlowSeed=-8632946862485235079, executionPathSeed=861460546695806531}%5D]/[test:shouldRestoreStateAtEachStepInExecution%5BTestDataRecord{workFlowSeed=-8632946862485235079, executionPathSeed=861460546695806531}%5D(io.zeebe.engine.processing.streamprocessor.ReplayStatePropertyTest)] from org.junit.vintage.engine.descriptor.RunnerRequest@374dc136
at org.junit.internal.requests.FilterRequest.getRunner(FilterRequest.java:40)
at org.junit.vintage.engine.descriptor.RunnerTestDescriptor.applyFilters(RunnerTestDescriptor.java:136)
at org.junit.vintage.engine.discovery.RunnerTestDescriptorPostProcessor.applyFiltersAndCreateDescendants(RunnerTestDescriptorPostProcessor.java:46)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
at org.junit.vintage.engine.discovery.VintageDiscoverer.discover(VintageDiscoverer.java:50)
at org.junit.vintage.engine.VintageTestEngine.discover(VintageTestEngine.java:63)
at org.junit.platform.launcher.core.EngineDiscoveryOrchestrator.discoverEngineRoot(EngineDiscoveryOrchestrator.java:103)
at org.junit.platform.launcher.core.EngineDiscoveryOrchestrator.discover(EngineDiscoveryOrchestrator.java:85)
at org.junit.platform.launcher.core.DefaultLauncher.discover(DefaultLauncher.java:92)
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75)
at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:165)
at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:120)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
</pre>
</details>
**Hypotheses**
No idea - are we generating different IDs for these tests dynamically? Could we be messing with JUnit when we rerun these tests on failure? The build detected it as flaky for some reason.
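On the first hypothesis: the vintage engine addresses a test by a unique ID derived from its display name, and the failing ID above embeds `TestDataRecord{workFlowSeed=..., executionPathSeed=...}`. A hypothetical JUnit 4 sketch (class and names invented here, not the actual ReplayStatePropertyTest) of how a seed regenerated at discovery time in a `Parameterized` display name would break a rerun-by-ID:
```
// Hypothetical sketch of the suspected failure mode (class, names, and seed
// handling are invented; this is not the actual ReplayStatePropertyTest).
// The vintage engine rediscovers tests by a unique ID built from the display
// name; if that name embeds a seed regenerated on every discovery, the rerun
// filter can no longer match the originally recorded ID.
import java.util.Collections;
import java.util.List;
import java.util.Random;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class SeededNamePropertyTest {

    @Parameters(name = "{0}") // display name = the parameter's toString()
    public static List<Object[]> data() {
        long seed = new Random().nextLong(); // fresh seed on every discovery run
        return Collections.singletonList(
            new Object[] {"TestDataRecord{seed=" + seed + "}"});
    }

    public SeededNamePropertyTest(String record) {
        // the record string only exists to shape the display name in this sketch
    }

    @Test
    public void shouldRestoreStateAtEachStepInExecution() {
        // body irrelevant here: a rerun-by-unique-ID already fails at discovery,
        // before this method runs, because the seed (and thus the ID) changed
    }
}
```
If the seed differs between the original run and the rerun, the recorded unique ID matches nothing on rediscovery, which is exactly the `No tests found matching Unique ID` error above.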
**Logs**
<details><summary>Logs</summary>
<pre>
Stacktrace
java.lang.Exception: No tests found matching Unique ID [engine:junit-vintage]/[runner:io.zeebe.engine.processing.streamprocessor.ReplayStatePropertyTest]/[test:%5BTestDataRecord{workFlowSeed=-8632946862485235079, executionPathSeed=861460546695806531}%5D]/[test:shouldRestoreStateAtEachStepInExecution%5BTestDataRecord{workFlowSeed=-8632946862485235079, executionPathSeed=861460546695806531}%5D(io.zeebe.engine.processing.streamprocessor.ReplayStatePropertyTest)] from org.junit.vintage.engine.descriptor.RunnerRequest@374dc136
at org.junit.internal.requests.FilterRequest.getRunner(FilterRequest.java:40)
at org.junit.vintage.engine.descriptor.RunnerTestDescriptor.applyFilters(RunnerTestDescriptor.java:136)
at org.junit.vintage.engine.discovery.RunnerTestDescriptorPostProcessor.applyFiltersAndCreateDescendants(RunnerTestDescriptorPostProcessor.java:46)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
at org.junit.vintage.engine.discovery.VintageDiscoverer.discover(VintageDiscoverer.java:50)
at org.junit.vintage.engine.VintageTestEngine.discover(VintageTestEngine.java:63)
at org.junit.platform.launcher.core.EngineDiscoveryOrchestrator.discoverEngineRoot(EngineDiscoveryOrchestrator.java:103)
at org.junit.platform.launcher.core.EngineDiscoveryOrchestrator.discover(EngineDiscoveryOrchestrator.java:85)
at org.junit.platform.launcher.core.DefaultLauncher.discover(DefaultLauncher.java:92)
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75)
at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:165)
at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:120)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Standard Output
20:37:23.909 [Broker-0-LogStream-1] DEBUG io.zeebe.logstreams - Configured log appender back pressure at partition 1 as AppenderVegasCfg{initialLimit=1024, maxConcurrency=32768, alphaLimit=0.7, betaLimit=0.95}. Window limiting is disabled
20:37:24.077 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:24.078 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@67538fcd)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@69946d57, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@6dabe4bd, configuration: Configuration(false)]
20:37:24.079 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:24.080 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@28a9676c)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@53ffa853, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@19cb89ae, configuration: Configuration(false)]
20:37:24.081 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2b771826)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@48869823, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@66950c50, configuration: Configuration(false)]
20:37:24.178 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2b611b1c)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@24a5fa0c, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4bc34488, configuration: Configuration(false)]
20:37:24.432 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:24.432 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:24.497 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:24.609 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:24.610 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@24bd663e)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@2a7a8cd9, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@2e450d4e, configuration: Configuration(false)]
20:37:24.611 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:24.612 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@6b4786e2)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@24849122, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5a66cd2f, configuration: Configuration(false)]
20:37:24.613 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@5c4f8450)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@b6879ad, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@b65e463, configuration: Configuration(false)]
20:37:24.615 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 32
20:37:24.676 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@7e25652e)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@2cf1f5ff, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@154ea218, configuration: Configuration(false)]
20:37:24.727 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 33
20:37:26.890 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:26.891 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:26.983 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:27.185 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:27.187 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@7aa33039)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7464c11d, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@705b5b49, configuration: Configuration(false)]
20:37:27.189 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:27.190 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3559cb26)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5618074c, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4a658bb, configuration: Configuration(false)]
20:37:27.191 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2e0dca75)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@55763469, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@43427bec, configuration: Configuration(false)]
20:37:27.193 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 32
20:37:27.263 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@51252940)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@15fc4437, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@463227e5, configuration: Configuration(false)]
20:37:27.295 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 33
20:37:27.536 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:27.537 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:27.600 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:27.774 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:27.775 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2a9ca32a)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@1a9b79b5, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@7fd90f36, configuration: Configuration(false)]
20:37:27.776 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:27.777 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@611807fd)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@2ed27d07, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@6fc5ffb9, configuration: Configuration(false)]
20:37:27.778 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@5852edc8)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@79d42ed0, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@44534ea8, configuration: Configuration(false)]
20:37:27.780 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 36
20:37:27.805 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@6cdd3ca5)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@6f924a1e, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@35629b98, configuration: Configuration(false)]
20:37:27.876 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 37
20:37:28.010 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:28.011 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:28.049 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:28.145 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:28.146 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3dfb7ad6)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@14aa9611, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4390db15, configuration: Configuration(false)]
20:37:28.147 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:28.148 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@44d3de86)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@22982100, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@6242315c, configuration: Configuration(false)]
20:37:28.149 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@274f0672)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7d32cbf3, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1ec17574, configuration: Configuration(false)]
20:37:28.155 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 78
20:37:28.183 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@25066255)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@3e73b, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@272c02f2, configuration: Configuration(false)]
20:37:28.290 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 79
20:37:28.384 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:28.384 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:28.431 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:28.513 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:28.514 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@5cc80d72)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@520ff89b, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@56306602, configuration: Configuration(false)]
20:37:28.515 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:28.515 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@1332f2a0)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@f8dde29, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@22eb2463, configuration: Configuration(false)]
20:37:28.516 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@77be564a)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@731d0def, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1280e933, configuration: Configuration(false)]
20:37:28.518 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 78
20:37:28.540 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@28a02b42)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@1596d9db, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@9d7f71, configuration: Configuration(false)]
20:37:28.610 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 79
20:37:28.637 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:28.638 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:28.690 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:28.873 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:28.874 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@c94e9a0)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@8ed2dbe, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@75495299, configuration: Configuration(false)]
20:37:28.875 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:28.876 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@5b105e4e)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@70e26565, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1103361a, configuration: Configuration(false)]
20:37:28.877 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2ee17339)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@77daba85, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@14702056, configuration: Configuration(false)]
20:37:28.879 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 78
20:37:28.960 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@7f07ae42)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@21fead30, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@609c66aa, configuration: Configuration(false)]
20:37:29.100 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 79
20:37:29.356 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:29.356 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:29.393 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:29.506 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:29.507 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2d99c8f)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@229a3cf6, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@746e2009, configuration: Configuration(false)]
20:37:29.508 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:29.509 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@25caa1a3)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@6bb96c71, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1837807c, configuration: Configuration(false)]
20:37:29.510 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@1d5c851b)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@6d6bf8aa, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4d91db92, configuration: Configuration(false)]
20:37:29.512 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 82
20:37:29.539 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@1a788623)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@18e72413, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@2f39896d, configuration: Configuration(false)]
20:37:29.604 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 83
20:37:29.777 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:29.777 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:29.814 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:29.905 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:29.906 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3876c034)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@36b1394a, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@328eab44, configuration: Configuration(false)]
20:37:29.908 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:29.909 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@778c3371)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@79753ad4, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@463127d9, configuration: Configuration(false)]
20:37:29.910 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@12270d0)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@17a3cc3e, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@47d0f784, configuration: Configuration(false)]
20:37:29.912 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 114
20:37:29.938 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@51ffc126)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@2938563, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@733c5648, configuration: Configuration(false)]
20:37:30.026 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 115
20:37:30.171 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:30.172 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:30.268 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:30.380 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:30.381 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@23d67fd5)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@3073f5a1, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4cc8240c, configuration: Configuration(false)]
20:37:30.382 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:30.382 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2f15649)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@12c9b915, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@3d388810, configuration: Configuration(false)]
20:37:30.383 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@30dd33ff)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@389678b3, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@67487fa5, configuration: Configuration(false)]
20:37:30.384 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 114
20:37:30.405 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@1816f51b)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@514f410f, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5c12ba1a, configuration: Configuration(false)]
20:37:30.492 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 115
20:37:30.707 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:30.708 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:30.772 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:30.879 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:30.880 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@771d133e)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@79daf3ad, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@7de5784e, configuration: Configuration(false)]
20:37:30.881 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:30.882 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@38e7f996)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@277c9957, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@995418e, configuration: Configuration(false)]
20:37:30.883 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@5fe47ffa)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@58c806d9, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1bbd8ab6, configuration: Configuration(false)]
20:37:30.885 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 139
20:37:30.905 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@343a0c10)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@bd8300, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@2003cc76, configuration: Configuration(false)]
20:37:31.084 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 141
20:37:31.157 [] INFO io.zeebe.test.records - Test failed, following records were exported:
20:37:31.265 [] INFO io.zeebe.test.records - {"valueType":"DEPLOYMENT","key":-1,"position":1,"timestamp":1612989444105,"recordType":"COMMAND","intent":"CREATE","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"resources":[{"resource":"PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8+CjxkZWZpbml0aW9ucyB4bWxuczpicG1uZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvREkiIHhtbG5zOmRjPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9ERC8yMDEwMDUyNC9EQyIgeG1sbnM6ZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0RELzIwMTAwNTI0L0RJIiB4bWxuczpuczA9Imh0dHA6Ly9jYW11bmRhLm9yZy9zY2hlbWEvemVlYmUvMS4wIiBpZD0iZGVmaW5pdGlvbnNfNzkwZTEwYWItMzA3My00M2ZkLTkyOTktYzY2MmI4NzU3Yzk0IiB0YXJnZXROYW1lc3BhY2U9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvTU9ERUwiIHhtbG5zPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9CUE1OLzIwMTAwNTI0L01PREVMIj4KICA8cHJvY2VzcyBpZD0icHJvY2Vzc19pZF8wIiBpc0V4ZWN1dGFibGU9InRydWUiPgogICAgPHN0YXJ0RXZlbnQgaWQ9ImlkXzEiIG5hbWU9ImlkXzEiPgogICAgICA8b3V0Z29pbmc+c2VxdWVuY2VGbG93X2QwYTgzZmQyLTAwZTEtNGJlZS1iMGNjLTEwOWM0MD...
20:37:31.269 [] INFO io.zeebe.test.records - {"valueType":"DEPLOYMENT","key":2251799813685250,"position":2,"timestamp":1612989444255,"recordType":"EVENT","intent":"CREATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"resources":[{"resource":"PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8+CjxkZWZpbml0aW9ucyB4bWxuczpicG1uZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvREkiIHhtbG5zOmRjPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9ERC8yMDEwMDUyNC9EQyIgeG1sbnM6ZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0RELzIwMTAwNTI0L0RJIiB4bWxuczpuczA9Imh0dHA6Ly9jYW11bmRhLm9yZy9zY2hlbWEvemVlYmUvMS4wIiBpZD0iZGVmaW5pdGlvbnNfNzkwZTEwYWItMzA3My00M2ZkLTkyOTktYzY2MmI4NzU3Yzk0IiB0YXJnZXROYW1lc3BhY2U9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvTU9ERUwiIHhtbG5zPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9CUE1OLzIwMTAwNTI0L01PREVMIj4KICA8cHJvY2VzcyBpZD0icHJvY2Vzc19pZF8wIiBpc0V4ZWN1dGFibGU9InRydWUiPgogICAgPHN0YXJ0RXZlbnQgaWQ9ImlkXzEiIG5hbWU9ImlkXzEiPgogICAgICA8b3V0Z29pbmc+c2VxdWVuY2VGbG93X2QwYTgzZmQyLTAwZTEtNGJlZS1iM...
20:37:31.271 [] INFO io.zeebe.test.records - {"valueType":"DEPLOYMENT","key":2251799813685250,"position":3,"timestamp":1612989444259,"recordType":"COMMAND","intent":"DISTRIBUTE","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"resources":[{"resource":"PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8+CjxkZWZpbml0aW9ucyB4bWxuczpicG1uZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvREkiIHhtbG5zOmRjPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9ERC8yMDEwMDUyNC9EQyIgeG1sbnM6ZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0RELzIwMTAwNTI0L0RJIiB4bWxuczpuczA9Imh0dHA6Ly9jYW11bmRhLm9yZy9zY2hlbWEvemVlYmUvMS4wIiBpZD0iZGVmaW5pdGlvbnNfNzkwZTEwYWItMzA3My00M2ZkLTkyOTktYzY2MmI4NzU3Yzk0IiB0YXJnZXROYW1lc3BhY2U9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvTU9ERUwiIHhtbG5zPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9CUE1OLzIwMTAwNTI0L01PREVMIj4KICA8cHJvY2VzcyBpZD0icHJvY2Vzc19pZF8wIiBpc0V4ZWN1dGFibGU9InRydWUiPgogICAgPHN0YXJ0RXZlbnQgaWQ9ImlkXzEiIG5hbWU9ImlkXzEiPgogICAgICA8b3V0Z29pbmc+c2VxdWVuY2VGbG93X2QwYTgzZmQyLTAwZTEtNGJl...
20:37:31.274 [] INFO io.zeebe.test.records - {"valueType":"DEPLOYMENT","key":2251799813685250,"position":4,"timestamp":1612989444260,"recordType":"EVENT","intent":"DISTRIBUTED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"resources":[{"resource":"PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8+CjxkZWZpbml0aW9ucyB4bWxuczpicG1uZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvREkiIHhtbG5zOmRjPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9ERC8yMDEwMDUyNC9EQyIgeG1sbnM6ZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0RELzIwMTAwNTI0L0RJIiB4bWxuczpuczA9Imh0dHA6Ly9jYW11bmRhLm9yZy9zY2hlbWEvemVlYmUvMS4wIiBpZD0iZGVmaW5pdGlvbnNfNzkwZTEwYWItMzA3My00M2ZkLTkyOTktYzY2MmI4NzU3Yzk0IiB0YXJnZXROYW1lc3BhY2U9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvTU9ERUwiIHhtbG5zPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9CUE1OLzIwMTAwNTI0L01PREVMIj4KICA8cHJvY2VzcyBpZD0icHJvY2Vzc19pZF8wIiBpc0V4ZWN1dGFibGU9InRydWUiPgogICAgPHN0YXJ0RXZlbnQgaWQ9ImlkXzEiIG5hbWU9ImlkXzEiPgogICAgICA8b3V0Z29pbmc+c2VxdWVuY2VGbG93X2QwYTgzZmQyLTAwZTEtNGJlZ...
20:37:31.277 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE_CREATION","key":-1,"position":5,"timestamp":1612989444264,"recordType":"COMMAND","intent":"CREATE","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":-1,"variables":{"fork_id_28_branch":"default-case","fork_id_42_branch":"edge_id_46","fork_id_3_branch":"edge_id_8","fork_id_14_branch":"default-case","fork_id_23_branch":"edge_id_26"},"bpmnProcessId":"process_id_0","workflowInstanceKey":-1,"workflowKey":-1},"sourceRecordPosition":-1}
20:37:31.280 [] INFO io.zeebe.test.records - {"valueType":"VARIABLE","key":2251799813685252,"position":6,"timestamp":1612989444265,"recordType":"EVENT","intent":"CREATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"name":"fork_id_14_branch","value":"\"default-case\"","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249,"scopeKey":2251799813685251},"sourceRecordPosition":5}
20:37:31.280 [] INFO io.zeebe.test.records - {"valueType":"VARIABLE","key":2251799813685253,"position":7,"timestamp":1612989444265,"recordType":"EVENT","intent":"CREATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"name":"fork_id_23_branch","value":"\"edge_id_26\"","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249,"scopeKey":2251799813685251},"sourceRecordPosition":5}
20:37:31.280 [] INFO io.zeebe.test.records - {"valueType":"VARIABLE","key":2251799813685254,"position":8,"timestamp":1612989444265,"recordType":"EVENT","intent":"CREATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"name":"fork_id_28_branch","value":"\"default-case\"","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249,"scopeKey":2251799813685251},"sourceRecordPosition":5}
20:37:31.281 [] INFO io.zeebe.test.records - {"valueType":"VARIABLE","key":2251799813685255,"position":9,"timestamp":1612989444265,"recordType":"EVENT","intent":"CREATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"name":"fork_id_42_branch","value":"\"edge_id_46\"","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249,"scopeKey":2251799813685251},"sourceRecordPosition":5}
20:37:31.281 [] INFO io.zeebe.test.records - {"valueType":"VARIABLE","key":2251799813685256,"position":10,"timestamp":1612989444265,"recordType":"EVENT","intent":"CREATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"name":"fork_id_3_branch","value":"\"edge_id_8\"","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249,"scopeKey":2251799813685251},"sourceRecordPosition":5}
20:37:31.285 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685251,"position":11,"timestamp":1612989444265,"recordType":"EVENT","intent":"ELEMENT_ACTIVATING","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":-1,"bpmnElementType":"PROCESS","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"process_id_0","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":5}
20:37:31.285 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE_CREATION","key":2251799813685257,"position":12,"timestamp":1612989444265,"recordType":"EVENT","intent":"CREATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"variables":{"fork_id_28_branch":"default-case","fork_id_42_branch":"edge_id_46","fork_id_3_branch":"edge_id_8","fork_id_14_branch":"default-case","fork_id_23_branch":"edge_id_26"},"bpmnProcessId":"process_id_0","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":5}
20:37:31.285 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685251,"position":13,"timestamp":1612989444266,"recordType":"EVENT","intent":"ELEMENT_ACTIVATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":-1,"bpmnElementType":"PROCESS","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"process_id_0","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":11}
20:37:31.286 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685258,"position":14,"timestamp":1612989444267,"recordType":"EVENT","intent":"ELEMENT_ACTIVATING","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"START_EVENT","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"id_1","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":13}
20:37:31.286 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685258,"position":15,"timestamp":1612989444268,"recordType":"EVENT","intent":"ELEMENT_ACTIVATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"START_EVENT","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"id_1","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":14}
20:37:31.290 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685258,"position":16,"timestamp":1612989444268,"recordType":"EVENT","intent":"ELEMENT_COMPLETING","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"START_EVENT","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"id_1","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":15}
20:37:31.291 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685258,"position":17,"timestamp":1612989444269,"recordType":"EVENT","intent":"ELEMENT_COMPLETED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"START_EVENT","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"id_1","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":16}
20:37:31.291 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685259,"position":18,"timestamp":1612989444271,"recordType":"EVENT","intent":"SEQUENCE_FLOW_TAKEN","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"SEQUENCE_FLOW","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"sequenceFlow_d0a83fd2-00e1-4bee-b0cc-109c404a6cab","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":17}
20:37:31.291 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685260,"position":19,"timestamp":1612989444272,"recordType":"EVENT","intent":"ELEMENT_ACTIVATING","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"EXCLUSIVE_GATEWAY","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"fork_id_3","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":18}
20:37:31.292 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685260,"position":20,"timestamp":1612989444278,"recordType":"EVENT","intent":"ELEMENT_ACTIVATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"EXCLUSIVE_GATEWAY","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"fork_id_3","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":19}
20:37:31.292 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685260,"position":21,"timestamp":1612989444279,"recordType":"EVENT","intent":"ELEMENT_COMPLETING","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"EXCLUSIVE_GATEWAY","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"fork_id_3","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":20}
20:37:31.292 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685260,"position":22,"timestamp":1612989444281,"recordType":"EVENT","intent":"ELEMENT_COMPLETED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"EXCLUSIVE_GATEWAY","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"fork_id_3","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":21}
20:37:31.293 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685261,"position":23,"timestamp":1612989444285,"recordType":"EVENT","intent":"SEQUENCE_FLOW_TAKEN","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"SEQUENCE_FLOW","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"edge_id_8","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":22}
20:37:31.293 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685262,"position":24,"timestamp":1612989444286,"recordType":"EVENT","intent":"ELEMENT_ACTIVATING","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"PARALLEL_GATEWAY","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"fork_id_9","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":23}
20:37:31.293 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685262,"position":25,"timestamp":1612989444287,"recordType":"EVENT","intent":"ELEMENT_ACTIVATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"PARALLEL_GATEWAY","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"fork_id_9","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":24}
20:37:31.294 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685262,"position":26,"timestamp":1612989444288,"recordType":"EVENT","intent":"ELEMENT_COMPLETING","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"PARALLEL_GATEWAY","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"fork_id_9","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":25}
20:37:31.297 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685262,"position":27,"timestamp":1612989444289,"recordType":"EVENT","intent":"ELEMENT_COMPLETED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"PARALLEL_GATEWAY","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"fork_id_9","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":26}
20:37:31.297 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685263,"position":28,"timestamp":1612989444290,"recordType":"EVENT","intent":"SEQUENCE_FLOW_TAKEN","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"SEQUENCE_FLOW","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"id_12","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":27}
20:37:31.297 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685264,"position":29,"timestamp":1612989444290,"recordType":"EVENT","intent":"SEQUENCE_FLOW_TAKEN","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"SEQUENCE_FLOW","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"sequenceFlow_0aa24ecd-79de-4279-a923-9c107589e91e","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":27}
20:37:31.298 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685265,"position":30,"timestamp":1612989444291,"recordType":"EVENT","intent":"ELEMENT_ACTIVATING","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"SERVICE_TASK","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"id_13","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":28}
20:37:31.298 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685265,"position":31,"timestamp":1612989444294,"recordType":"EVENT","intent":"ELEMENT_ACTIVATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"SERVICE_TASK","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"id_13","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":30}
20:37:31.307 [] INFO io.zeebe.test.records - {"valueType":"JOB","key":-1,"position":32,"timestamp":1612989444294,"recordType":"COMMAND","intent":"CREATE","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"type":"job_id_13","errorMessage":"","deadline":-1,"variables":{},"errorCode":"","retries":3,"customHeaders":{},"worker":"","workflowDefinitionVersion":1,"elementInstanceKey":2251799813685265,"bpmnProcessId":"process_id_0","elementId":"id_13","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":31}
20:37:31.308 [] INFO io.zeebe.test.records - {"valueType":"JOB","key":2251799813685266,"position":33,"timestamp":1612989444295,"recordType":"EVENT","intent":"CREATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"type":"job_id_13","errorMessage":"","deadline":-1,"variables":{},"errorCode":"","retries":3,"customHeaders":{},"worker":"","workflowDefinitionVersion":1,"elementInstanceKey":2251799813685265,"bpmnProcessId":"process_id_0","elementId":"id_13","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":32}
...[truncated 1411553 bytes]...
20:38:49.353 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:49.420 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:49.421 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@444ccd60)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@4a8b2b99, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4eb4a610, configuration: Configuration(false)]
20:38:49.421 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:49.422 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@bb621a2)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@2933ca79, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@3af7eb25, configuration: Configuration(false)]
20:38:49.422 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@51985e4d)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@6b99deed, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5ccea1e5, configuration: Configuration(false)]
20:38:49.423 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 72
20:38:49.432 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@29e404de)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@6350a18a, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@2516cd72, configuration: Configuration(false)]
20:38:49.452 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 73
20:38:49.527 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:49.527 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:49.558 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:49.624 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:49.624 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2f4f9abc)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@21886125, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@25b7f1cc, configuration: Configuration(false)]
20:38:49.625 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:49.625 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@60b56b1e)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@337c97ce, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@511f49cd, configuration: Configuration(false)]
20:38:49.626 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@4b3cebb3)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@9a9451, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@570fb16b, configuration: Configuration(false)]
20:38:49.627 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 72
20:38:49.637 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@132b0187)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@4347617e, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@44e5f336, configuration: Configuration(false)]
20:38:49.659 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 73
20:38:49.834 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:49.834 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:49.865 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:49.931 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:49.931 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2e45eefc)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@1af226d4, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@212d1933, configuration: Configuration(false)]
20:38:49.932 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:49.933 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@53e99121)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@525aa7ae, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5995ffb7, configuration: Configuration(false)]
20:38:49.933 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@10388137)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@f476a88, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@827cb19, configuration: Configuration(false)]
20:38:49.934 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 76
20:38:49.943 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@17ff90d6)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@428aa385, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@b264b50, configuration: Configuration(false)]
20:38:49.966 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 77
20:38:50.141 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:50.142 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:50.173 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:50.241 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:50.241 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@28edbc46)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@299a0858, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@772c2b87, configuration: Configuration(false)]
20:38:50.242 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:50.242 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3825a002)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7d6f24cc, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@44e3bf1, configuration: Configuration(false)]
20:38:50.242 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@122848ec)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@718cf1b, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@355d270, configuration: Configuration(false)]
20:38:50.243 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 87
20:38:50.253 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@604c40b5)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@e319584, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4f97c28e, configuration: Configuration(false)]
20:38:50.275 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 88
20:38:50.450 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:50.451 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:50.482 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:50.549 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:50.549 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@52d95386)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7133f914, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@ae3a0e8, configuration: Configuration(false)]
20:38:50.550 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:50.550 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@4a453f2c)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@52180081, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@3c0db88f, configuration: Configuration(false)]
20:38:50.551 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2b7ebb06)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@48214d1a, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5a2f5097, configuration: Configuration(false)]
20:38:50.552 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 129
20:38:50.560 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@4f7f23ef)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@28c44e9c, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1ea22703, configuration: Configuration(false)]
20:38:50.590 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 130
20:38:50.656 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:50.656 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:50.687 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:50.754 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:50.755 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@fed39db)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5e079b5a, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@11a9d0e6, configuration: Configuration(false)]
20:38:50.755 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:50.756 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@72c9eb3b)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@140c3758, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@76b3a3c8, configuration: Configuration(false)]
20:38:50.756 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@29426318)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@146fd538, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@61ac961a, configuration: Configuration(false)]
20:38:50.757 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 129
20:38:50.766 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@5eb3138a)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@64cd1607, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@73824c1e, configuration: Configuration(false)]
20:38:50.797 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 130
20:38:50.861 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:50.861 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:50.892 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:50.959 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:50.959 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@359584a5)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7d5d2548, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@e33b494, configuration: Configuration(false)]
20:38:50.960 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:50.960 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@432c1167)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5a05150b, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4ee60c66, configuration: Configuration(false)]
20:38:50.961 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@5d62963a)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7e6603d8, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@cb9ec52, configuration: Configuration(false)]
20:38:50.962 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 129
20:38:50.971 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@50504ce)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@43044bb8, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@6a4fdb17, configuration: Configuration(false)]
20:38:51.001 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 130
20:38:51.169 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:51.170 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:51.201 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:51.267 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:51.268 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@c81e2be)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5004549, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@69e75899, configuration: Configuration(false)]
20:38:51.269 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:51.269 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@6e4c4bfa)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@6ddbd09f, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@6a76ea60, configuration: Configuration(false)]
20:38:51.269 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@39065427)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@3599db66, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@6f5ba62, configuration: Configuration(false)]
20:38:51.270 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 140
20:38:51.280 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@437d6fed)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7e2a91bc, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1a48f9a6, configuration: Configuration(false)]
20:38:51.313 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 141
20:38:51.478 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:51.478 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:51.510 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:51.577 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:51.577 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@5ecdf247)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@500a07ee, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@54359937, configuration: Configuration(false)]
20:38:51.578 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:51.578 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@11370dd5)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@20598a7e, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@20ecdcec, configuration: Configuration(false)]
20:38:51.578 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@586805e7)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7118f29a, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1d7e8cbd, configuration: Configuration(false)]
20:38:51.579 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 144
20:38:51.589 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3a0654c4)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7d41ea69, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@2cf827ca, configuration: Configuration(false)]
20:38:51.621 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 145
20:38:51.787 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:51.787 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:51.818 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:51.884 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:51.885 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@7433c345)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7af423b3, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@7ecba835, configuration: Configuration(false)]
20:38:51.885 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:51.886 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3bc92988)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@518aec5f, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@ab8b466, configuration: Configuration(false)]
20:38:51.886 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@29112f04)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5be6d6c7, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5d0831f0, configuration: Configuration(false)]
20:38:51.887 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 167
20:38:51.896 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@34ba7b6a)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@4e269ae9, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@27e6d6f4, configuration: Configuration(false)]
20:38:51.931 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 168
20:38:52.092 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:52.093 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:52.124 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:52.190 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:52.190 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@4e1d048a)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7a499139, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@3a39e7e0, configuration: Configuration(false)]
20:38:52.191 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:52.191 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@1308b572)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@3517cacd, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@3e1450f, configuration: Configuration(false)]
20:38:52.191 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@7692b474)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@20e72cf1, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@2badcb82, configuration: Configuration(false)]
20:38:52.192 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 167
20:38:52.202 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@88e2c5e)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7c760800, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@7eb81d2c, configuration: Configuration(false)]
20:38:52.237 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 168
20:38:52.295 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:52.295 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:52.326 [Broker-0-LogStream-1] INFO io.zeebe.logstreams - Close appender for log stream stream-1
20:38:52.327 [stream-1-write-buffer] DEBUG io.zeebe.dispatcher - Dispatcher closed
20:38:52.327 [Broker-0-LogStream-1] INFO io.zeebe.logstreams - On closing logstream stream-1 close 25 readers
20:38:52.327 [Broker-0-LogStream-1] INFO io.zeebe.logstreams - Close log storage with name stream-1
20:38:52.327 [] DEBUG io.zeebe.broker.test - Clean up test files on path /tmp/junit16159778016761806535
20:38:52.327 [] DEBUG io.zeebe.util.actor - Closing actor thread ground '-zb-fs-workers'
20:38:52.327 [] DEBUG io.zeebe.util.actor - Closing actor thread ground '-zb-actors'
20:38:52.328 [] DEBUG io.zeebe.util.actor - Closing actor thread ground '-zb-actors': closed successfully
20:38:52.329 [] DEBUG io.zeebe.util.actor - Closing actor thread ground '-zb-fs-workers': closed successfully
20:38:52.400 [Broker-0-LogStream-1] DEBUG io.zeebe.logstreams - Configured log appender back pressure at partition 1 as AppenderVegasCfg{initialLimit=1024, maxConcurrency=32768, alphaLimit=0.7, betaLimit=0.95}. Window limiting is disabled
20:38:52.466 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:52.467 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@73e8ed7f)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7d2f0d72, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@7910860f, configuration: Configuration(false)]
20:38:52.467 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:52.468 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3c120749)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@82596a1, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@d645e4a, configuration: Configuration(false)]
20:38:52.468 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@70b99117)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5c3df806, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@7809edbe, configuration: Configuration(false)]
20:38:52.480 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@63560539)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@81b4a85, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@71c15162, configuration: Configuration(false)]
20:38:52.597 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:52.598 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:52.629 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:52.695 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:52.695 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@5f5fedc6)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5198085d, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@55a786b6, configuration: Configuration(false)]
20:38:52.696 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:52.697 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@10a50266)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@30b4ec69, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@21ffd0c2, configuration: Configuration(false)]
20:38:52.697 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@4804607d)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@708afd29, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@67ca90f5, configuration: Configuration(false)]
20:38:52.698 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 72
20:38:52.708 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3a0727d0)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@9a2eb88, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@6f07b680, configuration: Configuration(false)]
20:38:52.729 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 73
20:38:52.802 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:52.802 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:52.833 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:52.898 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:52.899 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@732fcb85)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7de82b90, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@79d7a0f5, configuration: Configuration(false)]
20:38:52.900 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:52.900 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3cd521ab)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@312bb657, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@da3bf49, configuration: Configuration(false)]
20:38:52.900 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@49b77600)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5cfadee4, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@308f5222, configuration: Configuration(false)]
20:38:52.907 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 72
20:38:52.921 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@225f34bf)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@4b6fac90, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@368352ff, configuration: Configuration(false)]
20:38:52.943 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 73
20:38:53.115 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:53.115 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:53.147 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:53.214 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:53.214 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@57307afb)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@6bbd7823, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5873409a, configuration: Configuration(false)]
20:38:53.215 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:53.215 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@6f203237)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@3b937a8a, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@3b6f61bb, configuration: Configuration(false)]
20:38:53.216 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@79da91dc)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@1705a55a, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@37e2ca1a, configuration: Configuration(false)]
20:38:53.217 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 76
20:38:53.226 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@4e9f5439)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@103c66b1, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@181f0ad8, configuration: Configuration(false)]
20:38:53.246 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 77
20:38:53.424 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:53.424 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:53.455 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:53.521 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:53.522 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@1b5c5489)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@159ba992, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@11a43ff9, configuration: Configuration(false)]
20:38:53.522 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:53.523 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@54a6a3a9)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@2da75da8, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@6538b6b7, configuration: Configuration(false)]
20:38:53.523 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@55d3161e)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@34895ce6, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@23d40dc2, configuration: Configuration(false)]
20:38:53.524 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 118
20:38:53.533 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@606313d3)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@76a1ee22, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@35d11153, configuration: Configuration(false)]
20:38:53.562 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 119
20:38:53.629 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:53.629 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:53.661 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:53.727 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:53.727 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@31045e79)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@3fc3b629, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@3bfc3509, configuration: Configuration(false)]
20:38:53.728 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:53.728 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@7efebc12)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@6d9ad222, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4fca4346, configuration: Configuration(false)]
20:38:53.729 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@154473cb)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@1ed1435d, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@25e19967, configuration: Configuration(false)]
20:38:53.730 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 118
20:38:53.739 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@188edf60)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@fedd687, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@7ecab29d, configuration: Configuration(false)]
20:38:53.767 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 119
20:38:53.834 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:53.834 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:53.866 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:53.932 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:53.932 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@7e9a844f)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@4e2b3ea, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4b46efa4, configuration: Configuration(false)]
20:38:53.933 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:53.933 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@276636f3)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@d7476ac, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1c98f90b, configuration: Configuration(false)]
20:38:53.933 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@605f0ef3)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@76986da5, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5877a4cc, configuration: Configuration(false)]
20:38:53.934 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 118
20:38:53.943 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3389a162)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@3a8c3f35, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@24855ea8, configuration: Configuration(false)]
20:38:53.971 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 119
20:38:54.142 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:54.142 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:54.173 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:54.241 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:54.241 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@63f8f9c3)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@73931fb5, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5bc42a54, configuration: Configuration(false)]
20:38:54.242 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:54.242 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@db8735d)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@29aab5c8, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@42af79ff, configuration: Configuration(false)]
20:38:54.242 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@38437d74)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@762fd3ba, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@ad73c42, configuration: Configuration(false)]
20:38:54.243 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 122
20:38:54.253 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@52ee61f7)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7f642207, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@fd8c9bd, configuration: Configuration(false)]
20:38:54.282 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 123
20:38:54.450 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:54.451 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:54.482 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:54.548 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:54.549 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@554d2b0f)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5972dadb, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@74a6ef00, configuration: Configuration(false)]
20:38:54.550 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:54.550 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@21a50fba)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@1d0c4dd2, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5267f8aa, configuration: Configuration(false)]
20:38:54.550 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@117ac309)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@18abd643, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@67367a59, configuration: Configuration(false)]
20:38:54.551 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 133
20:38:54.560 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@6c06dd44)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@132a0da2, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1ba1c0cb, configuration: Configuration(false)]
20:38:54.592 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 134
20:38:54.758 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:54.758 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:54.790 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:54.856 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:54.857 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@682b4349)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@66eca03c, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@62d32839, configuration: Configuration(false)]
20:38:54.858 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:54.858 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@27d1aa07)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@155294bf, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@16bc2b69, configuration: Configuration(false)]
20:38:54.858 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@6dd54b67)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@47fe5548, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4608d664, configuration: Configuration(false)]
20:38:54.859 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 156
20:38:54.869 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@697fe30f)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@544ab3fb, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@58558941, configuration: Configuration(false)]
20:38:54.903 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 157
20:38:55.064 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:55.065 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:55.096 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:55.163 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:55.163 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@8e47bb3)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@62384df, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@21c10c3, configuration: Configuration(false)]
20:38:55.164 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:55.164 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@471e8ee4)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@6c0c4c57, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@76932875, configuration: Configuration(false)]
20:38:55.165 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@146bb46c)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@273171ab, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@29d1efee, configuration: Configuration(false)]
20:38:55.166 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 156
20:38:55.175 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@8950795)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5d319409, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@19bb1ba0, configuration: Configuration(false)]
20:38:55.211 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 157
20:38:55.268 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:55.269 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:55.300 [Broker-0-LogStream-1] INFO io.zeebe.logstreams - Close appender for log stream stream-1
20:38:55.301 [stream-1-write-buffer] DEBUG io.zeebe.dispatcher - Dispatcher closed
20:38:55.301 [Broker-0-LogStream-1] INFO io.zeebe.logstreams - On closing logstream stream-1 close 23 readers
20:38:55.301 [Broker-0-LogStream-1] INFO io.zeebe.logstreams - Close log storage with name stream-1
20:38:55.301 [] DEBUG io.zeebe.broker.test - Clean up test files on path /tmp/junit7573536190744857024
20:38:55.301 [] DEBUG io.zeebe.util.actor - Closing actor thread ground '-zb-fs-workers'
20:38:55.301 [] DEBUG io.zeebe.util.actor - Closing actor thread ground '-zb-actors'
20:38:55.302 [] DEBUG io.zeebe.util.actor - Closing actor thread ground '-zb-fs-workers': closed successfully
20:38:55.302 [] DEBUG io.zeebe.util.actor - Closing actor thread ground '-zb-actors': closed successfully
</pre>
</details>
|
2.0
|
ReplayStatePropertyTest is flaky (maybe?) - **Summary**
- How often does the test fail? Once so far
- Does it block your work? Not really, but could in the future
- Do we suspect that it is a real failure? Seems like a configuration issue with the test
**Failures**
[Failing build](https://ci.zeebe.camunda.cloud/blue/organizations/jenkins/zeebe-io%2Fzeebe/detail/staging/2820/tests/)
<details><summary>Example assertion failure</summary>
<pre>
java.lang.Exception: No tests found matching Unique ID [engine:junit-vintage]/[runner:io.zeebe.engine.processing.streamprocessor.ReplayStatePropertyTest]/[test:%5BTestDataRecord{workFlowSeed=-8632946862485235079, executionPathSeed=861460546695806531}%5D]/[test:shouldRestoreStateAtEachStepInExecution%5BTestDataRecord{workFlowSeed=-8632946862485235079, executionPathSeed=861460546695806531}%5D(io.zeebe.engine.processing.streamprocessor.ReplayStatePropertyTest)] from org.junit.vintage.engine.descriptor.RunnerRequest@374dc136
at org.junit.internal.requests.FilterRequest.getRunner(FilterRequest.java:40)
at org.junit.vintage.engine.descriptor.RunnerTestDescriptor.applyFilters(RunnerTestDescriptor.java:136)
at org.junit.vintage.engine.discovery.RunnerTestDescriptorPostProcessor.applyFiltersAndCreateDescendants(RunnerTestDescriptorPostProcessor.java:46)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
at org.junit.vintage.engine.discovery.VintageDiscoverer.discover(VintageDiscoverer.java:50)
at org.junit.vintage.engine.VintageTestEngine.discover(VintageTestEngine.java:63)
at org.junit.platform.launcher.core.EngineDiscoveryOrchestrator.discoverEngineRoot(EngineDiscoveryOrchestrator.java:103)
at org.junit.platform.launcher.core.EngineDiscoveryOrchestrator.discover(EngineDiscoveryOrchestrator.java:85)
at org.junit.platform.launcher.core.DefaultLauncher.discover(DefaultLauncher.java:92)
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75)
at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:165)
at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:120)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
</pre>
</details>
**Hypotheses**
No firm idea yet - are we generating different unique IDs for these parameterized tests dynamically? The failing unique ID embeds the random seeds (`TestDataRecord{workFlowSeed=..., executionPathSeed=...}`), so could we be confusing JUnit when Surefire reruns these tests on failure and the seeds have changed? The build detected the test as flaky for some reason.
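
If the seeds are drawn fresh on every JVM start, that would explain the "No tests found matching Unique ID" error. Below is a minimal sketch of that suspected mechanism - not the actual `ReplayStatePropertyTest` code; the class, field, and method names here are hypothetical stand-ins - assuming the test uses JUnit 4's `Parameterized` runner and that the parameter's `toString()` feeds into the display name:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Random;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameter;
import org.junit.runners.Parameterized.Parameters;

// Hypothetical reproduction of the suspected problem: the parameter's
// toString() embeds freshly drawn random seeds, so the JUnit display name
// (and therefore the vintage engine's unique ID) changes on every JVM start.
@RunWith(Parameterized.class)
public class SeededNamePropertyTest {

  // Each invocation of parameters() draws new seeds. When Surefire reruns a
  // failed test, it filters by the unique ID recorded from the *previous*
  // run, which no longer matches any generated test - hence
  // "No tests found matching Unique ID ...".
  @Parameters(name = "{0}")
  public static List<Object[]> parameters() {
    final Random random = new Random(); // no fixed seed => unstable names
    return Arrays.asList(
        new Object[] {new TestDataRecord(random.nextLong(), random.nextLong())});
  }

  @Parameter public TestDataRecord record;

  @Test
  public void shouldRestoreStateAtEachStepInExecution() {
    // ... property-based assertions would run here ...
  }

  // Simplified stand-in for the record type seen in the failing unique ID.
  static final class TestDataRecord {
    final long workFlowSeed;
    final long executionPathSeed;

    TestDataRecord(final long workFlowSeed, final long executionPathSeed) {
      this.workFlowSeed = workFlowSeed;
      this.executionPathSeed = executionPathSeed;
    }

    @Override
    public String toString() {
      // This string becomes part of the JUnit display name / unique ID, e.g.
      // TestDataRecord{workFlowSeed=-8632946862485235079, executionPathSeed=861460546695806531}
      return "TestDataRecord{workFlowSeed=" + workFlowSeed
          + ", executionPathSeed=" + executionPathSeed + "}";
    }
  }
}
```

If this is indeed the cause, drawing the seeds from a fixed or logged source (so they can be re-derived deterministically on rerun) would keep the unique IDs stable across Surefire's rerun attempts.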
**Logs**
<details><summary>Logs</summary>
<pre>
Stacktrace
java.lang.Exception: No tests found matching Unique ID [engine:junit-vintage]/[runner:io.zeebe.engine.processing.streamprocessor.ReplayStatePropertyTest]/[test:%5BTestDataRecord{workFlowSeed=-8632946862485235079, executionPathSeed=861460546695806531}%5D]/[test:shouldRestoreStateAtEachStepInExecution%5BTestDataRecord{workFlowSeed=-8632946862485235079, executionPathSeed=861460546695806531}%5D(io.zeebe.engine.processing.streamprocessor.ReplayStatePropertyTest)] from org.junit.vintage.engine.descriptor.RunnerRequest@374dc136
at org.junit.internal.requests.FilterRequest.getRunner(FilterRequest.java:40)
at org.junit.vintage.engine.descriptor.RunnerTestDescriptor.applyFilters(RunnerTestDescriptor.java:136)
at org.junit.vintage.engine.discovery.RunnerTestDescriptorPostProcessor.applyFiltersAndCreateDescendants(RunnerTestDescriptorPostProcessor.java:46)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
at org.junit.vintage.engine.discovery.VintageDiscoverer.discover(VintageDiscoverer.java:50)
at org.junit.vintage.engine.VintageTestEngine.discover(VintageTestEngine.java:63)
at org.junit.platform.launcher.core.EngineDiscoveryOrchestrator.discoverEngineRoot(EngineDiscoveryOrchestrator.java:103)
at org.junit.platform.launcher.core.EngineDiscoveryOrchestrator.discover(EngineDiscoveryOrchestrator.java:85)
at org.junit.platform.launcher.core.DefaultLauncher.discover(DefaultLauncher.java:92)
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75)
at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:165)
at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:120)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Standard Output
20:37:23.909 [Broker-0-LogStream-1] DEBUG io.zeebe.logstreams - Configured log appender back pressure at partition 1 as AppenderVegasCfg{initialLimit=1024, maxConcurrency=32768, alphaLimit=0.7, betaLimit=0.95}. Window limiting is disabled
20:37:24.077 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:24.078 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@67538fcd)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@69946d57, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@6dabe4bd, configuration: Configuration(false)]
20:37:24.079 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:24.080 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@28a9676c)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@53ffa853, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@19cb89ae, configuration: Configuration(false)]
20:37:24.081 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2b771826)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@48869823, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@66950c50, configuration: Configuration(false)]
20:37:24.178 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2b611b1c)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@24a5fa0c, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4bc34488, configuration: Configuration(false)]
20:37:24.432 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:24.432 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:24.497 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:24.609 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:24.610 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@24bd663e)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@2a7a8cd9, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@2e450d4e, configuration: Configuration(false)]
20:37:24.611 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:24.612 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@6b4786e2)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@24849122, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5a66cd2f, configuration: Configuration(false)]
20:37:24.613 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@5c4f8450)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@b6879ad, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@b65e463, configuration: Configuration(false)]
20:37:24.615 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 32
20:37:24.676 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@7e25652e)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@2cf1f5ff, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@154ea218, configuration: Configuration(false)]
20:37:24.727 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 33
20:37:26.890 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:26.891 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:26.983 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:27.185 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:27.187 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@7aa33039)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7464c11d, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@705b5b49, configuration: Configuration(false)]
20:37:27.189 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:27.190 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3559cb26)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5618074c, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4a658bb, configuration: Configuration(false)]
20:37:27.191 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2e0dca75)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@55763469, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@43427bec, configuration: Configuration(false)]
20:37:27.193 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 32
20:37:27.263 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@51252940)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@15fc4437, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@463227e5, configuration: Configuration(false)]
20:37:27.295 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 33
20:37:27.536 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:27.537 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:27.600 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:27.774 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:27.775 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2a9ca32a)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@1a9b79b5, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@7fd90f36, configuration: Configuration(false)]
20:37:27.776 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:27.777 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@611807fd)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@2ed27d07, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@6fc5ffb9, configuration: Configuration(false)]
20:37:27.778 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@5852edc8)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@79d42ed0, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@44534ea8, configuration: Configuration(false)]
20:37:27.780 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 36
20:37:27.805 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@6cdd3ca5)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@6f924a1e, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@35629b98, configuration: Configuration(false)]
20:37:27.876 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 37
20:37:28.010 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:28.011 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:28.049 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:28.145 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:28.146 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3dfb7ad6)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@14aa9611, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4390db15, configuration: Configuration(false)]
20:37:28.147 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:28.148 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@44d3de86)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@22982100, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@6242315c, configuration: Configuration(false)]
20:37:28.149 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@274f0672)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7d32cbf3, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1ec17574, configuration: Configuration(false)]
20:37:28.155 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 78
20:37:28.183 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@25066255)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@3e73b, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@272c02f2, configuration: Configuration(false)]
20:37:28.290 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 79
20:37:28.384 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:28.384 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:28.431 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:28.513 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:28.514 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@5cc80d72)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@520ff89b, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@56306602, configuration: Configuration(false)]
20:37:28.515 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:28.515 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@1332f2a0)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@f8dde29, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@22eb2463, configuration: Configuration(false)]
20:37:28.516 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@77be564a)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@731d0def, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1280e933, configuration: Configuration(false)]
20:37:28.518 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 78
20:37:28.540 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@28a02b42)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@1596d9db, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@9d7f71, configuration: Configuration(false)]
20:37:28.610 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 79
20:37:28.637 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:28.638 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:28.690 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:28.873 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:28.874 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@c94e9a0)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@8ed2dbe, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@75495299, configuration: Configuration(false)]
20:37:28.875 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:28.876 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@5b105e4e)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@70e26565, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1103361a, configuration: Configuration(false)]
20:37:28.877 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2ee17339)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@77daba85, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@14702056, configuration: Configuration(false)]
20:37:28.879 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 78
20:37:28.960 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@7f07ae42)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@21fead30, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@609c66aa, configuration: Configuration(false)]
20:37:29.100 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 79
20:37:29.356 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:29.356 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:29.393 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:29.506 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:29.507 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2d99c8f)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@229a3cf6, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@746e2009, configuration: Configuration(false)]
20:37:29.508 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:29.509 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@25caa1a3)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@6bb96c71, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1837807c, configuration: Configuration(false)]
20:37:29.510 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@1d5c851b)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@6d6bf8aa, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4d91db92, configuration: Configuration(false)]
20:37:29.512 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 82
20:37:29.539 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@1a788623)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@18e72413, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@2f39896d, configuration: Configuration(false)]
20:37:29.604 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 83
20:37:29.777 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:29.777 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:29.814 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:29.905 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:29.906 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3876c034)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@36b1394a, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@328eab44, configuration: Configuration(false)]
20:37:29.908 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:29.909 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@778c3371)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@79753ad4, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@463127d9, configuration: Configuration(false)]
20:37:29.910 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@12270d0)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@17a3cc3e, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@47d0f784, configuration: Configuration(false)]
20:37:29.912 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 114
20:37:29.938 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@51ffc126)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@2938563, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@733c5648, configuration: Configuration(false)]
20:37:30.026 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 115
20:37:30.171 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:30.172 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:30.268 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:30.380 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:30.381 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@23d67fd5)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@3073f5a1, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4cc8240c, configuration: Configuration(false)]
20:37:30.382 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:30.382 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2f15649)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@12c9b915, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@3d388810, configuration: Configuration(false)]
20:37:30.383 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@30dd33ff)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@389678b3, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@67487fa5, configuration: Configuration(false)]
20:37:30.384 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 114
20:37:30.405 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@1816f51b)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@514f410f, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5c12ba1a, configuration: Configuration(false)]
20:37:30.492 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 115
20:37:30.707 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:37:30.708 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:37:30.772 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:37:30.879 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:37:30.880 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@771d133e)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@79daf3ad, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@7de5784e, configuration: Configuration(false)]
20:37:30.881 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:37:30.882 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@38e7f996)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@277c9957, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@995418e, configuration: Configuration(false)]
20:37:30.883 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@5fe47ffa)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@58c806d9, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1bbd8ab6, configuration: Configuration(false)]
20:37:30.885 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 139
20:37:30.905 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@343a0c10)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@bd8300, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@2003cc76, configuration: Configuration(false)]
20:37:31.084 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 141
20:37:31.157 [] INFO io.zeebe.test.records - Test failed, following records were exported:
20:37:31.265 [] INFO io.zeebe.test.records - {"valueType":"DEPLOYMENT","key":-1,"position":1,"timestamp":1612989444105,"recordType":"COMMAND","intent":"CREATE","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"resources":[{"resource":"PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8+CjxkZWZpbml0aW9ucyB4bWxuczpicG1uZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvREkiIHhtbG5zOmRjPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9ERC8yMDEwMDUyNC9EQyIgeG1sbnM6ZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0RELzIwMTAwNTI0L0RJIiB4bWxuczpuczA9Imh0dHA6Ly9jYW11bmRhLm9yZy9zY2hlbWEvemVlYmUvMS4wIiBpZD0iZGVmaW5pdGlvbnNfNzkwZTEwYWItMzA3My00M2ZkLTkyOTktYzY2MmI4NzU3Yzk0IiB0YXJnZXROYW1lc3BhY2U9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvTU9ERUwiIHhtbG5zPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9CUE1OLzIwMTAwNTI0L01PREVMIj4KICA8cHJvY2VzcyBpZD0icHJvY2Vzc19pZF8wIiBpc0V4ZWN1dGFibGU9InRydWUiPgogICAgPHN0YXJ0RXZlbnQgaWQ9ImlkXzEiIG5hbWU9ImlkXzEiPgogICAgICA8b3V0Z29pbmc+c2VxdWVuY2VGbG93X2QwYTgzZmQyLTAwZTEtNGJlZS1iMGNjLTEwOWM0MD...
20:37:31.269 [] INFO io.zeebe.test.records - {"valueType":"DEPLOYMENT","key":2251799813685250,"position":2,"timestamp":1612989444255,"recordType":"EVENT","intent":"CREATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"resources":[{"resource":"PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8+CjxkZWZpbml0aW9ucyB4bWxuczpicG1uZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvREkiIHhtbG5zOmRjPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9ERC8yMDEwMDUyNC9EQyIgeG1sbnM6ZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0RELzIwMTAwNTI0L0RJIiB4bWxuczpuczA9Imh0dHA6Ly9jYW11bmRhLm9yZy9zY2hlbWEvemVlYmUvMS4wIiBpZD0iZGVmaW5pdGlvbnNfNzkwZTEwYWItMzA3My00M2ZkLTkyOTktYzY2MmI4NzU3Yzk0IiB0YXJnZXROYW1lc3BhY2U9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvTU9ERUwiIHhtbG5zPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9CUE1OLzIwMTAwNTI0L01PREVMIj4KICA8cHJvY2VzcyBpZD0icHJvY2Vzc19pZF8wIiBpc0V4ZWN1dGFibGU9InRydWUiPgogICAgPHN0YXJ0RXZlbnQgaWQ9ImlkXzEiIG5hbWU9ImlkXzEiPgogICAgICA8b3V0Z29pbmc+c2VxdWVuY2VGbG93X2QwYTgzZmQyLTAwZTEtNGJlZS1iM...
20:37:31.271 [] INFO io.zeebe.test.records - {"valueType":"DEPLOYMENT","key":2251799813685250,"position":3,"timestamp":1612989444259,"recordType":"COMMAND","intent":"DISTRIBUTE","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"resources":[{"resource":"PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8+CjxkZWZpbml0aW9ucyB4bWxuczpicG1uZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvREkiIHhtbG5zOmRjPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9ERC8yMDEwMDUyNC9EQyIgeG1sbnM6ZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0RELzIwMTAwNTI0L0RJIiB4bWxuczpuczA9Imh0dHA6Ly9jYW11bmRhLm9yZy9zY2hlbWEvemVlYmUvMS4wIiBpZD0iZGVmaW5pdGlvbnNfNzkwZTEwYWItMzA3My00M2ZkLTkyOTktYzY2MmI4NzU3Yzk0IiB0YXJnZXROYW1lc3BhY2U9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvTU9ERUwiIHhtbG5zPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9CUE1OLzIwMTAwNTI0L01PREVMIj4KICA8cHJvY2VzcyBpZD0icHJvY2Vzc19pZF8wIiBpc0V4ZWN1dGFibGU9InRydWUiPgogICAgPHN0YXJ0RXZlbnQgaWQ9ImlkXzEiIG5hbWU9ImlkXzEiPgogICAgICA8b3V0Z29pbmc+c2VxdWVuY2VGbG93X2QwYTgzZmQyLTAwZTEtNGJl...
20:37:31.274 [] INFO io.zeebe.test.records - {"valueType":"DEPLOYMENT","key":2251799813685250,"position":4,"timestamp":1612989444260,"recordType":"EVENT","intent":"DISTRIBUTED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"resources":[{"resource":"PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8+CjxkZWZpbml0aW9ucyB4bWxuczpicG1uZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvREkiIHhtbG5zOmRjPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9ERC8yMDEwMDUyNC9EQyIgeG1sbnM6ZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0RELzIwMTAwNTI0L0RJIiB4bWxuczpuczA9Imh0dHA6Ly9jYW11bmRhLm9yZy9zY2hlbWEvemVlYmUvMS4wIiBpZD0iZGVmaW5pdGlvbnNfNzkwZTEwYWItMzA3My00M2ZkLTkyOTktYzY2MmI4NzU3Yzk0IiB0YXJnZXROYW1lc3BhY2U9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvTU9ERUwiIHhtbG5zPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9CUE1OLzIwMTAwNTI0L01PREVMIj4KICA8cHJvY2VzcyBpZD0icHJvY2Vzc19pZF8wIiBpc0V4ZWN1dGFibGU9InRydWUiPgogICAgPHN0YXJ0RXZlbnQgaWQ9ImlkXzEiIG5hbWU9ImlkXzEiPgogICAgICA8b3V0Z29pbmc+c2VxdWVuY2VGbG93X2QwYTgzZmQyLTAwZTEtNGJlZ...
20:37:31.277 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE_CREATION","key":-1,"position":5,"timestamp":1612989444264,"recordType":"COMMAND","intent":"CREATE","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":-1,"variables":{"fork_id_28_branch":"default-case","fork_id_42_branch":"edge_id_46","fork_id_3_branch":"edge_id_8","fork_id_14_branch":"default-case","fork_id_23_branch":"edge_id_26"},"bpmnProcessId":"process_id_0","workflowInstanceKey":-1,"workflowKey":-1},"sourceRecordPosition":-1}
20:37:31.280 [] INFO io.zeebe.test.records - {"valueType":"VARIABLE","key":2251799813685252,"position":6,"timestamp":1612989444265,"recordType":"EVENT","intent":"CREATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"name":"fork_id_14_branch","value":"\"default-case\"","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249,"scopeKey":2251799813685251},"sourceRecordPosition":5}
20:37:31.280 [] INFO io.zeebe.test.records - {"valueType":"VARIABLE","key":2251799813685253,"position":7,"timestamp":1612989444265,"recordType":"EVENT","intent":"CREATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"name":"fork_id_23_branch","value":"\"edge_id_26\"","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249,"scopeKey":2251799813685251},"sourceRecordPosition":5}
20:37:31.280 [] INFO io.zeebe.test.records - {"valueType":"VARIABLE","key":2251799813685254,"position":8,"timestamp":1612989444265,"recordType":"EVENT","intent":"CREATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"name":"fork_id_28_branch","value":"\"default-case\"","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249,"scopeKey":2251799813685251},"sourceRecordPosition":5}
20:37:31.281 [] INFO io.zeebe.test.records - {"valueType":"VARIABLE","key":2251799813685255,"position":9,"timestamp":1612989444265,"recordType":"EVENT","intent":"CREATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"name":"fork_id_42_branch","value":"\"edge_id_46\"","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249,"scopeKey":2251799813685251},"sourceRecordPosition":5}
20:37:31.281 [] INFO io.zeebe.test.records - {"valueType":"VARIABLE","key":2251799813685256,"position":10,"timestamp":1612989444265,"recordType":"EVENT","intent":"CREATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"name":"fork_id_3_branch","value":"\"edge_id_8\"","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249,"scopeKey":2251799813685251},"sourceRecordPosition":5}
20:37:31.285 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685251,"position":11,"timestamp":1612989444265,"recordType":"EVENT","intent":"ELEMENT_ACTIVATING","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":-1,"bpmnElementType":"PROCESS","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"process_id_0","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":5}
20:37:31.285 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE_CREATION","key":2251799813685257,"position":12,"timestamp":1612989444265,"recordType":"EVENT","intent":"CREATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"variables":{"fork_id_28_branch":"default-case","fork_id_42_branch":"edge_id_46","fork_id_3_branch":"edge_id_8","fork_id_14_branch":"default-case","fork_id_23_branch":"edge_id_26"},"bpmnProcessId":"process_id_0","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":5}
20:37:31.285 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685251,"position":13,"timestamp":1612989444266,"recordType":"EVENT","intent":"ELEMENT_ACTIVATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":-1,"bpmnElementType":"PROCESS","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"process_id_0","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":11}
20:37:31.286 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685258,"position":14,"timestamp":1612989444267,"recordType":"EVENT","intent":"ELEMENT_ACTIVATING","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"START_EVENT","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"id_1","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":13}
20:37:31.286 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685258,"position":15,"timestamp":1612989444268,"recordType":"EVENT","intent":"ELEMENT_ACTIVATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"START_EVENT","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"id_1","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":14}
20:37:31.290 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685258,"position":16,"timestamp":1612989444268,"recordType":"EVENT","intent":"ELEMENT_COMPLETING","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"START_EVENT","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"id_1","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":15}
20:37:31.291 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685258,"position":17,"timestamp":1612989444269,"recordType":"EVENT","intent":"ELEMENT_COMPLETED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"START_EVENT","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"id_1","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":16}
20:37:31.291 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685259,"position":18,"timestamp":1612989444271,"recordType":"EVENT","intent":"SEQUENCE_FLOW_TAKEN","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"SEQUENCE_FLOW","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"sequenceFlow_d0a83fd2-00e1-4bee-b0cc-109c404a6cab","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":17}
20:37:31.291 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685260,"position":19,"timestamp":1612989444272,"recordType":"EVENT","intent":"ELEMENT_ACTIVATING","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"EXCLUSIVE_GATEWAY","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"fork_id_3","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":18}
20:37:31.292 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685260,"position":20,"timestamp":1612989444278,"recordType":"EVENT","intent":"ELEMENT_ACTIVATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"EXCLUSIVE_GATEWAY","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"fork_id_3","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":19}
20:37:31.292 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685260,"position":21,"timestamp":1612989444279,"recordType":"EVENT","intent":"ELEMENT_COMPLETING","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"EXCLUSIVE_GATEWAY","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"fork_id_3","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":20}
20:37:31.292 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685260,"position":22,"timestamp":1612989444281,"recordType":"EVENT","intent":"ELEMENT_COMPLETED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"EXCLUSIVE_GATEWAY","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"fork_id_3","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":21}
20:37:31.293 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685261,"position":23,"timestamp":1612989444285,"recordType":"EVENT","intent":"SEQUENCE_FLOW_TAKEN","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"SEQUENCE_FLOW","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"edge_id_8","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":22}
20:37:31.293 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685262,"position":24,"timestamp":1612989444286,"recordType":"EVENT","intent":"ELEMENT_ACTIVATING","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"PARALLEL_GATEWAY","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"fork_id_9","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":23}
20:37:31.293 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685262,"position":25,"timestamp":1612989444287,"recordType":"EVENT","intent":"ELEMENT_ACTIVATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"PARALLEL_GATEWAY","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"fork_id_9","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":24}
20:37:31.294 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685262,"position":26,"timestamp":1612989444288,"recordType":"EVENT","intent":"ELEMENT_COMPLETING","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"PARALLEL_GATEWAY","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"fork_id_9","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":25}
20:37:31.297 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685262,"position":27,"timestamp":1612989444289,"recordType":"EVENT","intent":"ELEMENT_COMPLETED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"PARALLEL_GATEWAY","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"fork_id_9","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":26}
20:37:31.297 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685263,"position":28,"timestamp":1612989444290,"recordType":"EVENT","intent":"SEQUENCE_FLOW_TAKEN","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"SEQUENCE_FLOW","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"id_12","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":27}
20:37:31.297 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685264,"position":29,"timestamp":1612989444290,"recordType":"EVENT","intent":"SEQUENCE_FLOW_TAKEN","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"SEQUENCE_FLOW","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"sequenceFlow_0aa24ecd-79de-4279-a923-9c107589e91e","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":27}
20:37:31.298 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685265,"position":30,"timestamp":1612989444291,"recordType":"EVENT","intent":"ELEMENT_ACTIVATING","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"SERVICE_TASK","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"id_13","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":28}
20:37:31.298 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE","key":2251799813685265,"position":31,"timestamp":1612989444294,"recordType":"EVENT","intent":"ELEMENT_ACTIVATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":1,"flowScopeKey":2251799813685251,"bpmnElementType":"SERVICE_TASK","parentWorkflowInstanceKey":-1,"parentElementInstanceKey":-1,"bpmnProcessId":"process_id_0","elementId":"id_13","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":30}
20:37:31.307 [] INFO io.zeebe.test.records - {"valueType":"JOB","key":-1,"position":32,"timestamp":1612989444294,"recordType":"COMMAND","intent":"CREATE","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"type":"job_id_13","errorMessage":"","deadline":-1,"variables":{},"errorCode":"","retries":3,"customHeaders":{},"worker":"","workflowDefinitionVersion":1,"elementInstanceKey":2251799813685265,"bpmnProcessId":"process_id_0","elementId":"id_13","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":31}
20:37:31.308 [] INFO io.zeebe.test.records - {"valueType":"JOB","key":2251799813685266,"position":33,"timestamp":1612989444295,"recordType":"EVENT","intent":"CREATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"type":"job_id_13","errorMessage":"","deadline":-1,"variables":{},"errorCode":"","retries":3,"customHeaders":{},"worker":"","workflowDefinitionVersion":1,"elementInstanceKey":2251799813685265,"bpmnProcessId":"process_id_0","elementId":"id_13","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249},"sourceRecordPosition":32}
20:37:31.311 [] INFO io.zeebe.test.records - {"valueType":"DEPLOYMENT","key":-1,"position":1,"timestamp":1612989444105,"recordType":"COMMAND","intent":"CREATE","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"resources":[{"resource":"PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8+CjxkZWZpbml0aW9ucyB4bWxuczpicG1uZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvREkiIHhtbG5zOmRjPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9ERC8yMDEwMDUyNC9EQyIgeG1sbnM6ZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0RELzIwMTAwNTI0L0RJIiB4bWxuczpuczA9Imh0dHA6Ly9jYW11bmRhLm9yZy9zY2hlbWEvemVlYmUvMS4wIiBpZD0iZGVmaW5pdGlvbnNfNzkwZTEwYWItMzA3My00M2ZkLTkyOTktYzY2MmI4NzU3Yzk0IiB0YXJnZXROYW1lc3BhY2U9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvTU9ERUwiIHhtbG5zPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9CUE1OLzIwMTAwNTI0L01PREVMIj4KICA8cHJvY2VzcyBpZD0icHJvY2Vzc19pZF8wIiBpc0V4ZWN1dGFibGU9InRydWUiPgogICAgPHN0YXJ0RXZlbnQgaWQ9ImlkXzEiIG5hbWU9ImlkXzEiPgogICAgICA8b3V0Z29pbmc+c2VxdWVuY2VGbG93X2QwYTgzZmQyLTAwZTEtNGJlZS1iMGNjLTEwOWM0MD...
20:37:31.314 [] INFO io.zeebe.test.records - {"valueType":"DEPLOYMENT","key":2251799813685250,"position":2,"timestamp":1612989444255,"recordType":"EVENT","intent":"CREATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"resources":[{"resource":"PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8+CjxkZWZpbml0aW9ucyB4bWxuczpicG1uZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvREkiIHhtbG5zOmRjPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9ERC8yMDEwMDUyNC9EQyIgeG1sbnM6ZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0RELzIwMTAwNTI0L0RJIiB4bWxuczpuczA9Imh0dHA6Ly9jYW11bmRhLm9yZy9zY2hlbWEvemVlYmUvMS4wIiBpZD0iZGVmaW5pdGlvbnNfNzkwZTEwYWItMzA3My00M2ZkLTkyOTktYzY2MmI4NzU3Yzk0IiB0YXJnZXROYW1lc3BhY2U9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvTU9ERUwiIHhtbG5zPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9CUE1OLzIwMTAwNTI0L01PREVMIj4KICA8cHJvY2VzcyBpZD0icHJvY2Vzc19pZF8wIiBpc0V4ZWN1dGFibGU9InRydWUiPgogICAgPHN0YXJ0RXZlbnQgaWQ9ImlkXzEiIG5hbWU9ImlkXzEiPgogICAgICA8b3V0Z29pbmc+c2VxdWVuY2VGbG93X2QwYTgzZmQyLTAwZTEtNGJlZS1iM...
20:37:31.317 [] INFO io.zeebe.test.records - {"valueType":"DEPLOYMENT","key":2251799813685250,"position":3,"timestamp":1612989444259,"recordType":"COMMAND","intent":"DISTRIBUTE","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"resources":[{"resource":"PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8+CjxkZWZpbml0aW9ucyB4bWxuczpicG1uZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvREkiIHhtbG5zOmRjPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9ERC8yMDEwMDUyNC9EQyIgeG1sbnM6ZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0RELzIwMTAwNTI0L0RJIiB4bWxuczpuczA9Imh0dHA6Ly9jYW11bmRhLm9yZy9zY2hlbWEvemVlYmUvMS4wIiBpZD0iZGVmaW5pdGlvbnNfNzkwZTEwYWItMzA3My00M2ZkLTkyOTktYzY2MmI4NzU3Yzk0IiB0YXJnZXROYW1lc3BhY2U9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvTU9ERUwiIHhtbG5zPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9CUE1OLzIwMTAwNTI0L01PREVMIj4KICA8cHJvY2VzcyBpZD0icHJvY2Vzc19pZF8wIiBpc0V4ZWN1dGFibGU9InRydWUiPgogICAgPHN0YXJ0RXZlbnQgaWQ9ImlkXzEiIG5hbWU9ImlkXzEiPgogICAgICA8b3V0Z29pbmc+c2VxdWVuY2VGbG93X2QwYTgzZmQyLTAwZTEtNGJl...
20:37:31.319 [] INFO io.zeebe.test.records - {"valueType":"DEPLOYMENT","key":2251799813685250,"position":4,"timestamp":1612989444260,"recordType":"EVENT","intent":"DISTRIBUTED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"resources":[{"resource":"PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8+CjxkZWZpbml0aW9ucyB4bWxuczpicG1uZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvREkiIHhtbG5zOmRjPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9ERC8yMDEwMDUyNC9EQyIgeG1sbnM6ZGk9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0RELzIwMTAwNTI0L0RJIiB4bWxuczpuczA9Imh0dHA6Ly9jYW11bmRhLm9yZy9zY2hlbWEvemVlYmUvMS4wIiBpZD0iZGVmaW5pdGlvbnNfNzkwZTEwYWItMzA3My00M2ZkLTkyOTktYzY2MmI4NzU3Yzk0IiB0YXJnZXROYW1lc3BhY2U9Imh0dHA6Ly93d3cub21nLm9yZy9zcGVjL0JQTU4vMjAxMDA1MjQvTU9ERUwiIHhtbG5zPSJodHRwOi8vd3d3Lm9tZy5vcmcvc3BlYy9CUE1OLzIwMTAwNTI0L01PREVMIj4KICA8cHJvY2VzcyBpZD0icHJvY2Vzc19pZF8wIiBpc0V4ZWN1dGFibGU9InRydWUiPgogICAgPHN0YXJ0RXZlbnQgaWQ9ImlkXzEiIG5hbWU9ImlkXzEiPgogICAgICA8b3V0Z29pbmc+c2VxdWVuY2VGbG93X2QwYTgzZmQyLTAwZTEtNGJlZ...
20:37:31.320 [] INFO io.zeebe.test.records - {"valueType":"WORKFLOW_INSTANCE_CREATION","key":-1,"position":5,"timestamp":1612989444264,"recordType":"COMMAND","intent":"CREATE","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"version":-1,"variables":{"fork_id_28_branch":"default-case","fork_id_42_branch":"edge_id_46","fork_id_3_branch":"edge_id_8","fork_id_14_branch":"default-case","fork_id_23_branch":"edge_id_26"},"bpmnProcessId":"process_id_0","workflowInstanceKey":-1,"workflowKey":-1},"sourceRecordPosition":-1}
20:37:31.320 [] INFO io.zeebe.test.records - {"valueType":"VARIABLE","key":2251799813685252,"position":6,"timestamp":1612989444265,"recordType":"EVENT","intent":"CREATED","partitionId":1,"rejectionType":"NULL_VAL","rejectionReason":"","brokerVersion":"1.0.0","value":{"name":"fork_id_14_branch","value":"\"default-case\"","workflowInstanceKey":2251799813685251,"workflowKey":2251799813685249,"scopeKey":2251799813685251},"sourceRecordPosition":5}
20:37:31.320 [] INFO io.zeebe.test.records - {"valueType":"VARIABLE","key":2251799813685253,"position":7,"timestamp":1612989444265,"recordType":"EVENT","intent":"CR
...[truncated 1411553 bytes]...
20:38:49.353 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:49.420 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:49.421 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@444ccd60)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@4a8b2b99, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4eb4a610, configuration: Configuration(false)]
20:38:49.421 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:49.422 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@bb621a2)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@2933ca79, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@3af7eb25, configuration: Configuration(false)]
20:38:49.422 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@51985e4d)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@6b99deed, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5ccea1e5, configuration: Configuration(false)]
20:38:49.423 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 72
20:38:49.432 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@29e404de)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@6350a18a, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@2516cd72, configuration: Configuration(false)]
20:38:49.452 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 73
20:38:49.527 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:49.527 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:49.558 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:49.624 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:49.624 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2f4f9abc)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@21886125, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@25b7f1cc, configuration: Configuration(false)]
20:38:49.625 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:49.625 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@60b56b1e)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@337c97ce, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@511f49cd, configuration: Configuration(false)]
20:38:49.626 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@4b3cebb3)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@9a9451, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@570fb16b, configuration: Configuration(false)]
20:38:49.627 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 72
20:38:49.637 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@132b0187)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@4347617e, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@44e5f336, configuration: Configuration(false)]
20:38:49.659 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 73
20:38:49.834 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:49.834 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:49.865 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:49.931 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:49.931 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2e45eefc)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@1af226d4, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@212d1933, configuration: Configuration(false)]
20:38:49.932 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:49.933 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@53e99121)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@525aa7ae, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5995ffb7, configuration: Configuration(false)]
20:38:49.933 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@10388137)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@f476a88, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@827cb19, configuration: Configuration(false)]
20:38:49.934 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 76
20:38:49.943 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@17ff90d6)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@428aa385, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@b264b50, configuration: Configuration(false)]
20:38:49.966 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 77
20:38:50.141 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:50.142 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:50.173 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:50.241 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:50.241 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@28edbc46)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@299a0858, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@772c2b87, configuration: Configuration(false)]
20:38:50.242 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:50.242 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3825a002)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7d6f24cc, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@44e3bf1, configuration: Configuration(false)]
20:38:50.242 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@122848ec)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@718cf1b, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@355d270, configuration: Configuration(false)]
20:38:50.243 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 87
20:38:50.253 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@604c40b5)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@e319584, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4f97c28e, configuration: Configuration(false)]
20:38:50.275 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 88
20:38:50.450 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:50.451 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:50.482 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:50.549 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:50.549 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@52d95386)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7133f914, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@ae3a0e8, configuration: Configuration(false)]
20:38:50.550 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:50.550 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@4a453f2c)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@52180081, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@3c0db88f, configuration: Configuration(false)]
20:38:50.551 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@2b7ebb06)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@48214d1a, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5a2f5097, configuration: Configuration(false)]
20:38:50.552 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 129
20:38:50.560 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@4f7f23ef)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@28c44e9c, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1ea22703, configuration: Configuration(false)]
20:38:50.590 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 130
20:38:50.656 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:50.656 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:50.687 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:50.754 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:50.755 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@fed39db)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5e079b5a, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@11a9d0e6, configuration: Configuration(false)]
20:38:50.755 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:50.756 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@72c9eb3b)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@140c3758, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@76b3a3c8, configuration: Configuration(false)]
20:38:50.756 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@29426318)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@146fd538, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@61ac961a, configuration: Configuration(false)]
20:38:50.757 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 129
20:38:50.766 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@5eb3138a)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@64cd1607, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@73824c1e, configuration: Configuration(false)]
20:38:50.797 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 130
20:38:50.861 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:50.861 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:50.892 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:50.959 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:50.959 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@359584a5)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7d5d2548, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@e33b494, configuration: Configuration(false)]
20:38:50.960 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:50.960 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@432c1167)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5a05150b, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4ee60c66, configuration: Configuration(false)]
20:38:50.961 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@5d62963a)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7e6603d8, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@cb9ec52, configuration: Configuration(false)]
20:38:50.962 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 129
20:38:50.971 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@50504ce)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@43044bb8, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@6a4fdb17, configuration: Configuration(false)]
20:38:51.001 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 130
20:38:51.169 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:51.170 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:51.201 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:51.267 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:51.268 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@c81e2be)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5004549, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@69e75899, configuration: Configuration(false)]
20:38:51.269 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:51.269 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@6e4c4bfa)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@6ddbd09f, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@6a76ea60, configuration: Configuration(false)]
20:38:51.269 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@39065427)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@3599db66, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@6f5ba62, configuration: Configuration(false)]
20:38:51.270 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 140
20:38:51.280 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@437d6fed)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7e2a91bc, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1a48f9a6, configuration: Configuration(false)]
20:38:51.313 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 141
20:38:51.478 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:51.478 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:51.510 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:51.577 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:51.577 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@5ecdf247)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@500a07ee, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@54359937, configuration: Configuration(false)]
20:38:51.578 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:51.578 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@11370dd5)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@20598a7e, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@20ecdcec, configuration: Configuration(false)]
20:38:51.578 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@586805e7)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7118f29a, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1d7e8cbd, configuration: Configuration(false)]
20:38:51.579 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 144
20:38:51.589 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3a0654c4)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7d41ea69, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@2cf827ca, configuration: Configuration(false)]
20:38:51.621 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 145
20:38:51.787 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:51.787 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:51.818 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:51.884 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:51.885 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@7433c345)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7af423b3, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@7ecba835, configuration: Configuration(false)]
20:38:51.885 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:51.886 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3bc92988)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@518aec5f, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@ab8b466, configuration: Configuration(false)]
20:38:51.886 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@29112f04)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5be6d6c7, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5d0831f0, configuration: Configuration(false)]
20:38:51.887 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 167
20:38:51.896 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@34ba7b6a)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@4e269ae9, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@27e6d6f4, configuration: Configuration(false)]
20:38:51.931 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 168
20:38:52.092 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:52.093 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:52.124 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:52.190 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:52.190 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@4e1d048a)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7a499139, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@3a39e7e0, configuration: Configuration(false)]
20:38:52.191 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:52.191 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@1308b572)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@3517cacd, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@3e1450f, configuration: Configuration(false)]
20:38:52.191 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@7692b474)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@20e72cf1, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@2badcb82, configuration: Configuration(false)]
20:38:52.192 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 167
20:38:52.202 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@88e2c5e)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7c760800, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@7eb81d2c, configuration: Configuration(false)]
20:38:52.237 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 168
20:38:52.295 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:52.295 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:52.326 [Broker-0-LogStream-1] INFO io.zeebe.logstreams - Close appender for log stream stream-1
20:38:52.327 [stream-1-write-buffer] DEBUG io.zeebe.dispatcher - Dispatcher closed
20:38:52.327 [Broker-0-LogStream-1] INFO io.zeebe.logstreams - On closing logstream stream-1 close 25 readers
20:38:52.327 [Broker-0-LogStream-1] INFO io.zeebe.logstreams - Close log storage with name stream-1
20:38:52.327 [] DEBUG io.zeebe.broker.test - Clean up test files on path /tmp/junit16159778016761806535
20:38:52.327 [] DEBUG io.zeebe.util.actor - Closing actor thread ground '-zb-fs-workers'
20:38:52.327 [] DEBUG io.zeebe.util.actor - Closing actor thread ground '-zb-actors'
20:38:52.328 [] DEBUG io.zeebe.util.actor - Closing actor thread ground '-zb-actors': closed successfully
20:38:52.329 [] DEBUG io.zeebe.util.actor - Closing actor thread ground '-zb-fs-workers': closed successfully
20:38:52.400 [Broker-0-LogStream-1] DEBUG io.zeebe.logstreams - Configured log appender back pressure at partition 1 as AppenderVegasCfg{initialLimit=1024, maxConcurrency=32768, alphaLimit=0.7, betaLimit=0.95}. Window limiting is disabled
20:38:52.466 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:52.467 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@73e8ed7f)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7d2f0d72, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@7910860f, configuration: Configuration(false)]
20:38:52.467 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:52.468 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3c120749)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@82596a1, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@d645e4a, configuration: Configuration(false)]
20:38:52.468 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@70b99117)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5c3df806, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@7809edbe, configuration: Configuration(false)]
20:38:52.480 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@63560539)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@81b4a85, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@71c15162, configuration: Configuration(false)]
20:38:52.597 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:52.598 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:52.629 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:52.695 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:52.695 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@5f5fedc6)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5198085d, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@55a786b6, configuration: Configuration(false)]
20:38:52.696 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:52.697 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@10a50266)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@30b4ec69, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@21ffd0c2, configuration: Configuration(false)]
20:38:52.697 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@4804607d)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@708afd29, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@67ca90f5, configuration: Configuration(false)]
20:38:52.698 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 72
20:38:52.708 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3a0727d0)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@9a2eb88, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@6f07b680, configuration: Configuration(false)]
20:38:52.729 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 73
20:38:52.802 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:52.802 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:52.833 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:52.898 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:52.899 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@732fcb85)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7de82b90, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@79d7a0f5, configuration: Configuration(false)]
20:38:52.900 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:52.900 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3cd521ab)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@312bb657, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@da3bf49, configuration: Configuration(false)]
20:38:52.900 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@49b77600)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5cfadee4, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@308f5222, configuration: Configuration(false)]
20:38:52.907 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 72
20:38:52.921 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@225f34bf)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@4b6fac90, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@368352ff, configuration: Configuration(false)]
20:38:52.943 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 73
20:38:53.115 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:53.115 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:53.147 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:53.214 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:53.214 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@57307afb)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@6bbd7823, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5873409a, configuration: Configuration(false)]
20:38:53.215 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:53.215 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@6f203237)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@3b937a8a, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@3b6f61bb, configuration: Configuration(false)]
20:38:53.216 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@79da91dc)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@1705a55a, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@37e2ca1a, configuration: Configuration(false)]
20:38:53.217 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 76
20:38:53.226 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@4e9f5439)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@103c66b1, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@181f0ad8, configuration: Configuration(false)]
20:38:53.246 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 77
20:38:53.424 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:53.424 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:53.455 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:53.521 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:53.522 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@1b5c5489)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@159ba992, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@11a43ff9, configuration: Configuration(false)]
20:38:53.522 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:53.523 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@54a6a3a9)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@2da75da8, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@6538b6b7, configuration: Configuration(false)]
20:38:53.523 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@55d3161e)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@34895ce6, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@23d40dc2, configuration: Configuration(false)]
20:38:53.524 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 118
20:38:53.533 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@606313d3)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@76a1ee22, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@35d11153, configuration: Configuration(false)]
20:38:53.562 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 119
20:38:53.629 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:53.629 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:53.661 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:53.727 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:53.727 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@31045e79)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@3fc3b629, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@3bfc3509, configuration: Configuration(false)]
20:38:53.728 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:53.728 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@7efebc12)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@6d9ad222, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4fca4346, configuration: Configuration(false)]
20:38:53.729 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@154473cb)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@1ed1435d, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@25e19967, configuration: Configuration(false)]
20:38:53.730 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 118
20:38:53.739 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@188edf60)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@fedd687, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@7ecab29d, configuration: Configuration(false)]
20:38:53.767 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 119
20:38:53.834 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:53.834 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:53.866 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:53.932 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:53.932 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@7e9a844f)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@4e2b3ea, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4b46efa4, configuration: Configuration(false)]
20:38:53.933 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:53.933 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@276636f3)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@d7476ac, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1c98f90b, configuration: Configuration(false)]
20:38:53.933 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@605f0ef3)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@76986da5, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5877a4cc, configuration: Configuration(false)]
20:38:53.934 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 118
20:38:53.943 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@3389a162)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@3a8c3f35, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@24855ea8, configuration: Configuration(false)]
20:38:53.971 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 119
20:38:54.142 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:54.142 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:54.173 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:54.241 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:54.241 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@63f8f9c3)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@73931fb5, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5bc42a54, configuration: Configuration(false)]
20:38:54.242 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:54.242 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@db8735d)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@29aab5c8, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@42af79ff, configuration: Configuration(false)]
20:38:54.242 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@38437d74)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@762fd3ba, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@ad73c42, configuration: Configuration(false)]
20:38:54.243 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 122
20:38:54.253 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@52ee61f7)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@7f642207, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@fd8c9bd, configuration: Configuration(false)]
20:38:54.282 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 123
20:38:54.450 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:54.451 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:54.482 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:54.548 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:54.549 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@554d2b0f)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5972dadb, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@74a6ef00, configuration: Configuration(false)]
20:38:54.550 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:54.550 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@21a50fba)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@1d0c4dd2, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@5267f8aa, configuration: Configuration(false)]
20:38:54.550 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@117ac309)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@18abd643, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@67367a59, configuration: Configuration(false)]
20:38:54.551 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 133
20:38:54.560 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@6c06dd44)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@132a0da2, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@1ba1c0cb, configuration: Configuration(false)]
20:38:54.592 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 134
20:38:54.758 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:54.758 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:54.790 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:54.856 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:54.857 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@682b4349)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@66eca03c, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@62d32839, configuration: Configuration(false)]
20:38:54.858 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:54.858 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@27d1aa07)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@155294bf, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@16bc2b69, configuration: Configuration(false)]
20:38:54.858 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@6dd54b67)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@47fe5548, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@4608d664, configuration: Configuration(false)]
20:38:54.859 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 156
20:38:54.869 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@697fe30f)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@544ab3fb, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@58558941, configuration: Configuration(false)]
20:38:54.903 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 157
20:38:55.064 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:55.065 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:55.096 [] INFO io.zeebe.logstreams - Closed stream stream-1
20:38:55.163 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Recovering state of partition 1 from snapshot
20:38:55.163 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@8e47bb3)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@62384df, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@21c10c3, configuration: Configuration(false)]
20:38:55.164 [Broker-0-StreamProcessor-1] INFO io.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
20:38:55.164 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@471e8ee4)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@6c0c4c57, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@76932875, configuration: Configuration(false)]
20:38:55.165 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@146bb46c)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@273171ab, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@29d1efee, configuration: Configuration(false)]
20:38:55.166 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor starts reprocessing, until last source event position 156
20:38:55.175 [Broker-0-StreamProcessor-1] INFO org.camunda.feel.FeelEngine - Engine created. [value-mapper: CompositeValueMapper(List(io.zeebe.el.impl.feel.MessagePackValueMapper@8950795)), function-provider: io.zeebe.el.impl.feel.FeelFunctionProvider@5d319409, clock: io.zeebe.el.impl.ZeebeFeelEngineClock@19bb1ba0, configuration: Configuration(false)]
20:38:55.211 [Broker-0-StreamProcessor-1] INFO io.zeebe.processor - Processor finished reprocessing at event position 157
20:38:55.268 [] DEBUG io.zeebe.util.buffer - Close stream processor
20:38:55.269 [Broker-0-StreamProcessor-1] DEBUG io.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
20:38:55.300 [Broker-0-LogStream-1] INFO io.zeebe.logstreams - Close appender for log stream stream-1
20:38:55.301 [stream-1-write-buffer] DEBUG io.zeebe.dispatcher - Dispatcher closed
20:38:55.301 [Broker-0-LogStream-1] INFO io.zeebe.logstreams - On closing logstream stream-1 close 23 readers
20:38:55.301 [Broker-0-LogStream-1] INFO io.zeebe.logstreams - Close log storage with name stream-1
20:38:55.301 [] DEBUG io.zeebe.broker.test - Clean up test files on path /tmp/junit7573536190744857024
20:38:55.301 [] DEBUG io.zeebe.util.actor - Closing actor thread ground '-zb-fs-workers'
20:38:55.301 [] DEBUG io.zeebe.util.actor - Closing actor thread ground '-zb-actors'
20:38:55.302 [] DEBUG io.zeebe.util.actor - Closing actor thread ground '-zb-fs-workers': closed successfully
20:38:55.302 [] DEBUG io.zeebe.util.actor - Closing actor thread ground '-zb-actors': closed successfully
</pre>
</details>
|
test
|
replaystatepropertytest is flaky maybe summary how often does the test fail once so far does it block your work not really but could in the future do we suspect that it is a real failure seems like a configuration issue with the test failures example assertion failure java lang exception no tests found matching unique id from org junit vintage engine descriptor runnerrequest at org junit internal requests filterrequest getrunner filterrequest java at org junit vintage engine descriptor runnertestdescriptor applyfilters runnertestdescriptor java at org junit vintage engine discovery runnertestdescriptorpostprocessor applyfiltersandcreatedescendants runnertestdescriptorpostprocessor java at java base java util stream foreachops foreachop ofref accept foreachops java at java base java util stream referencepipeline accept referencepipeline java at java base java util stream referencepipeline accept referencepipeline java at java base java util iterator foreachremaining iterator java at java base java util spliterators iteratorspliterator foreachremaining spliterators java at java base java util stream abstractpipeline copyinto abstractpipeline java at java base java util stream abstractpipeline wrapandcopyinto abstractpipeline java at java base java util stream foreachops foreachop evaluatesequential foreachops java at java base java util stream foreachops foreachop ofref evaluatesequential foreachops java at java base java util stream abstractpipeline evaluate abstractpipeline java at java base java util stream referencepipeline foreach referencepipeline java at org junit vintage engine discovery vintagediscoverer discover vintagediscoverer java at org junit vintage engine vintagetestengine discover vintagetestengine java at org junit platform launcher core enginediscoveryorchestrator discoverengineroot enginediscoveryorchestrator java at org junit platform launcher core enginediscoveryorchestrator discover enginediscoveryorchestrator java at org junit platform launcher core defaultlauncher discover defaultlauncher java at org junit platform launcher core defaultlauncher execute defaultlauncher java at org apache maven surefire junitplatform junitplatformprovider invokealltests junitplatformprovider java at org apache maven surefire junitplatform junitplatformprovider invoke junitplatformprovider java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java hypotheses no idea are we generating different ids for these tests dynamically could we be messing with junit when we rerun these tests on failure it detected it as flaky for some reason logs logs stacktrace java lang exception no tests found matching unique id from org junit vintage engine descriptor runnerrequest at org junit internal requests filterrequest getrunner filterrequest java at org junit vintage engine descriptor runnertestdescriptor applyfilters runnertestdescriptor java at org junit vintage engine discovery runnertestdescriptorpostprocessor applyfiltersandcreatedescendants runnertestdescriptorpostprocessor java at java base java util stream foreachops foreachop ofref accept foreachops java at java base java util stream referencepipeline accept referencepipeline java at java base java util stream referencepipeline accept referencepipeline java at java base java util iterator 
foreachremaining iterator java at java base java util spliterators iteratorspliterator foreachremaining spliterators java at java base java util stream abstractpipeline copyinto abstractpipeline java at java base java util stream abstractpipeline wrapandcopyinto abstractpipeline java at java base java util stream foreachops foreachop evaluatesequential foreachops java at java base java util stream foreachops foreachop ofref evaluatesequential foreachops java at java base java util stream abstractpipeline evaluate abstractpipeline java at java base java util stream referencepipeline foreach referencepipeline java at org junit vintage engine discovery vintagediscoverer discover vintagediscoverer java at org junit vintage engine vintagetestengine discover vintagetestengine java at org junit platform launcher core enginediscoveryorchestrator discoverengineroot enginediscoveryorchestrator java at org junit platform launcher core enginediscoveryorchestrator discover enginediscoveryorchestrator java at org junit platform launcher core defaultlauncher discover defaultlauncher java at org junit platform launcher core defaultlauncher execute defaultlauncher java at org apache maven surefire junitplatform junitplatformprovider invokealltests junitplatformprovider java at org apache maven surefire junitplatform junitplatformprovider invoke junitplatformprovider java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java standard output debug io zeebe logstreams configured log appender back pressure at partition as appendervegascfg initiallimit maxconcurrency alphalimit betalimit window limiting is disabled debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info org camunda feel feelengine engine created debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished 
reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor 
controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position info io zeebe test records test failed following records were exported info io zeebe test records valuetype deployment key position timestamp recordtype command intent create partitionid rejectiontype null val rejectionreason brokerversion value resources resource info io zeebe test records valuetype deployment key position timestamp recordtype event intent created partitionid rejectiontype null val rejectionreason brokerversion value resources resource info io zeebe test records valuetype deployment key position timestamp recordtype command intent distribute partitionid rejectiontype null val rejectionreason brokerversion value resources resource info io zeebe test records valuetype deployment key position timestamp recordtype event intent distributed partitionid rejectiontype null val rejectionreason brokerversion value resources resource info io zeebe test records valuetype workflow instance creation key position timestamp recordtype command intent create partitionid rejectiontype null val rejectionreason brokerversion value version variables fork id branch default case fork id branch edge id fork id branch edge id fork id branch default case fork id branch edge id bpmnprocessid process id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype variable key position timestamp recordtype event intent created partitionid rejectiontype null val rejectionreason brokerversion value name fork id branch value default case workflowinstancekey workflowkey scopekey sourcerecordposition info io zeebe test records valuetype variable key position 
timestamp recordtype event intent created partitionid rejectiontype null val rejectionreason brokerversion value name fork id branch value edge id workflowinstancekey workflowkey scopekey sourcerecordposition info io zeebe test records valuetype variable key position timestamp recordtype event intent created partitionid rejectiontype null val rejectionreason brokerversion value name fork id branch value default case workflowinstancekey workflowkey scopekey sourcerecordposition info io zeebe test records valuetype variable key position timestamp recordtype event intent created partitionid rejectiontype null val rejectionreason brokerversion value name fork id branch value edge id workflowinstancekey workflowkey scopekey sourcerecordposition info io zeebe test records valuetype variable key position timestamp recordtype event intent created partitionid rejectiontype null val rejectionreason brokerversion value name fork id branch value edge id workflowinstancekey workflowkey scopekey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent element activating partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey bpmnelementtype process parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid process id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance creation key position timestamp recordtype event intent created partitionid rejectiontype null val rejectionreason brokerversion value version variables fork id branch default case fork id branch edge id fork id branch edge id fork id branch default case fork id branch edge id bpmnprocessid process id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent element activated partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey bpmnelementtype process parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid process id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent element activating partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey bpmnelementtype start event parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent element activated partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey bpmnelementtype start event parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent element completing partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey bpmnelementtype start event parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent element completed partitionid rejectiontype null val rejectionreason brokerversion value version 
flowscopekey bpmnelementtype start event parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent sequence flow taken partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey bpmnelementtype sequence flow parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid sequenceflow workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent element activating partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey bpmnelementtype exclusive gateway parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid fork id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent element activated partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey bpmnelementtype exclusive gateway parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid fork id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent element completing partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey bpmnelementtype exclusive gateway parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid fork id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent element completed partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey bpmnelementtype exclusive gateway parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid fork id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent sequence flow taken partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey bpmnelementtype sequence flow parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid edge id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent element activating partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey bpmnelementtype parallel gateway parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid fork id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent element activated partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey bpmnelementtype parallel gateway parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid fork id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent element completing partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey 
bpmnelementtype parallel gateway parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid fork id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent element completed partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey bpmnelementtype parallel gateway parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid fork id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent sequence flow taken partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey bpmnelementtype sequence flow parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent sequence flow taken partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey bpmnelementtype sequence flow parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid sequenceflow workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent element activating partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey bpmnelementtype service task parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype workflow instance key position timestamp recordtype event intent element activated partitionid rejectiontype null val rejectionreason brokerversion value version flowscopekey bpmnelementtype service task parentworkflowinstancekey parentelementinstancekey bpmnprocessid process id elementid id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype job key position timestamp recordtype command intent create partitionid rejectiontype null val rejectionreason brokerversion value type job id errormessage deadline variables errorcode retries customheaders worker workflowdefinitionversion elementinstancekey bpmnprocessid process id elementid id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype job key position timestamp recordtype event intent created partitionid rejectiontype null val rejectionreason brokerversion value type job id errormessage deadline variables errorcode retries customheaders worker workflowdefinitionversion elementinstancekey bpmnprocessid process id elementid id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype deployment key position timestamp recordtype command intent create partitionid rejectiontype null val rejectionreason brokerversion value resources resource info io zeebe test records valuetype deployment key position timestamp recordtype event intent created partitionid rejectiontype null val rejectionreason brokerversion value resources resource info io zeebe test records valuetype deployment key position timestamp recordtype command intent distribute partitionid rejectiontype null val rejectionreason brokerversion value resources resource info io zeebe test records valuetype deployment key position timestamp 
recordtype event intent distributed partitionid rejectiontype null val rejectionreason brokerversion value resources resource info io zeebe test records valuetype workflow instance creation key position timestamp recordtype command intent create partitionid rejectiontype null val rejectionreason brokerversion value version variables fork id branch default case fork id branch edge id fork id branch edge id fork id branch default case fork id branch edge id bpmnprocessid process id workflowinstancekey workflowkey sourcerecordposition info io zeebe test records valuetype variable key position timestamp recordtype event intent created partitionid rejectiontype null val rejectionreason brokerversion value name fork id branch value default case workflowinstancekey workflowkey scopekey sourcerecordposition info io zeebe test records valuetype variable key position timestamp recordtype event intent cr info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor 
debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io 
zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams close appender for log stream stream debug io zeebe dispatcher dispatcher closed info io zeebe logstreams on closing logstream stream close readers info io zeebe logstreams close log storage with name stream debug io zeebe broker test clean up test files on path tmp debug io zeebe util actor closing actor thread ground zb fs workers debug io zeebe util actor closing actor thread ground zb actors debug io zeebe util actor closing actor thread ground zb actors closed successfully debug io zeebe util actor closing actor thread ground zb fs workers closed successfully debug io zeebe logstreams configured log appender back pressure at partition as appendervegascfg initiallimit maxconcurrency alphalimit betalimit window limiting is disabled debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info org camunda feel feelengine engine created debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel 
feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel 
feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams closed stream stream debug io zeebe logstreams recovering state of partition from snapshot info org camunda feel feelengine engine created info io zeebe logstreams recovered state of partition from snapshot at position info org camunda feel feelengine engine created info org camunda feel feelengine engine created info io zeebe processor processor starts reprocessing until last source event position info org camunda feel feelengine engine created info io zeebe processor processor finished reprocessing at event position debug io zeebe util buffer close stream processor debug io zeebe logstreams closed stream processor controller broker streamprocessor info io zeebe logstreams close appender for log stream stream debug io zeebe dispatcher dispatcher closed info io zeebe logstreams on closing logstream stream close readers info io zeebe logstreams close log storage with name stream debug io zeebe broker test clean up test files on path tmp debug io zeebe util actor closing actor thread ground zb fs workers debug io zeebe util actor closing actor thread ground zb actors debug io zeebe util actor closing actor thread ground zb fs workers closed successfully debug io zeebe util actor closing actor thread ground zb actors closed successfully
| 1
|
73,547
| 7,343,985,021
|
IssuesEvent
|
2018-03-07 13:19:39
|
openshift/origin
|
https://api.github.com/repos/openshift/origin
|
closed
|
Pod shall not transition from terminated phase: "Failed" -> "Succeeded"
|
component/containers kind/test-flake priority/P0 sig/containers sig/master
|
```
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1299
2017-12-05 09:50:41.393147976 +0000 UTC: detected deployer pod transition from terminated phase: "Failed" -> "Succeeded"
Expected
<bool>: true
to be false
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/util.go:727
```
https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/17589/test_pull_request_origin_extended_conformance_install/3497/
/sig master
/kind test-flake
/assign mfojtik tnozicka
xref https://bugzilla.redhat.com/show_bug.cgi?id=1534492
|
1.0
|
Pod shall not transition from terminated phase: "Failed" -> "Succeeded" - ```
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1299
2017-12-05 09:50:41.393147976 +0000 UTC: detected deployer pod transition from terminated phase: "Failed" -> "Succeeded"
Expected
<bool>: true
to be false
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/util.go:727
```
https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/17589/test_pull_request_origin_extended_conformance_install/3497/
/sig master
/kind test-flake
/assign mfojtik tnozicka
xref https://bugzilla.redhat.com/show_bug.cgi?id=1534492
|
test
|
pod shall not transition from terminated phase failed succeeded go src github com openshift origin output local go src github com openshift origin test extended deployments deployments go utc detected deployer pod transition from terminated phase failed succeeded expected true to be false go src github com openshift origin output local go src github com openshift origin test extended deployments util go sig master kind test flake assign mfojtik tnozicka xref
| 1
|
259,351
| 22,469,918,763
|
IssuesEvent
|
2022-06-22 07:11:18
|
team-e-techeer/schoolvery-be
|
https://api.github.com/repos/team-e-techeer/schoolvery-be
|
opened
|
Write User part Test Code
|
test
|
**🫧 Feature to implement**
create chat app
**✨ Decisions**
**🍀 Tech stack, libraries, or tools used (or added) to implement this feature**
🍴 Progress
- [x] todo 1
- [ ] todo 2
- [ ] todo 3
|
1.0
|
Write User part Test Code - **🫧 Feature to implement**
create chat app
**✨ Decisions**
**🍀 Tech stack, libraries, or tools used (or added) to implement this feature**
🍴 Progress
- [x] todo 1
- [ ] todo 2
- [ ] todo 3
|
test
|
write user part test code 🫧 feature to implement create chat app ✨ decisions 🍀 tech stack libraries or tools used added to implement this feature 🍴 progress todo todo todo
| 1
|
37,129
| 18,149,884,575
|
IssuesEvent
|
2021-09-26 04:41:58
|
EBWiki/EBWiki
|
https://api.github.com/repos/EBWiki/EBWiki
|
opened
|
Use partials and collections to render agency table
|
UI performance hacktoberfest good first issue
|
Add a partial that renders the content of an individual table row in the agencies index view, lines 13-16 in `app/views/agencies/index.html.erb`. Then update the view to use the partial and pass the agencies as a collection.
Dev note: for an example, check out how we render the list of cases in `app/views/cases/index.html.erb`
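For illustration only, here is a minimal sketch; it is not taken from the EBWiki codebase, and the `_agency` partial name, the `name`/`state` columns, and the `@agencies` instance variable are all assumptions:
```erb
<%# app/views/agencies/_agency.html.erb (hypothetical row partial) %>
<tr>
  <td><%= link_to agency.name, agency %></td>
  <td><%= agency.state %></td>
</tr>
```
```erb
<%# app/views/agencies/index.html.erb (renders the partial once per agency) %>
<tbody>
  <%= render partial: "agency", collection: @agencies %>
</tbody>
```
Rails also accepts the shorthand `<%= render @agencies %>`, which infers the `_agency` partial from the model name of each collection item.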
|
True
|
Use partials and collections to render agency table - Add a partial that renders the content of an individual table row in the agencies index view, lines 13-16 in `app/views/agencies/index.html.erb`. Then update the view to use the partial and pass the agencies as a collection.
Dev note: for an example, check out how we render the list of cases in `app/views/cases/index.html.erb`
|
non_test
|
use partials and collections to render agency table add a partial that renders the content of an individual table row in the agencies index view lines in app views agencies index html erb then update the view to use the partial and pass the agencies as a collection dev note for an example check out how we render the list of cases in app views cases index html erb
| 0
|
215,626
| 7,295,997,231
|
IssuesEvent
|
2018-02-26 09:18:32
|
Silikonspray/BobTheBotmeister
|
https://api.github.com/repos/Silikonspray/BobTheBotmeister
|
closed
|
Fix LoL commands
|
Priority: High bug
|
When typing a command in the long form like this:
=lol search Silikonspray13
instead of taking "Silikonspray13" as the name, "arch Silikonspray13" gets recognized as the name.
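The bot's source isn't shown here, but the symptom matches a handler that strips a fixed-length prefix instead of the full `=lol search ` prefix; a minimal Ruby sketch of that hypothesis:
```ruby
# Hypothetical reconstruction: slicing 7 characters drops only "=lol se",
# leaving "arch Silikonspray13" behind as the supposed name.
msg = "=lol search Silikonspray13"

broken_name = msg[7..-1]                              # => "arch Silikonspray13"
fixed_name  = msg.delete_prefix("=lol search ").strip # => "Silikonspray13"

puts broken_name
puts fixed_name
```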
|
1.0
|
Fix LoL commands - When typing a command in the long form like this:
=lol search Silikonspray13
instead of taking "Silikonspray13" as the name, "arch Silikonspray13" gets recognized as the name.
|
non_test
|
fix lol commands when typing a command in the long form like this lol search instead of taking as the name arch gets recognized as the name
| 0
|
261,998
| 22,785,423,952
|
IssuesEvent
|
2022-07-09 06:50:24
|
ValveSoftware/Source-1-Games
|
https://api.github.com/repos/ValveSoftware/Source-1-Games
|
closed
|
[TF2] Mann Vs. Machine loading screen does not appear when joining game via Matchmaking
|
Team Fortress 2 Need Retest
|
The loading screen made specifically for Mann Vs. Machine mode does not appear when a user connects to a game through official matchmaking (Mann Up, Boot Camp). This is probably an oversight in the code that handles swapping between the default load screen and the one used for Competitive mode.
Loading screen user sees when joining through matchmaking:

Loading screen user sees when joining through any other means:

|
1.0
|
[TF2] Mann Vs. Machine loading screen does not appear when joining game via Matchmaking - The loading screen made specifically for Mann Vs. Machine mode does not appear when a user connects to a game through official matchmaking (Mann Up, Boot Camp). This is probably an oversight in the code that handles swapping between the default load screen and the one used for Competitive mode.
Loading screen user sees when joining through matchmaking:

Loading screen user sees when joining through any other means:

|
test
|
mann vs machine loading screen does not appear when joining game via matchmaking the loading screen made specifically for mann vs machine mode does not appear when a user connects to a game through official matchmaking mann up boot camp this is probably an oversight in the code that handles swapping between the default load screen and the one used for competitive mode loading screen user sees when joining through matchmaking loading screen user sees when joining through any other means
| 1