Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3 values | title stringlengths 1 1.02k | labels stringlengths 4 1.54k | body stringlengths 1 262k | index stringclasses 17 values | text_combine stringlengths 95 262k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
398,862 | 27,215,442,148 | IssuesEvent | 2023-02-20 21:09:40 | markrossington/sidewinder-x2-marlin | https://api.github.com/repos/markrossington/sidewinder-x2-marlin | closed | Make steps clearer for non software engineer | documentation | Clean up the steps in the readme.
- Can you double click Python files? And on all systems?
- Better filenames?
Share thoughts if you read this and have any ideas. | 1.0 | Make steps clearer for non software engineer - Clean up the steps in the readme.
- Can you double click Python files? And on all systems?
- Better filenames?
Share thoughts if you read this and have any ideas. | non_test | make steps clearer for non software engineer clean up the steps in the readme can you double click python files and on all systems better filenames share thoughts if you read this and have any ideas | 0 |
303,195 | 26,191,864,618 | IssuesEvent | 2023-01-03 09:49:20 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: costfuzz/rand-tables failed | C-test-failure O-robot O-roachtest branch-master release-blocker | roachtest.costfuzz/rand-tables [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8166107?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8166107?buildTab=artifacts#/costfuzz/rand-tables) on master @ [1d7bd69205c2197ccac33df9e2e6d4ff8c0fdbcf](https://github.com/cockroachdb/cockroach/commits/1d7bd69205c2197ccac33df9e2e6d4ff8c0fdbcf):
```
test artifacts and logs in: /artifacts/costfuzz/rand-tables/run_1
(query_comparison_util.go:158).runOneRoundQueryComparison: pq: Use of partitions requires an enterprise license. Your evaluation license expired on December 30, 2022. If you're interested in getting a new license, please contact subscriptions@cockroachlabs.com and we can help you out.
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*costfuzz/rand-tables.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | roachtest: costfuzz/rand-tables failed - roachtest.costfuzz/rand-tables [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8166107?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8166107?buildTab=artifacts#/costfuzz/rand-tables) on master @ [1d7bd69205c2197ccac33df9e2e6d4ff8c0fdbcf](https://github.com/cockroachdb/cockroach/commits/1d7bd69205c2197ccac33df9e2e6d4ff8c0fdbcf):
```
test artifacts and logs in: /artifacts/costfuzz/rand-tables/run_1
(query_comparison_util.go:158).runOneRoundQueryComparison: pq: Use of partitions requires an enterprise license. Your evaluation license expired on December 30, 2022. If you're interested in getting a new license, please contact subscriptions@cockroachlabs.com and we can help you out.
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*costfuzz/rand-tables.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| test | roachtest costfuzz rand tables failed roachtest costfuzz rand tables with on master test artifacts and logs in artifacts costfuzz rand tables run query comparison util go runoneroundquerycomparison pq use of partitions requires an enterprise license your evaluation license expired on december if you re interested in getting a new license please contact subscriptions cockroachlabs com and we can help you out parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest ssd help see see cc cockroachdb sql queries | 1 |
689,678 | 23,630,292,367 | IssuesEvent | 2022-08-25 08:45:26 | oceanprotocol/docs | https://api.github.com/repos/oceanprotocol/docs | opened | Postman examples for Provider | Type: Enhancement Priority: Low | It would be nice to have an equivalent of the Aquarius postman examples for provider. This is not high priority at the moment though as there aren't too many people using it. | 1.0 | Postman examples for Provider - It would be nice to have an equivalent of the Aquarius postman examples for provider. This is not high priority at the moment though as there aren't too many people using it. | non_test | postman examples for provider it would be nice to have an equivalent of the aquarius postman examples for provider this is not high priority at the moment though as there aren t too many people using it | 0 |
177,443 | 13,724,383,902 | IssuesEvent | 2020-10-03 14:01:10 | webpack/webpack-cli | https://api.github.com/repos/webpack/webpack-cli | closed | feature: integration of serve package into cli | Feature Tests enhancement | **Is your feature request related to a problem? Please describe.**
Part of the roadmap: we would be integrating serve by default into the CLI so that users need not install it to use dev-server from the command line.
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the solution you'd like**
- [ ] Robust tests for `serve`'s current features (if needed)
- [ ] Refactor `serve` and add/remove features from it
- [ ] Add tests for the previous step changes (if any)
- [ ] Integrate it with CLI
- [ ] Add integration tests for verifying integration
<!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
Roadmap #717
<!-- Add any other context or screenshots about the feature request here. -->
/cc @evilebottnawi | 1.0 | feature: integration of serve package into cli - **Is your feature request related to a problem? Please describe.**
Part of the roadmap: we would be integrating serve by default into the CLI so that users need not install it to use dev-server from the command line.
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the solution you'd like**
- [ ] Robust tests for `serve`'s current features (if needed)
- [ ] Refactor `serve` and add/remove features from it
- [ ] Add tests for the previous step changes (if any)
- [ ] Integrate it with CLI
- [ ] Add integration tests for verifying integration
<!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
Roadmap #717
<!-- Add any other context or screenshots about the feature request here. -->
/cc @evilebottnawi | test | feature integration of serve package into cli is your feature request related to a problem please describe part of roadmap we would be integrating serve by default into the cli so that user need not install it to use dev server from command line describe the solution you d like robust tests for serve s current features if needed refactor serve and add remove features from it add tests for the previous step changes if any integrate it with cli add integration tests for verifying integration describe alternatives you ve considered additional context roadmap cc evilebottnawi | 1 |
192,383 | 14,615,643,216 | IssuesEvent | 2020-12-22 11:54:39 | WPChill/strong-testimonials | https://api.github.com/repos/WPChill/strong-testimonials | closed | Slideshow doesn’t fully load in Firefox when using anchor link | bug can't replicate tested | **Describe the bug**
I have an anchor link in the main menu that scrolls to the Bio section of the home page that is below the slideshow. When I am on the home page and click the anchor link it works as expected. In Firefox (latest version), when I am on another page and use the Bio anchor link in the main menu, or if you just directly load the link with the anchor on the end, the home page loads and scrolls, but the slideshow only loads the navigation arrows. When I refresh the page the slideshow loads.

**To Reproduce**
Steps to reproduce the behavior:
1. add an anchor link in the main menu that scrolls to a section of the home page that is below the slideshow
2. then go to another page on the site and click on the anchor in the menu.
3. the slideshow only loads the navigation arrows
4. check in Firefox this behavior
Screenshot with settings:

**Expected behavior**

<!-- You can check these boxes once you've created the issue. -->
* Which browser is affected (or browsers):
- [x] Firefox
<!-- You can check these boxes once you've created the issue. -->
* Which device is affected (or devices):
- [x] Desktop
#### Used versions
* WordPress version: 5.5
* Strong Testimonials version: 2.50.0
https://wordpress.org/support/topic/slide-show-doesnt-fully-load-in-firfox-when-using-anchor-link/
| 1.0 | Slideshow doesn’t fully load in Firefox when using anchor link - **Describe the bug**
I have an anchor link in the main menu that scrolls to the Bio section of the home page that is below the slideshow. When I am on the home page and click the anchor link it works as expected. In Firefox (latest version), when I am on another page and use the Bio anchor link in the main menu, or if you just directly load the link with the anchor on the end, the home page loads and scrolls, but the slideshow only loads the navigation arrows. When I refresh the page the slideshow loads.

**To Reproduce**
Steps to reproduce the behavior:
1. add an anchor link in the main menu that scrolls to a section of the home page that is below the slideshow
2. then go to another page on the site and click on the anchor in the menu.
3. the slideshow only loads the navigation arrows
4. check in Firefox this behavior
Screenshot with settings:

**Expected behavior**

<!-- You can check these boxes once you've created the issue. -->
* Which browser is affected (or browsers):
- [x] Firefox
<!-- You can check these boxes once you've created the issue. -->
* Which device is affected (or devices):
- [x] Desktop
#### Used versions
* WordPress version: 5.5
* Strong Testimonials version: 2.50.0
https://wordpress.org/support/topic/slide-show-doesnt-fully-load-in-firfox-when-using-anchor-link/
| test | slideshow doesn’t fully load in firefox when using anchor link describe the bug i have an anchor link in the main menu that scrolls to the bio section of the home page that is below the slide show when i am on the home page and click the anchor link it works as expected in firefox latest version when i am on another page and use bio anchor link in the main menu or if you just directly load the link with the anchor on the end the home page loads and scrolls but the slideshow only loads the navigation arrows when i refresh the page the slide show loads to reproduce steps to reproduce the behavior add an anchor link in the main menu that scrolls to the a section of the home page that is below the slideshow then go to another page on the site and click on the anchor in the menu the slideshow only loads the navigation arrows check in firefox this behavior screenshot with settings expected behavior which browser is affected or browsers firefox which device is affected or devices desktop used versions wordpress version strong testimonials version | 1 |
480,947 | 13,878,460,246 | IssuesEvent | 2020-10-17 09:41:24 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | opened | The Group send invites list itself is not able to load more names when scrolling down | bug priority: medium | **Describe the bug**
The list of members has an issue loading in Group send invites: the list itself is not able to load more names when scrolling down. Works in other browsers and different locations.
**To Reproduce**
Steps to reproduce the behavior:
See this video
https://www.loom.com/share/5aa75f0fa5194da0b384561e3eca7773
**Expected behavior**
Should able to load more names when scrolling down
**Support ticket links**
https://secure.helpscout.net/conversation/1308180589/103192?folderId=3955985 | 1.0 | The Group send invites list itself is not able to load more names when scrolling down - **Describe the bug**
The list of members has an issue loading in Group send invites: the list itself is not able to load more names when scrolling down. Works in other browsers and different locations.
**To Reproduce**
Steps to reproduce the behavior:
See this video
https://www.loom.com/share/5aa75f0fa5194da0b384561e3eca7773
**Expected behavior**
Should able to load more names when scrolling down
**Support ticket links**
https://secure.helpscout.net/conversation/1308180589/103192?folderId=3955985 | non_test | the group send invites list itself is not able to load more names when scrolling down describe the bug the list of members has an issue to load in group send invites the list itself is not able to load more names when scrolling down works in other browsers and diff locations to reproduce steps to reproduce the behavior see this video expected behavior should able to load more names when scrolling down support ticket links | 0 |
133,598 | 18,298,975,180 | IssuesEvent | 2021-10-05 23:50:18 | bsbtd/Teste | https://api.github.com/repos/bsbtd/Teste | opened | CVE-2020-11619 (High) detected in jackson-databind-2.9.5.jar | security vulnerability | ## CVE-2020-11619 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: Teste/liferay-portal/modules/etl/talend/talend-runtime/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- components-api-0.25.3.jar (Root Library)
- daikon-0.27.0.jar
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bsbtd/Teste/commit/64dde89c50c07496423c4d4a865f2e16b92399ad">64dde89c50c07496423c4d4a865f2e16b92399ad</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.springframework.aop.config.MethodLocatingFactoryBean (aka spring-aop).
<p>Publish Date: 2020-04-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11619>CVE-2020-11619</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11619">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11619</a></p>
<p>Release Date: 2020-04-07</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-11619 (High) detected in jackson-databind-2.9.5.jar - ## CVE-2020-11619 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: Teste/liferay-portal/modules/etl/talend/talend-runtime/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- components-api-0.25.3.jar (Root Library)
- daikon-0.27.0.jar
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bsbtd/Teste/commit/64dde89c50c07496423c4d4a865f2e16b92399ad">64dde89c50c07496423c4d4a865f2e16b92399ad</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.springframework.aop.config.MethodLocatingFactoryBean (aka spring-aop).
<p>Publish Date: 2020-04-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11619>CVE-2020-11619</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11619">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11619</a></p>
<p>Release Date: 2020-04-07</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file teste liferay portal modules etl talend talend runtime pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy components api jar root library daikon jar x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org springframework aop config methodlocatingfactorybean aka spring aop publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource | 0 |
248,231 | 21,003,703,146 | IssuesEvent | 2022-03-29 20:06:33 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | opened | Webview integration tests timing out on GitHub CI | webview integration-test-failure | Seeing webview integration tests timeout more often today. This specifically seems to happen on the GitHub CI.
https://github.com/microsoft/vscode/runs/5742978871?check_suite_focus=true
https://github.com/microsoft/vscode/runs/5742931259?check_suite_focus=true
https://github.com/microsoft/vscode/runs/5742931058?check_suite_focus=true
https://github.com/microsoft/vscode/runs/5741763332?check_suite_focus=true
https://github.com/microsoft/vscode/runs/5742831884?check_suite_focus=true
https://github.com/microsoft/vscode/runs/5742831969?check_suite_focus=true | 1.0 | Webview integration tests timing out on GitHub CI - Seeing webview integration tests time out more often today. This specifically seems to happen on the GitHub CI.
https://github.com/microsoft/vscode/runs/5742978871?check_suite_focus=true
https://github.com/microsoft/vscode/runs/5742931259?check_suite_focus=true
https://github.com/microsoft/vscode/runs/5742931058?check_suite_focus=true
https://github.com/microsoft/vscode/runs/5741763332?check_suite_focus=true
https://github.com/microsoft/vscode/runs/5742831884?check_suite_focus=true
https://github.com/microsoft/vscode/runs/5742831969?check_suite_focus=true | test | webview integration tests timing out on github ci seeing webview integration tests timeout more often today this specifically seems to happen on the github ci | 1 |
422,774 | 28,480,287,232 | IssuesEvent | 2023-04-18 01:41:10 | inlab-geo/espresso | https://api.github.com/repos/inlab-geo/espresso | closed | Rationalise docs & readmes | documentation | We currently have a variety of README.md files scattered through the repository, and contributor/developer guides within the docs. This is a future maintenance headache, and I think we already have some inconsistent advice.
I propose that we consolidate all important information into the 'docs', within the contributor and developer guides, and delete it from the README files. Instead, these should contain links to the relevant docs pages.
@jwhhh If you agree, can you do the initial migration of information -- you are better-placed than me to determine which information is still current.
Builds on #107.
UPDATE: also include [FAQ](https://hackmd.io/q-biMWqRSBOV51I9g1BNcQ#Espresso) in this PR. | 1.0 | Rationalise docs & readmes - We currently have a variety of README.md files scattered through the repository, and contributor/developer guides within the docs. This is a future maintenance headache, and I think we already have some inconsistent advice.
I propose that we consolidate all important information into the 'docs', within the contributor and developer guides, and delete it from the README files. Instead, these should contain links to the relevant docs pages.
@jwhhh If you agree, can you do the initial migration of information -- you are better-placed than me to determine which information is still current.
Builds on #107.
UPDATE: also include [FAQ](https://hackmd.io/q-biMWqRSBOV51I9g1BNcQ#Espresso) in this PR. | non_test | rationalise docs readmes we currently have a variety of readme md files scattered through the repository and contributor developer guides within the docs this is a future maintenance headache and i think we already have some inconsistent advice i propose that we consolidate all important information into the docs within the contributor and developer guides and delete it from the readme files instead these should contain links to the relevant docs pages jwhhh if you agree can you do the initial migration of information you are better placed than me to determine which information is still current builds on update also include in this pr | 0 |
2,025 | 2,581,430,053 | IssuesEvent | 2015-02-14 01:50:34 | wp-cli/wp-cli | https://api.github.com/repos/wp-cli/wp-cli | closed | Fix rate-limited requests to Github API | bug scope:testing | In #1535, we added `wp cli update` and the corresponding test coverage. However, Github's API is rate-limited to 60 requests/hour, so it's failing the build quite often:

Previously #1605 | 1.0 | Fix rate-limited requests to Github API - In #1535, we added `wp cli update` and the corresponding test coverage. However, Github's API is rate-limited to 60 requests/hour, so it's failing the build quite often:

Previously #1605 | test | fix rate limited requests to github api in we added wp cli update and the corresponding test coverage however github s api is rate limited to requests hour so it s failing the build quite often previously | 1 |
64,326 | 6,899,657,767 | IssuesEvent | 2017-11-24 14:42:54 | ValveSoftware/steam-for-linux | https://api.github.com/repos/ValveSoftware/steam-for-linux | closed | No List in Small Mode | Need Retest reviewed Steam client | After last update, I can't use small mode because there's no game shown on my list event I had clear the search bar

| 1.0 | No List in Small Mode - After last update, I can't use small mode because there's no game shown on my list event I had clear the search bar

| test | no list in small mode after last update i can t use small mode because there s no game shown on my list event i had clear the search bar | 1 |
251,876 | 21,527,049,186 | IssuesEvent | 2022-04-28 19:34:56 | damccorm/test-migration-target | https://api.github.com/repos/damccorm/test-migration-target | opened | flake: FlinkRunnerTest.testEnsureStdoutStdErrIsRestored | bug P1 test-failures | java.lang.AssertionError:
Expected: (a string containing "System.out: (none)" and a string containing "System.err: (none)")
but: a string containing "System.err: (none)" was "The program plan could not be fetched - the program aborted pre-maturely.
https://ci-beam.apache.org/job/beam_PreCommit_Java_Phrase/4515/
Imported from Jira [BEAM-13708](https://issues.apache.org/jira/browse/BEAM-13708). Original Jira may contain additional context.
Reported by: ibzib. Jira was originally assigned to robertwb. | 1.0 | flake: FlinkRunnerTest.testEnsureStdoutStdErrIsRestored - java.lang.AssertionError:
Expected: (a string containing "System.out: (none)" and a string containing "System.err: (none)")
but: a string containing "System.err: (none)" was "The program plan could not be fetched - the program aborted pre-maturely.
https://ci-beam.apache.org/job/beam_PreCommit_Java_Phrase/4515/
Imported from Jira [BEAM-13708](https://issues.apache.org/jira/browse/BEAM-13708). Original Jira may contain additional context.
Reported by: ibzib. Jira was originally assigned to robertwb. | test | flake flinkrunnertest testensurestdoutstderrisrestored java lang assertionerror expected a string containing system out none and a string containing system err none but a string containing system err none was the program plan could not be fetched the program aborted pre maturely imported from jira original jira may contain additional context reported by ibzib jira was originally assigned to robertwb | 1 |
581,996 | 17,350,082,486 | IssuesEvent | 2021-07-29 07:35:47 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | How to change related posts thumbnail size filter is not working. | [Priority: HIGH] bug | Tutorial URL: https://ampforwp.com/tutorials/article/how-to-change-related-posts-thumbnail-size/
Ticket URL: https://secure.helpscout.net/conversation/1578917856/207264?folderId=1060556
The filter to change the related posts thumbnail size is not working.
Ticket URL: https://secure.helpscout.net/conversation/1578917856/207264?folderId=1060556
The filter to change the related posts thumbnail size is not working.
747,995 | 26,103,312,566 | IssuesEvent | 2022-12-27 09:59:41 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | chaturbate.com - design is broken | nsfw priority-important browser-focus-geckoview engine-gecko | <!-- @browser: Firefox Mobile 108.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:108.0) Gecko/108.0 Firefox/108.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/115968 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://chaturbate.com
**Browser / Version**: Firefox Mobile 108.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Images not loaded
**Steps to Reproduce**:
No images or video. Cleared cache. Restarted phone. Checked on two other browsers... same issue. Not working on mobile.
<details>
<summary>View the screenshot</summary>
Screenshot removed - possible explicit content.
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20221208122842</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/12/aa3e0e6c-bc13-4fbb-ba92-e3d363ee5704)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | chaturbate.com - design is broken - <!-- @browser: Firefox Mobile 108.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:108.0) Gecko/108.0 Firefox/108.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/115968 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://chaturbate.com
**Browser / Version**: Firefox Mobile 108.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Images not loaded
**Steps to Reproduce**:
No images or video. Cleared cache. Restart phone. Checked on two other browsers....same issue. Not working on mobile.
<details>
<summary>View the screenshot</summary>
Screenshot removed - possible explicit content.
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20221208122842</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/12/aa3e0e6c-bc13-4fbb-ba92-e3d363ee5704)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_test | chaturbate com design is broken url browser version firefox mobile operating system android tested another browser yes chrome problem type design is broken description images not loaded steps to reproduce no images or video cleared cache restart phone checked on two other browsers same issue not working on mobile view the screenshot screenshot removed possible explicit content browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel release hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 0 |
29,416 | 4,171,573,608 | IssuesEvent | 2016-06-21 00:19:15 | open-forcefield-group/smarty | https://api.github.com/repos/open-forcefield-group/smarty | opened | Allow binary rather than unary decorators for smarts sampling | design choice | @jchodera , this one will require your attention.
We believe that to allow better exploration we need to allow binary rather than unary construction of SMARTS to sample. I haven't dug too much into the details of the code on this aspect, but what @cbayly13 is telling us on this end is that he needs to be able to combine "pick a bond order" with "pick an element type or functional group" more easily - i.e. rather than taking a decorator that does both at once, being able to combine two decorators. In other words, instead of a unary decorator, he wants binary decorators (take this action such as a bond, apply it to that chemical group).
So I think he wants to be able to revise this:
```
$(*~[#1]) hydrogen-adjacent
$(*~[#6]) carbon-adjacent
$(*~[#7]) nitrogen-adjacent
$(*~[#8]) oxygen-adjacent
$(*~[#9]) fluorine-adjacent
$(*~[#15]) phosphorous-adjacent
$(*~[#16]) sulfur-adjacent
$(*~[#17]) chlorine-adjacent
$(*~[#35]) bromine-adjacent
$(*~[#53]) iodine-adjacent
```
by replacing it with "pick a bond type" as one operation and "pick a functional group adjacent to it" as a second.
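A rough Python sketch of that binary scheme follows; the element codes come from the unary table above, but the function and dictionary names are invented for illustration and are not smarty's actual API:

```python
# Hypothetical binary decorators: choose a bond type and an adjacent element
# independently, then combine the two choices into one SMARTS environment.
BOND_DECORATORS = {"any": "~", "single": "-", "double": "=", "triple": "#"}
ATOM_DECORATORS = {"hydrogen": "#1", "carbon": "#6", "nitrogen": "#7", "oxygen": "#8"}

def adjacent_smarts(bond_name: str, atom_name: str) -> str:
    """Combine two independent choices into a '$(*<bond>[<atom>])' pattern."""
    return "$(*{}[{}])".format(BOND_DECORATORS[bond_name], ATOM_DECORATORS[atom_name])

# The unary table above is recovered as the special case bond_name == "any":
print(adjacent_smarts("any", "carbon"))     # $(*~[#6]), i.e. carbon-adjacent
print(adjacent_smarts("double", "oxygen"))  # $(*=[#8])
```

Sampling could then draw the two choices separately, which would also make it natural to later restrict which bond orders are even attempted for a given element or functional group.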
@cbayly13 , have I summarized properly?
Ultimately, he will also want to optimize how these selections are made under the hood, i.e. trying a single bond basically works with everything, but double or triple bonds only make sense for particular elements/functional groups, etc. This may be an API issue. | 1.0 | Allow binary rather than unary decorators for smarts sampling - @jchodera , this one will require your attention.
We believe that to allow better exploration we need to allow binary rather than unary construction of SMARTS to sample. I haven't dug too much into the details of the code on this aspect, but what @cbayly13 is telling us on this end is that he needs to be able to combine "pick a bond order" with "pick an element type or functional group" more easily - i.e. rather than taking a decorator that does both at once, being able to combine two decorators. In other words, instead of a unary decorator, he wants binary decorators (take this action such as a bond, apply it to that chemical group).
So I think he wants to be able to revise this:
```
$(*~[#1]) hydrogen-adjacent
$(*~[#6]) carbon-adjacent
$(*~[#7]) nitrogen-adjacent
$(*~[#8]) oxygen-adjacent
$(*~[#9]) fluorine-adjacent
$(*~[#15]) phosphorous-adjacent
$(*~[#16]) sulfur-adjacent
$(*~[#17]) chlorine-adjacent
$(*~[#35]) bromine-adjacent
$(*~[#53]) iodine-adjacent
```
by replacing it with "pick a bond type" as one operation and "pick a functional group adjacent to it" as a second.
@cbayly13 , have I summarized properly?
Ultimately, he will also want to optimize how these selections are made under the hood, i.e. trying a single bond basically works with everything, but double or triple bonds only make sense for particular elements/functional groups, etc. This may be an API issue. | non_test | allow binary rather than unary decorators for smarts sampling jchodera this one will require your attention we believe that to allow better exploration we need to allow binary rather than unary construction of smarts to sample i haven t dug in too much to the details of the code on this aspect but what is telling us on this end is that he needs to be able to combine pick a bond order with pick an element type or functional group more easily i e rather than taking a decorator that does both at once being able to combine two decorators in other words instead of a unary decorator he wants binary decorators take this action such as a bond apply it to that chemical group so i think he wants to be able to revise this hydrogen adjacent carbon adjacent nitrogen adjacent oxygen adjacent fluorine adjacent phosphorous adjacent sulfur adjacent chlorine adjacent bromine adjacent iodine adjacent by replacing it with pick a bond type as one operation and pick a functional group adjacent to it as a second have i summarized properly ultimately he will also want to optimize how these selections are made under the hood i e trying a single bond basically works with everything but double or triple bonds only make sense for particular elements functional groups etc this may be an api issue | 0 |
1,771 | 6,688,399,688 | IssuesEvent | 2017-10-08 14:22:02 | t9md/atom-vim-mode-plus | https://api.github.com/repos/t9md/atom-vim-mode-plus | closed | What vmp will do when outer-vmp command add/modify selection | architecture-improvement documentation | Use this issue as a consolidation place for
- Collaboration issue for vmp's `visual-mode` and selection change/addition by outer-vmp command.
# Examples
- When the find-mini editor (`cmd-f`) is confirmed, it selects the next occurrence of the word, but vmp doesn't auto-enter `visual-mode`; it's odd to see a selection in `normal-mode`.
- When I use a package which opens an editor with some lines initially selected, vmp remains in `normal-mode`, so moving the cursor just clears the selection; I want to start in `visual-mode` in this case.
# Why this happens, and why this is difficult to fix completely.
- vmp has `visual-mode`, and every vmp command is **mode-aware**: when a command creates a selection, it automatically activates `visual-mode` (like `v`, `V`).
- But an outer-vmp command like `editor:select-line` just modifies the selection, that's it; it does not auto-enter `visual-mode`.
- `visual-mode` does special things:
- modify the cursor's visible position so that it seems natural to a vim user
- preserve the charwise position when shifting to linewise; that's why you can shift `V` to `v` while keeping the original cursor column.
- So what I want vmp to do is automatically activate `visual-mode` when an outer-vmp command modifies the selection.
- One approach: use the `atom.commands.onWillDispatch` and `atom.commands.onDidDispatch` hooks.
- Good: they are called less frequently than `editor.onDidChangeSelectionRange`.
- Bad: misses selection changes for some commands, e.g. a selection modified in a promise (it fires after `editor.onDidChangeSelectionRange`).
- Another approach: use `editor.onDidChangeSelectionRange`.
- Good: no missed selection changes.
- Bad: called very frequently; a lock is needed to avoid an infinite loop (modifying the selection in the callback also fires the `onDidChangeSelectionRange` event).
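The lock in that last point is essentially a re-entrancy guard. Here is a Python sketch of the idea; vmp itself is JavaScript/CoffeeScript, and all the names below are invented for illustration:

```python
# Guarded selection-change handler: entering visual-mode may itself adjust the
# selection, which would re-fire the change event and loop forever.
class ModeSwitcher:
    def __init__(self):
        self._handling = False  # re-entrancy lock

    def on_did_change_selection_range(self, editor):
        if self._handling:
            return  # event fired by our own mutation below; ignore it
        self._handling = True
        try:
            if editor.selection and editor.mode == "normal":
                editor.mode = "visual"  # may normalize/extend the selection
        finally:
            self._handling = False
```

Without the `_handling` flag, setting `editor.mode` (which adjusts the selection and re-fires the event) would recurse straight back into the handler.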
# Quick check TODO
- `cmd-e`(`find-and-replace:use-selection-as-find-pattern`), `cmd-g`(`find-and-replace:find-next`), works correctly? start `visual-mode`?
- `cmd-f`, input search word then `enter` works correctly? start `visual-mode`?
- with my [try](https://atom.io/packages/try) package, `try:paste` with selection start `visual-mode` in opened editor?
- `cmd-l`(`editor:select-line`) start `visual-linewise-mode`?
- `cmd-d`(`find-and-replace:select-next`) show cursor correctly, start `visual-mode`?
# Issue on this topic
#112
#490
#740
#744
#761
#794
#872
| 1.0 | What vmp will do when outer-vmp command add/modify selection - Use this issue as a consolidation place for
- Collaboration issue for vmp's `visual-mode` and selection change/addition by outer-vmp command.
# Examples
- When the find-mini editor (`cmd-f`) is confirmed, it selects the next occurrence of the word, but vmp doesn't auto-enter `visual-mode`; it's odd to see a selection in `normal-mode`.
- When I use a package which opens an editor with some lines initially selected, vmp remains in `normal-mode`, so moving the cursor just clears the selection; I want to start in `visual-mode` in this case.
# Why this happens, and why this is difficult to fix completely.
- vmp has `visual-mode`, and every vmp command is **mode-aware**: when a command creates a selection, it automatically activates `visual-mode` (like `v`, `V`).
- But an outer-vmp command like `editor:select-line` just modifies the selection, that's it; it does not auto-enter `visual-mode`.
- `visual-mode` does special things:
- modify the cursor's visible position so that it seems natural to a vim user
- preserve the charwise position when shifting to linewise; that's why you can shift `V` to `v` while keeping the original cursor column.
- So what I want vmp to do is automatically activate `visual-mode` when an outer-vmp command modifies the selection.
- One approach: use the `atom.commands.onWillDispatch` and `atom.commands.onDidDispatch` hooks.
- Good: they are called less frequently than `editor.onDidChangeSelectionRange`.
- Bad: misses selection changes for some commands, e.g. a selection modified in a promise (it fires after `editor.onDidChangeSelectionRange`).
- Another approach: use `editor.onDidChangeSelectionRange`.
- Good: no missed selection changes.
- Bad: called very frequently; a lock is needed to avoid an infinite loop (modifying the selection in the callback also fires the `onDidChangeSelectionRange` event).
# Quick check TODO
- `cmd-e`(`find-and-replace:use-selection-as-find-pattern`), `cmd-g`(`find-and-replace:find-next`), works correctly? start `visual-mode`?
- `cmd-f`, input search word then `enter` works correctly? start `visual-mode`?
- with my [try](https://atom.io/packages/try) package, `try:paste` with selection start `visual-mode` in opened editor?
- `cmd-l`(`editor:select-line`) start `visual-linewise-mode`?
- `cmd-d`(`find-and-replace:select-next`) show cursor correctly, start `visual-mode`?
# Issue on this topic
#112
#490
#740
#744
#761
#794
#872
| non_test | what vmp will do when outer vmp command add modify selection use this issue to consolidation place for collaboration issue for vmp s visual mode and selection change addition by outer vmp command examples when confirmed find mini editor cmd f it select next occurrence of word but vmp doesn t auto enter visual mode its odd when i see selection in normal mode when i use package which open editor with initially select some lines vmp remains normal mode so moving cursor just clear selection i want start with visual mode in this case why this happens and why this is difficult to fix completely vmp have visual mode all vmp command is mode aware when it create selection it automatically activate visual mode like v v but outer vmp command just modify selection editor select line that s it not auto enter visual mode visual mode doing special things modify cursor visible position so that it seems natural for vim user preserve charwise position when shifting to linewise that s why you can shift v to v with keeping original cursor column so what i want vmp do is automatically activate visual mode when outer vmp command modify selection once it is done by atom commands onwilldispatch and atom commands ondiddispatch hook good it s called less frequently than editor ondidchangeselectionrange bad miss catching selection change for some command e g modify selection in promise it s fired after editor ondidchangeselectionrange another approach use editor ondidchangeselectionrange good no miss catch for selection change bad called so frequently need lock to avoid infinite loop modifying selection in callback also fire ondidchangeselectionrange event quick check todo cmd e find and replace use selection as find pattern cmd g find and replace find next works correctly start visual mode cmd f input search word then enter works correctly start visual mode with my package try paste with selection start visual mode in opened editor cmd l editor select line start visual linewise 
mode cmd d find and replace select next show cursor correctly start visual mode issue on this topic | 0 |
331,753 | 29,057,865,795 | IssuesEvent | 2023-05-15 00:43:51 | TheRenegadeCoder/sample-programs | https://api.github.com/repos/TheRenegadeCoder/sample-programs | opened | Add Wren Testing | enhancement tests | To request a new language, please fill out the following:
Language name: Wren
Official Language Style Guide: https://wren.io/syntax.html
Official Language Website: https://wren.io/
Official Language Docker Image: https://hub.docker.com/r/esolang/wren
| 1.0 | Add Wren Testing - To request a new language, please fill out the following:
Language name: Wren
Official Language Style Guide: https://wren.io/syntax.html
Official Language Website: https://wren.io/
Official Language Docker Image: https://hub.docker.com/r/esolang/wren
| test | add wren testing to request a new language please fill out the following language name wren official language style guide official language website official language docker image | 1 |
320,349 | 27,432,478,923 | IssuesEvent | 2023-03-02 03:14:01 | brave/brave-ios | https://api.github.com/repos/brave/brave-ios | closed | Manual test run for `1.48.1` on `iPhone` or `iPad` running `iOS 14` | QA Pass - iPhone X QA/Yes release-notes/exclude ipad tests | ## Installer
- [x] Check that installer is close to the size of the last release
- [x] Check the Brave version in About and make sure it is EXACTLY as expected
## Data
- [x] Verify that data from the previous build appears in the updated build as expected (bookmarks, history, etc.)
- [x] Verify that cookies from the previous build are preserved after upgrade
- [x] Verify saved passwords are retained after upgrade
- [x] Verify stats are retained after upgrade
- [x] Verify sync chain created in the previous version is still retained on upgrade
- [x] Verify per-site settings are preserved after upgrade
## App linker
- [x] Long-press on a link in the Twitter app to get the share picker, choose Brave. Verify Brave doesn't crash after opening the link.
## Session storage
- [x] Verify that tabs restore when closed, including active tab
| 1.0 | Manual test run for `1.48.1` on `iPhone` or `iPad` running `iOS 14` - ## Installer
- [x] Check that installer is close to the size of the last release
- [x] Check the Brave version in About and make sure it is EXACTLY as expected
## Data
- [x] Verify that data from the previous build appears in the updated build as expected (bookmarks, history, etc.)
- [x] Verify that cookies from the previous build are preserved after upgrade
- [x] Verify saved passwords are retained after upgrade
- [x] Verify stats are retained after upgrade
- [x] Verify sync chain created in the previous version is still retained on upgrade
- [x] Verify per-site settings are preserved after upgrade
## App linker
- [x] Long-press on a link in the Twitter app to get the share picker, choose Brave. Verify Brave doesn't crash after opening the link.
## Session storage
- [x] Verify that tabs restore when closed, including active tab
| test | manual test run for on iphone or ipad running ios installer check that installer is close to the size of the last release check the brave version in about and make sure it is exactly as expected data verify that data from the previous build appears in the updated build as expected bookmarks history etc verify that cookies from the previous build are preserved after upgrade verify saved passwords are retained after upgrade verify stats are retained after upgrade verify sync chain created in the previous version is still retained on upgrade verify per site settings are preserved after upgrade app linker long press on a link in the twitter app to get the share picker choose brave verify brave doesn t crash after opening the link session storage verify that tabs restore when closed including active tab | 1 |
181,362 | 14,860,950,300 | IssuesEvent | 2021-01-18 21:38:32 | falcosecurity/falco-website | https://api.github.com/repos/falcosecurity/falco-website | closed | GKE Installation page | area/documentation kind/content lifecycle/rotten | While working on https://github.com/falcosecurity/falco/issues/650 - @caquino shared the Terraform they used to deploy Falco on GKE.
What we want to do is add a documentation page specific to GKE that specifies the installation methods for it, adding this Terraform config as a viable option.
<details>
<summary>Here is the terraform definition from the issue.</summary>
```
resource "kubernetes_service_account" "falco_sa" {
metadata {
name = "falco-account"
labels = {
app = "falco"
role = "security"
}
}
automount_service_account_token = true
}
resource "kubernetes_cluster_role" "falco_cr" {
metadata {
name = "falco-cluster-role"
labels = {
app = "falco"
role = "security"
}
}
rule {
api_groups = ["extensions", ""]
resources = ["nodes", "namespaces", "pods", "replicationcontrollers", "replicasets", "services", "daemonsets", "deployments", "events", "configmaps"]
verbs = ["get", "list", "watch"]
}
rule {
non_resource_urls = ["/healthz", "/healthz/*"]
verbs = ["get"]
}
}
resource "kubernetes_cluster_role_binding" "falco_crb" {
metadata {
name = "falco-cluster-role-bind"
labels = {
app = "falco"
role = "security"
}
}
subject {
kind = "ServiceAccount"
name = kubernetes_service_account.falco_sa.metadata.0.name
namespace = "default"
}
role_ref {
kind = "ClusterRole"
name = kubernetes_cluster_role.falco_cr.metadata.0.name
api_group = "rbac.authorization.k8s.io"
}
}
resource "kubernetes_config_map" "falco_cfgmap" {
metadata {
name = "falco-cfgmap"
labels = {
app = "falco"
role = "security"
}
}
data = {
"application_rules.yaml" = file("configs/falco/application_rules.yaml")
"falco_rules.local.yaml" = file("configs/falco/falco_rules.local.yaml")
"falco_rules.yaml" = file("configs/falco/falco_rules.yaml")
"k8s_audit_rules.yaml" = file("configs/falco/k8s_audit_rules.yaml")
"falco.yaml" = file("configs/falco/falco.yaml")
}
}
resource "kubernetes_daemonset" "falco_ds" {
metadata {
name = "falco-daemonset"
labels = {
app = "falco"
role = "security"
}
}
spec {
selector {
match_labels = {
app = "falco"
role = "security"
}
}
template {
metadata {
labels = {
app = "falco"
role = "security"
}
}
spec {
host_network = true
service_account_name = kubernetes_service_account.falco_sa.metadata.0.name
dns_policy = "ClusterFirstWithHostNet"
volume {
name = "docker-socket"
host_path {
path = "/var/run/docker.socket"
}
}
volume {
name = "containerd-socket"
host_path {
path = "/run/containerd/containerd.sock"
}
}
volume {
name = "dev-fs"
host_path {
path = "/dev"
}
}
volume {
name = "proc-fs"
host_path {
path = "/proc"
}
}
volume {
name = "boot-fs"
host_path {
path = "/boot"
}
}
volume {
name = "lib-modules"
host_path {
path = "/lib/modules"
}
}
volume {
name = "usr-fs"
host_path {
path = "/usr"
}
}
volume {
name = "etc-fs"
host_path {
path = "/etc"
}
}
volume {
name = "dshm"
empty_dir {
medium = "Memory"
}
}
volume {
name = "falco-config"
config_map {
name = kubernetes_config_map.falco_cfgmap.metadata.0.name
}
}
container {
name = "falco"
image = "falcosecurity/falco:latest"
args = [
"/usr/bin/falco",
"--cri", "/host/run/containerd/containerd.sock",
"-K", "/var/run/secrets/kubernetes.io/serviceaccount/token",
"-k", "https://$(KUBERNETES_SERVICE_HOST)",
"-pk",
]
security_context {
privileged = true
}
env {
name = "SYSDIG_BPF_PROBE"
value = ""
}
env {
name = "KBUILD_EXTRA_CPPFLAGS"
value = "-DCOS_73_WORKAROUND"
}
volume_mount {
name = "docker-socket"
mount_path = "/host/var/run/docker.sock"
}
volume_mount {
name = "containerd-socket"
mount_path = "/host/run/containerd/containerd.sock"
}
volume_mount {
name = "dev-fs"
mount_path = "/host/dev"
}
volume_mount {
name = "proc-fs"
mount_path = "/host/proc"
read_only = true
}
volume_mount {
name = "boot-fs"
mount_path = "/host/boot"
read_only = true
}
volume_mount {
name = "lib-modules"
mount_path = "/host/lib/modules"
read_only = true
}
volume_mount {
name = "usr-fs"
mount_path = "/host/usr"
read_only = true
}
volume_mount {
name = "etc-fs"
mount_path = "/host/etc"
read_only = true
}
volume_mount {
name = "dshm"
mount_path = "/dev/shm"
}
volume_mount {
name = "falco-config"
mount_path = "/etc/falco"
}
}
}
}
}
}
resource "kubernetes_service" "falco_svc" {
metadata {
name = kubernetes_daemonset.falco_ds.metadata.0.name
labels = {
app = "falco"
role = "security"
}
}
spec {
type = "ClusterIP"
port {
protocol = "TCP"
port = 8765
}
selector = {
app = "falco"
role = "security"
}
}
}
```
</details> | 1.0 | GKE Installation page - While working on https://github.com/falcosecurity/falco/issues/650 - @caquino shared the Terraform they used to deploy Falco on GKE.
What we want to do is add a documentation page specific to GKE that specifies the installation methods for it, adding this Terraform config as a viable option.
<details>
<summary>Here is the terraform definition from the issue.</summary>
```
resource "kubernetes_service_account" "falco_sa" {
metadata {
name = "falco-account"
labels = {
app = "falco"
role = "security"
}
}
automount_service_account_token = true
}
resource "kubernetes_cluster_role" "falco_cr" {
metadata {
name = "falco-cluster-role"
labels = {
app = "falco"
role = "security"
}
}
rule {
api_groups = ["extensions", ""]
resources = ["nodes", "namespaces", "pods", "replicationcontrollers", "replicasets", "services", "daemonsets", "deployments", "events", "configmaps"]
verbs = ["get", "list", "watch"]
}
rule {
non_resource_urls = ["/healthz", "/healthz/*"]
verbs = ["get"]
}
}
resource "kubernetes_cluster_role_binding" "falco_crb" {
metadata {
name = "falco-cluster-role-bind"
labels = {
app = "falco"
role = "security"
}
}
subject {
kind = "ServiceAccount"
name = kubernetes_service_account.falco_sa.metadata.0.name
namespace = "default"
}
role_ref {
kind = "ClusterRole"
name = kubernetes_cluster_role.falco_cr.metadata.0.name
api_group = "rbac.authorization.k8s.io"
}
}
resource "kubernetes_config_map" "falco_cfgmap" {
metadata {
name = "falco-cfgmap"
labels = {
app = "falco"
role = "security"
}
}
data = {
"application_rules.yaml" = file("configs/falco/application_rules.yaml")
"falco_rules.local.yaml" = file("configs/falco/falco_rules.local.yaml")
"falco_rules.yaml" = file("configs/falco/falco_rules.yaml")
"k8s_audit_rules.yaml" = file("configs/falco/k8s_audit_rules.yaml")
"falco.yaml" = file("configs/falco/falco.yaml")
}
}
resource "kubernetes_daemonset" "falco_ds" {
metadata {
name = "falco-daemonset"
labels = {
app = "falco"
role = "security"
}
}
spec {
selector {
match_labels = {
app = "falco"
role = "security"
}
}
template {
metadata {
labels = {
app = "falco"
role = "security"
}
}
spec {
host_network = true
service_account_name = kubernetes_service_account.falco_sa.metadata.0.name
dns_policy = "ClusterFirstWithHostNet"
volume {
name = "docker-socket"
host_path {
path = "/var/run/docker.socket"
}
}
volume {
name = "containerd-socket"
host_path {
path = "/run/containerd/containerd.sock"
}
}
volume {
name = "dev-fs"
host_path {
path = "/dev"
}
}
volume {
name = "proc-fs"
host_path {
path = "/proc"
}
}
volume {
name = "boot-fs"
host_path {
path = "/boot"
}
}
volume {
name = "lib-modules"
host_path {
path = "/lib/modules"
}
}
volume {
name = "usr-fs"
host_path {
path = "/usr"
}
}
volume {
name = "etc-fs"
host_path {
path = "/etc"
}
}
volume {
name = "dshm"
empty_dir {
medium = "Memory"
}
}
volume {
name = "falco-config"
config_map {
name = kubernetes_config_map.falco_cfgmap.metadata.0.name
}
}
container {
name = "falco"
image = "falcosecurity/falco:latest"
args = [
"/usr/bin/falco",
"--cri", "/host/run/containerd/containerd.sock",
"-K", "/var/run/secrets/kubernetes.io/serviceaccount/token",
"-k", "https://$(KUBERNETES_SERVICE_HOST)",
"-pk",
]
security_context {
privileged = true
}
env {
name = "SYSDIG_BPF_PROBE"
value = ""
}
env {
name = "KBUILD_EXTRA_CPPFLAGS"
value = "-DCOS_73_WORKAROUND"
}
volume_mount {
name = "docker-socket"
mount_path = "/host/var/run/docker.sock"
}
volume_mount {
name = "containerd-socket"
mount_path = "/host/run/containerd/containerd.sock"
}
volume_mount {
name = "dev-fs"
mount_path = "/host/dev"
}
volume_mount {
name = "proc-fs"
mount_path = "/host/proc"
read_only = true
}
volume_mount {
name = "boot-fs"
mount_path = "/host/boot"
read_only = true
}
volume_mount {
name = "lib-modules"
mount_path = "/host/lib/modules"
read_only = true
}
volume_mount {
name = "usr-fs"
mount_path = "/host/usr"
read_only = true
}
volume_mount {
name = "etc-fs"
mount_path = "/host/etc"
read_only = true
}
volume_mount {
name = "dshm"
mount_path = "/dev/shm"
}
volume_mount {
name = "falco-config"
mount_path = "/etc/falco"
}
}
}
}
}
}
resource "kubernetes_service" "falco_svc" {
metadata {
name = kubernetes_daemonset.falco_ds.metadata.0.name
labels = {
app = "falco"
role = "security"
}
}
spec {
type = "ClusterIP"
port {
protocol = "TCP"
port = 8765
}
selector = {
app = "falco"
role = "security"
}
}
}
```
</details> | non_test | gke installation page while working on caquino shared the terraform they used to deploy falco on gke what we want to do is to add a documentation page specific for gke and specify the installation methods for it adding this terraform config as a viable option here is the terraform definition from the issue resource kubernetes service account falco sa metadata name falco account labels app falco role security automount service account token true resource kubernetes cluster role falco cr metadata name falco cluster role labels app falco role security rule api groups resources verbs rule non resource urls verbs resource kubernetes cluster role binding falco crb metadata name falco cluster role bind labels app falco role security subject kind serviceaccount name kubernetes service account falco sa metadata name namespace default role ref kind clusterrole name kubernetes cluster role falco cr metadata name api group rbac authorization io resource kubernetes config map falco cfgmap metadata name falco cfgmap labels app falco role security data application rules yaml file configs falco application rules yaml falco rules local yaml file configs falco falco rules local yaml falco rules yaml file configs falco falco rules yaml audit rules yaml file configs falco audit rules yaml falco yaml file configs falco falco yaml resource kubernetes daemonset falco ds metadata name falco daemonset labels app falco role security spec selector match labels app falco role security template metadata labels app falco role security spec host network true service account name kubernetes service account falco sa metadata name dns policy clusterfirstwithhostnet volume name docker socket host path path var run docker socket volume name containerd socket host path path run containerd containerd sock volume name dev fs host path path dev volume name proc fs host path path proc volume name boot fs host path path boot volume name lib modules host path path lib modules volume 
name usr fs host path path usr volume name etc fs host path path etc volume name dshm empty dir medium memory volume name falco config config map name kubernetes config map falco cfgmap metadata name container name falco image falcosecurity falco latest args usr bin falco cri host run containerd containerd sock k var run secrets kubernetes io serviceaccount token k pk security context privileged true env name sysdig bpf probe value env name kbuild extra cppflags value dcos workaround volume mount name docker socket mount path host var run docker sock volume mount name containerd socket mount path host run containerd containerd sock volume mount name dev fs mount path host dev volume mount name proc fs mount path host proc read only true volume mount name boot fs mount path host boot read only true volume mount name lib modules mount path host lib modules read only true volume mount name usr fs mount path host usr read only true volume mount name etc fs mount path host etc read only true volume mount name dshm mount path dev shm volume mount name falco config mount path etc falco resource kubernetes service falco svc metadata name kubernetes daemonset falco ds metadata name labels app falco role security spec type clusterip port protocol tcp port selector app falco role security | 0 |
156,156 | 12,299,030,255 | IssuesEvent | 2020-05-11 11:37:35 | dotnet/winforms | https://api.github.com/repos/dotnet/winforms | opened | Flaky test: `ProfessionalColorTable_ChangeUserPreferences_GetColor_ReturnsExpected` deadlock | test-bug |
**Problem description:**
After #3226 was merged, the `ProfessionalColorTable_ChangeUserPreferences_GetColor_ReturnsExpected` tests started deadlocking in x86 mode, which caused CI builds to fail again.
I managed to reproduce the deadlock locally, though it took a few attempts to do so.
It looks like the test deadlocks on itself; there is no other user code executed:


**Expected behavior:**
The tests work as expected.
**Minimal repro:**
The repro is a bit convoluted. Unfortunately, the `build.cmd -test -platform x86` command doesn't appear to work unless `Winforms.sln` is configured for the x86 platform (which breaks other modes). | 1.0 | Flaky test: `ProfessionalColorTable_ChangeUserPreferences_GetColor_ReturnsExpected` deadlock -
**Problem description:**
After #3226 was merged, the `ProfessionalColorTable_ChangeUserPreferences_GetColor_ReturnsExpected` tests started deadlocking in x86 mode, which caused CI builds to fail again.
I managed to reproduce the deadlock locally, though it took a few attempts to do so.
It looks like the test deadlocks on itself; there is no other user code executed:


**Expected behavior:**
The tests work as expected.
**Minimal repro:**
A repro is bit convoluted. Unfortunately `build.cmd -test -platform x86` command doesn't appear to work unless `Winforms.sln` is configured for x86 platform (which breaks other modes). | test | flaky test professionalcolortable changeuserpreferences getcolor returnsexpected deadlock problem description after was merged professionalcolortable changeuserpreferences getcolor returnsexpected tests started deadlocking in mode that caused ci builds to fail again i managed to reproduce the deadlock locally though it took few attempts to do so looks like the test deadlocks on itself there are no other user code executed expected behavior the tests work as expected minimal repro a repro is bit convoluted unfortunately build cmd test platform command doesn t appear to work unless winforms sln is configured for platform which breaks other modes | 1 |
90,495 | 11,405,996,946 | IssuesEvent | 2020-01-31 13:25:31 | pyladiesdf/pyladiesdf_organizacao | https://api.github.com/repos/pyladiesdf/pyladiesdf_organizacao | closed | Agenda de Setembro | design media | - [ ] Create the month's agenda in Canva
- [ ] Divulgar nos destaques do instagram | 1.0 | Agenda de Setembro - - [ ] Criar agenda do mês no Canva
- [ ] Divulgar nos destaques do instagram | non_test | agenda de setembro criar agenda do mês no canva divulgar nos destaques do instagram | 0 |
67,189 | 8,099,681,546 | IssuesEvent | 2018-08-11 12:07:25 | bologer/anycomment.io | https://api.github.com/repos/bologer/anycomment.io | closed | Make name & date inline to be more compact | design low priority | Think on how to this. Could be having name & date as inline ~ | 1.0 | Make name & date inline to be more compact - Think on how to this. Could be having name & date as inline ~ | non_test | make name date inline to be more compact think on how to this could be having name date as inline | 0 |
143,730 | 11,576,517,376 | IssuesEvent | 2020-02-21 12:08:10 | navikt/tiltaksgjennomforing-varsel | https://api.github.com/repos/navikt/tiltaksgjennomforing-varsel | closed | Bygg av ny-branch-test | deploy ny-branch-test | Kommenter med
>/deploy ny-branch-test
for å deploye til dev-fss.
Commit: | 1.0 | Bygg av ny-branch-test - Kommenter med
>/deploy ny-branch-test
for å deploye til dev-fss.
Commit: | test | bygg av ny branch test kommenter med deploy ny branch test for å deploye til dev fss commit | 1 |
181,871 | 14,891,483,604 | IssuesEvent | 2021-01-21 00:53:42 | GameBridgeAI/ts_serialize | https://api.github.com/repos/GameBridgeAI/ts_serialize | closed | [DOCS] - Add documentation for polymorphic class types on a parent class property | documentation | Please add documentation for polymorphic class types on a parent class property. The nesting nature makes it a bit unwieldy, and we should give an example.
Example:
```ts
class MyClass extends Class {
  @SerializeProperty({
    fromJSONStrategy: (json) => polymorphicClassFromJSON<Polymorphic>(Polymorphic, json),
  })
  someClass: Polymorphic;
}
abstract class someClass extends Polymorphic {
}
``` | 1.0 | [DOCS] - Add documentation for polymorphic class types on a parent class property - Please add documentation for polymorphic class types on a parent class property. The nesting nature makes it a bit unwieldy, and we should give an example.
Example:
```ts
class MyClass extends Class {
  @SerializeProperty({
    fromJSONStrategy: (json) => polymorphicClassFromJSON<Polymorphic>(Polymorphic, json),
  })
  someClass: Polymorphic;
}
abstract class someClass extends Polymorphic {
}
``` | non_test | add documentation for polymorphic class types on a parent class property please add documentation for polymorphic class types on a parent class property the nesting nature makes it a bit unwieldly and we should give an example example ts class myclass class serializeproperty fromjsonstrategy json polymorphicclassfromjson polymorphic json someclass polymorphic abstract class someclass polymorphic | 0 |
641,929 | 20,862,321,480 | IssuesEvent | 2022-03-22 00:55:02 | harvester/harvester | https://api.github.com/repos/harvester/harvester | opened | [BUG] harvester load balancer, IPAM, defaults to `DCHP` and overrides user's selection | bug priority/1 area/dashboard-related | Tracking bug [rancher/dashboard#5438](https://github.com/rancher/dashboard/issues/5438) | 1.0 | [BUG] harvester load balancer, IPAM, defaults to `DCHP` and overrides user's selection - Tracking bug [rancher/dashboard#5438](https://github.com/rancher/dashboard/issues/5438) | non_test | harvester load balancer ipam defaults to dchp and overrides user s selection tracking bug | 0 |
531 | 2,502,322,714 | IssuesEvent | 2015-01-09 07:18:12 | fossology/fossology | https://api.github.com/repos/fossology/fossology | opened | run Stress Testing weekly with latest code | Category: Testing Component: Rank Component: Tester Priority: Normal Status: New Tracker: Bug | ---
Author Name: **larry shi**
Original Redmine Issue: 6987, http://www.fossology.org/issues/6987
Original Date: 2014/05/07
Original Assignee: Dong Ma
---
manually or automatically.
| 2.0 | run Stress Testing weekly with latest code - ---
Author Name: **larry shi**
Original Redmine Issue: 6987, http://www.fossology.org/issues/6987
Original Date: 2014/05/07
Original Assignee: Dong Ma
---
manually or automatically.
| test | run stress testing weekly with latest code author name larry shi original redmine issue original date original assignee dong ma manually or automatically | 1 |
337,761 | 30,261,398,599 | IssuesEvent | 2023-07-07 08:30:01 | adrianlubitz/VVAD | https://api.github.com/repos/adrianlubitz/VVAD | opened | Establish unit tests for python functions | test | So far, no unit tests exist to verify the functionality of the used functions even after code changes. The goal of this issue is to establish unit tests in multiple steps:
- Commonly used functions
- Functions that must run on clusters/servers
- rarely used functions (or only used in pipelines) | 1.0 | Establish unit tests for python functions - So far, no unit tests exist to verify the functionality of the used functions even after code changes. The goal of this issue is to establish unit tests in multiple steps:
- Commonly used functions
- Functions that must run on clusters/servers
- rarely used functions (or only used in pipelines) | test | establish unit tests for python functions so far there are no unit tests existent to verify the functionality of the used functions even after code changes goal of this issue is to establish unit tests in multiple steps commonly used functions functions that must run on clusters servers rarely used functions or only used in pipelines | 1 |
254,318 | 8,072,780,669 | IssuesEvent | 2018-08-06 17:03:21 | marbl/MetagenomeScope | https://api.github.com/repos/marbl/MetagenomeScope | closed | Support somehow viewing node metadata during finishing | highpriorityfeature | This lets the user select nodes and inspect them during the finishing process. | 1.0 | Support somehow viewing node metadata during finishing - This lets the user select nodes and inspect them during the finishing process. | non_test | support somehow viewing node metadata during finishing this lets the user select nodes and inspect them during the finishing process | 0 |
158,661 | 12,422,153,478 | IssuesEvent | 2020-05-23 20:31:22 | drafthub/drafthub | https://api.github.com/repos/drafthub/drafthub | opened | not covered code in `core.signals` | help wanted tests | Here is the coverage report from `check.py`
```
$ docker-compose exec web python check.py coverage
Name Stmts Miss Cover Missing
-----------------------------------------------------------------------------
...
drafthub/core/signals.py 10 6 40% 8-14
...
```
And the [report from codecov](https://codecov.io/gh/drafthub/drafthub/src/master/drafthub/core/signals.py)
New tests for this issue must be written in `drafthub/core/tests/test_signals.py` | 1.0 | not covered code in `core.signals` - Here is the coverage report from `check.py`
```
$ docker-compose exec web python check.py coverage
Name Stmts Miss Cover Missing
-----------------------------------------------------------------------------
...
drafthub/core/signals.py 10 6 40% 8-14
...
```
And the [report from codecov](https://codecov.io/gh/drafthub/drafthub/src/master/drafthub/core/signals.py)
New tests for this issue must be written in `drafthub/core/tests/test_signals.py` | test | not covered code in core signals here is the coverage report from check py docker compose exec web python check py coverage name stmts miss cover missing drafthub core signals py and the new tests for this issue must be written in drafthub core tests test signals py | 1 |
3,194 | 2,743,549,033 | IssuesEvent | 2015-04-21 22:24:56 | elastic/curator | https://api.github.com/repos/elastic/curator | closed | Document that forceMerge takes a lot of space | Documentation | Apparently, up to 3x the size of an index. See https://issues.apache.org/jira/browse/LUCENE-6386 | 1.0 | Document that forceMerge takes a lot of space - Apparently, up to 3x the size of an index. See https://issues.apache.org/jira/browse/LUCENE-6386 | non_test | document that forcemerge takes a lot of space apparently up to the size of an index see | 0 |
122,062 | 10,211,588,273 | IssuesEvent | 2019-08-14 17:18:41 | input-output-hk/plutus | https://api.github.com/repos/input-output-hk/plutus | closed | support multiple modes of execution in plc-agda | Metatheory Test | There are currently two possible execution paths in plc-agda: extrinsic reduction via progress and extrinsic CK machine execution.
Add a command line flag to support both:
$ plc-agda --help
plc-agda - a Plutus Core implementation written in Agda
Usage: plc-agda --file FILENAME [--ck]
run a Plutus Core program
Available options:
--file FILENAME Plutus Core source file
--ck Whether to execute using the CK machine
-h,--help Show this help text | 1.0 | support multiple modes of execution in plc-agda - There are currently two possible execution paths in plc-agda: extrinsic reduction via progress and extrinsic CK machine execution.
Add a command line flag to support both:
$ plc-agda --help
plc-agda - a Plutus Core implementation written in Agda
Usage: plc-agda --file FILENAME [--ck]
run a Plutus Core program
Available options:
--file FILENAME Plutus Core source file
--ck Whether to execute using the CK machine
-h,--help Show this help text | test | support multiple modes of execution in plc agda there are currently two possible execution paths in plc agda extrinsic reduction via progress and extrinsic ck machine execution add a command line flag to support both plc agda help plc agda a plutus core implementation written in agda usage plc agda file filename run a plutus core program available options file filename plutus core source file ck whether to execute using the ck machine h help show this help text | 1 |
292,653 | 25,228,032,350 | IssuesEvent | 2022-11-14 17:23:43 | elastic/kibana | https://api.github.com/repos/elastic/kibana | opened | Failing test: Jest Integration Tests.src/core/server/integration_tests/saved_objects/migrations/actions - migration actions waitForIndexStatus resolves left with "index_not_green_timeout" after waiting for an index status to be green timeout | failed-test | A test failed on a tracked branch
```
Error: thrown: "Exceeded timeout of 280000 ms for a hook.
Use jest.setTimeout(newTimeout) to increase the timeout value, if this is a long-running test."
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/src/core/server/integration_tests/saved_objects/migrations/actions/actions.test.ts:62:3
at _dispatchDescribe (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/index.js:98:26)
at describe (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/index.js:60:5)
at Object.<anonymous> (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/src/core/server/integration_tests/saved_objects/migrations/actions/actions.test.ts:59:1)
at Runtime._execModule (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runtime/build/index.js:1646:24)
at Runtime._loadModule (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runtime/build/index.js:1185:12)
at Runtime.requireModule (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runtime/build/index.js:1009:12)
at jestAdapter (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:79:13)
at runTestInternal (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:389:16)
at runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:475:34)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/23497#01847704-a6ed-451b-a0ae-f94694f8b3b3)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Integration Tests.src/core/server/integration_tests/saved_objects/migrations/actions","test.name":"migration actions waitForIndexStatus resolves left with \"index_not_green_timeout\" after waiting for an index status to be green timeout","test.failCount":1}} --> | 1.0 | Failing test: Jest Integration Tests.src/core/server/integration_tests/saved_objects/migrations/actions - migration actions waitForIndexStatus resolves left with "index_not_green_timeout" after waiting for an index status to be green timeout - A test failed on a tracked branch
```
Error: thrown: "Exceeded timeout of 280000 ms for a hook.
Use jest.setTimeout(newTimeout) to increase the timeout value, if this is a long-running test."
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/src/core/server/integration_tests/saved_objects/migrations/actions/actions.test.ts:62:3
at _dispatchDescribe (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/index.js:98:26)
at describe (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/index.js:60:5)
at Object.<anonymous> (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/src/core/server/integration_tests/saved_objects/migrations/actions/actions.test.ts:59:1)
at Runtime._execModule (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runtime/build/index.js:1646:24)
at Runtime._loadModule (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runtime/build/index.js:1185:12)
at Runtime.requireModule (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runtime/build/index.js:1009:12)
at jestAdapter (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:79:13)
at runTestInternal (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:389:16)
at runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:475:34)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/23497#01847704-a6ed-451b-a0ae-f94694f8b3b3)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Integration Tests.src/core/server/integration_tests/saved_objects/migrations/actions","test.name":"migration actions waitForIndexStatus resolves left with \"index_not_green_timeout\" after waiting for an index status to be green timeout","test.failCount":1}} --> | test | failing test jest integration tests src core server integration tests saved objects migrations actions migration actions waitforindexstatus resolves left with index not green timeout after waiting for an index status to be green timeout a test failed on a tracked branch error thrown exceeded timeout of ms for a hook use jest settimeout newtimeout to increase the timeout value if this is a long running test at var lib buildkite agent builds kb spot elastic kibana on merge kibana src core server integration tests saved objects migrations actions actions test ts at dispatchdescribe var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build index js at describe var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build index js at object var lib buildkite agent builds kb spot elastic kibana on merge kibana src core server integration tests saved objects migrations actions actions test ts at runtime execmodule var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runtime build index js at runtime loadmodule var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runtime build index js at runtime requiremodule var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runtime build index js at jestadapter var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build legacy code todo rewrite jestadapter js at runtestinternal var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runner build runtest js at 
runtest var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runner build runtest js first failure | 1 |
691,927 | 23,716,850,770 | IssuesEvent | 2022-08-30 12:30:43 | newrelic/helm-charts | https://api.github.com/repos/newrelic/helm-charts | reopened | [nri-bundle] The dependency to `common-library` has to be aligned through all the dependencies | kind/bug triage/accepted priority/short-term | To understand this issue you should have the context about the overriding system that we removed months ago: https://github.com/newrelic/helm-charts/issues/773
TL;DR: all the names of the helpers in Helm have to be unique and if not they are overridden in a deterministic but not controllable way.
Because of that, all the dependencies of the `nri-bundle` that use `common-library` should have the same version so the helpers are overridden with the same implementation of the function.
We should add a test in which all the dependencies of `nri-bundle` share the same version of `common-library` when a dependency of `nri-bundle` is changed. We should not test that it is the latest version (as it could have an issue that should be fixed). | 1.0 | [nri-bundle] The dependency to `common-library` has to be aligned through all the dependencies - To understand this issue you should have the context about the overriding system that we removed months ago: https://github.com/newrelic/helm-charts/issues/773
TL;DR: all the names of the helpers in Helm have to be unique and if not they are overridden in a deterministic but not controllable way.
Because of that, all the dependencies of the `nri-bundle` that use `common-library` should have the same version so the helpers are overridden with the same implementation of the function.
We should add a test in which all the dependencies of `nri-bundle` share the same version of `common-library` when a dependency of `nri-bundle` is changed. We should not test that is the latest version (as it could have an issue that should be fixed). | non_test | the dependency to common library has to be aligned through all the dependencies to understand this issue you should have the context about the overriding system that we removed months ago tl dr all the names of the helpers in helm have to be unique and if not they are overridden in a deterministic but not controllable way because of that all the dependencies of the nri bundle that use common library should have the same version so the helpers are overridden with the same implementation of the function we should add a test in which all the dependencies of nri bundle share the same version of common library when a dependency of nri bundle is changed we should not test that is the latest version as it could have an issue that should be fixed | 0 |
67,781 | 14,891,800,870 | IssuesEvent | 2021-01-21 01:24:55 | turkdevops/graphql-js | https://api.github.com/repos/turkdevops/graphql-js | closed | CVE-2020-7774 (High) detected in y18n-4.0.0.tgz | security vulnerability | ## CVE-2020-7774 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>y18n-4.0.0.tgz</b></p></summary>
<p>the bare-bones internationalization library used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz">https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz</a></p>
<p>Path to dependency file: graphql-js/package.json</p>
<p>Path to vulnerable library: graphql-js/node_modules/y18n/package.json</p>
<p>
Dependency Hierarchy:
- mocha-8.2.0.tgz (Root Library)
- yargs-13.3.2.tgz
- :x: **y18n-4.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/graphql-js/commit/972f0f818792ee924636343ae37cd5da2e83c6f7">972f0f818792ee924636343ae37cd5da2e83c6f7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package y18n before 3.2.2, 4.0.1 and 5.0.5. PoC by po6ix: const y18n = require('y18n')(); y18n.setLocale('__proto__'); y18n.updateLocale({polluted: true}); console.log(polluted); // true
<p>Publish Date: 2020-11-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7774>CVE-2020-7774</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7774">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7774</a></p>
<p>Release Date: 2020-11-17</p>
<p>Fix Resolution: 5.0.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-7774 (High) detected in y18n-4.0.0.tgz - ## CVE-2020-7774 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>y18n-4.0.0.tgz</b></p></summary>
<p>the bare-bones internationalization library used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz">https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz</a></p>
<p>Path to dependency file: graphql-js/package.json</p>
<p>Path to vulnerable library: graphql-js/node_modules/y18n/package.json</p>
<p>
Dependency Hierarchy:
- mocha-8.2.0.tgz (Root Library)
- yargs-13.3.2.tgz
- :x: **y18n-4.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/graphql-js/commit/972f0f818792ee924636343ae37cd5da2e83c6f7">972f0f818792ee924636343ae37cd5da2e83c6f7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package y18n before 3.2.2, 4.0.1 and 5.0.5. PoC by po6ix: const y18n = require('y18n')(); y18n.setLocale('__proto__'); y18n.updateLocale({polluted: true}); console.log(polluted); // true
<p>Publish Date: 2020-11-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7774>CVE-2020-7774</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7774">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7774</a></p>
<p>Release Date: 2020-11-17</p>
<p>Fix Resolution: 5.0.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in tgz cve high severity vulnerability vulnerable library tgz the bare bones internationalization library used by yargs library home page a href path to dependency file graphql js package json path to vulnerable library graphql js node modules package json dependency hierarchy mocha tgz root library yargs tgz x tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package before and poc by const require setlocale proto updatelocale polluted true console log polluted true publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
202,857 | 15,304,622,688 | IssuesEvent | 2021-02-24 17:07:02 | submariner-io/shipyard | https://api.github.com/repos/submariner-io/shipyard | closed | Run subctl benchmark as part of OSP-AWS tests | automation testing | In order to test the subctl benchmark feature, we will follow the documentation: https://github.com/submariner-io/submariner-website/pull/274/files
And execute it on our OSP-AWS (Jenkins downstream) environment.
Originally posted by @manosnoam in https://github.com/submariner-io/submariner-operator/issue_comments/697210490 | 1.0 | Run subctl benchmark as part of OSP-AWS tests - In order to test subctl benchmark feature, we will follow the documentation: https://github.com/submariner-io/submariner-website/pull/274/files
And execute it on our OSP-AWS (Jenkins downstream) environment.
Originally posted by @manosnoam in https://github.com/submariner-io/submariner-operator/issue_comments/697210490 | test | run subctl benchmark as part of osp aws tests in order to test subctl benchmark feature we will follow the documentation and execute it on our osp aws jenkins downstream environment originally posted by manosnoam in | 1 |
100,272 | 8,729,257,605 | IssuesEvent | 2018-12-10 19:43:54 | KhronosGroup/MoltenVK | https://api.github.com/repos/KhronosGroup/MoltenVK | closed | Fix the effect of rasterizerDiscardEnable on fragment shader compilation | bug fixed - please test & close | MoltenVK version: 1.0.24
Original GLSL:
#version 450
#extension GL_ARB_separate_shader_objects : enable

out gl_PerVertex {
    vec4 gl_Position;
};

layout (std140, binding = 0) uniform Transform {
    mat4 mvp;
} transform;

layout (location = 0) in vec2 pos;
layout (location = 1) in vec2 color;
layout (location = 0) out vec2 color_varying;

void main() {
    gl_Position = transform.mvp * vec4(pos, 0.0, 1.0);
    color_varying = color;
}

// fragment shader
#version 450
#extension GL_ARB_separate_shader_objects : enable

layout (location = 0) in vec2 color_varying;
layout (location = 0) out vec4 fragcolor;

void main() {
    fragcolor = vec4(color_varying.x, 0.2, color_varying.y, 1.0);
}
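As a quick sanity check of the arithmetic in these two shaders, the column-major `mvp * vec4(pos, 0.0, 1.0)` product and the fragment colour mapping can be written out in plain Python (my own sketch, not project code; an identity matrix stands in for the real `mvp` uniform):

```python
def mat4_mul_vec4(m, v):
    # Column-major, as in GLSL: result[row] = sum over columns of m[col][row] * v[col].
    return tuple(sum(m[col][row] * v[col] for col in range(4)) for row in range(4))

def fragcolor(color_varying):
    # fragcolor = vec4(color_varying.x, 0.2, color_varying.y, 1.0);
    return (color_varying[0], 0.2, color_varying[1], 1.0)

identity = [[float(r == c) for r in range(4)] for c in range(4)]
print(mat4_mul_vec4(identity, (0.5, -0.25, 0.0, 1.0)))  # (0.5, -0.25, 0.0, 1.0)
print(fragcolor((0.5, -0.25)))                          # (0.5, 0.2, -0.25, 1.0)
```

With the identity stand-in, the vertex position passes through unchanged, which makes the later Metal Y-axis inversion easy to spot in the converted output.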
SPIR-V (using glslangValidator from the macOS Vulkan SDK 1.1.82):
[mvk-info] Converting SPIR-V:
; SPIR-V
; Version: 1.0
; Generator: Khronos Glslang Reference Front End; 7
; Bound: 39
; Schema: 0
OpCapability Shader
%1 = OpExtInstImport "GLSL.std.450"
OpMemoryModel Logical GLSL450
OpEntryPoint Vertex %4 "main" %13 %25 %36 %37
OpSource GLSL 450
OpSourceExtension "GL_ARB_separate_shader_objects"
OpName %4 "main"
OpName %11 "gl_PerVertex"
OpMemberName %11 0 "gl_Position"
OpMemberName %11 1 "gl_PointSize"
OpMemberName %11 2 "gl_ClipDistance"
OpMemberName %11 3 "gl_CullDistance"
OpName %13 ""
OpName %17 "Transform"
OpMemberName %17 0 "mvp"
OpName %19 "transform"
OpName %25 "pos"
OpName %36 "color_varying"
OpName %37 "color"
OpMemberDecorate %11 0 BuiltIn Position
OpMemberDecorate %11 1 BuiltIn PointSize
OpMemberDecorate %11 2 BuiltIn ClipDistance
OpMemberDecorate %11 3 BuiltIn CullDistance
OpDecorate %11 Block
OpMemberDecorate %17 0 ColMajor
OpMemberDecorate %17 0 Offset 0
OpMemberDecorate %17 0 MatrixStride 16
OpDecorate %17 Block
OpDecorate %19 DescriptorSet 0
OpDecorate %19 Binding 0
OpDecorate %25 Location 0
OpDecorate %36 Location 0
OpDecorate %37 Location 1
%2 = OpTypeVoid
%3 = OpTypeFunction %2
%6 = OpTypeFloat 32
%7 = OpTypeVector %6 4
%8 = OpTypeInt 32 0
%9 = OpConstant %8 1
%10 = OpTypeArray %6 %9
%11 = OpTypeStruct %7 %6 %10 %10
%12 = OpTypePointer Output %11
%13 = OpVariable %12 Output
%14 = OpTypeInt 32 1
%15 = OpConstant %14 0
%16 = OpTypeMatrix %7 4
%17 = OpTypeStruct %16
%18 = OpTypePointer Uniform %17
%19 = OpVariable %18 Uniform
%20 = OpTypePointer Uniform %16
%23 = OpTypeVector %6 2
%24 = OpTypePointer Input %23
%25 = OpVariable %24 Input
%27 = OpConstant %6 0
%28 = OpConstant %6 1
%33 = OpTypePointer Output %7
%35 = OpTypePointer Output %23
%36 = OpVariable %35 Output
%37 = OpVariable %24 Input
%4 = OpFunction %2 None %3
%5 = OpLabel
%21 = OpAccessChain %20 %19 %15
%22 = OpLoad %16 %21
%26 = OpLoad %23 %25
%29 = OpCompositeExtract %6 %26 0
%30 = OpCompositeExtract %6 %26 1
%31 = OpCompositeConstruct %7 %29 %30 %27 %28
%32 = OpMatrixTimesVector %7 %22 %31
%34 = OpAccessChain %33 %13 %15
OpStore %34 %32
%38 = OpLoad %23 %37
OpStore %36 %38
OpReturn
OpFunctionEnd
End SPIR-V
Converted MSL:
#include <metal_stdlib>
#include <simd/simd.h>

using namespace metal;

struct Transform
{
    float4x4 mvp;
};

struct main0_out
{
    float2 color_varying [[user(locn0)]];
    float4 gl_Position [[position]];
};

struct main0_in
{
    float2 pos [[attribute(0)]];
    float2 color [[attribute(1)]];
};

// Reporter's note: is this supposed to be returning |out| here?
// The function is declared `vertex void`, yet it builds `out` and never returns it.
vertex void main0(main0_in in [[stage_in]], constant Transform& transform [[buffer(0)]])
{
    main0_out out = {};
    out.gl_Position = transform.mvp * float4(in.pos, 0.0, 1.0);
    out.color_varying = in.color;
    out.gl_Position.y = -(out.gl_Position.y);    // Invert Y-axis for Metal
}
End MSL
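The converted vertex function above is declared `vertex void` and never returns the `out` struct it builds, so nothing reaches the rasterizer. Emulating the intended behaviour in Python (my own sketch, not MoltenVK output; an identity `mvp` stands in for the real uniform):

```python
# Emulates the converted vertex shader, but actually returns `out` at the end --
# the `return` statement is what the generated MSL above is missing.
def main0(pos, color):
    out = {}
    # identity `mvp` stand-in for `transform.mvp * float4(in.pos, 0.0, 1.0)`
    out["gl_Position"] = [pos[0], pos[1], 0.0, 1.0]
    out["color_varying"] = list(color)
    out["gl_Position"][1] = -out["gl_Position"][1]  # Invert Y-axis for Metal
    return out

print(main0((0.5, 0.25), (0.1, 0.9)))
# -> {'gl_Position': [0.5, -0.25, 0.0, 1.0], 'color_varying': [0.1, 0.9]}
```

Only when `rasterizerDiscardEnable` is set (no fragment stage consumes the outputs) would a void vertex function be acceptable, which is presumably where the conversion logic goes wrong.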
Estimated original GLSL:
#version 450
layout(binding = 0, std140) uniform Transform
{
    mat4 mvp;
} transform;

layout(location = 0) in vec2 pos;
layout(location = 0) out vec2 color_varying;
layout(location = 1) in vec2 color;

void main()
{
    gl_Position = transform.mvp * vec4(pos, 0.0, 1.0);
    color_varying = color;
}
End GLSL
[mvk-debug] Performance to compile MSL source code into a MTLLibrary curr: 0.180 ms, avg: 0.206 ms, min: 0.180 ms, max: 0.231 ms, count: 2
[mvk-debug] Performance to retrieve shader library from the cache curr: 28.808 ms, avg: 28.808 ms, min: 28.808 ms, max: 28.808 ms, count: 1
[mvk-debug] Performance to retrieve a MTLFunction from a MTLLibrary curr: 0.005 ms, avg: 0.005 ms, min: 0.005 ms, max: 0.005 ms, count: 1
[mvk-debug] Performance to convert SPIR-V to MSL source code curr: 0.877 ms, avg: 2.636 ms, min: 0.877 ms, max: 4.394 ms, count: 2
[mvk-info] Converting SPIR-V:
; SPIR-V
; Version: 1.0
; Generator: Khronos Glslang Reference Front End; 7
; Bound: 24
; Schema: 0
OpCapability Shader
%1 = OpExtInstImport "GLSL.std.450"
OpMemoryModel Logical GLSL450
OpEntryPoint Fragment %4 "main" %9 %12
OpExecutionMode %4 OriginUpperLeft
OpSource GLSL 450
OpSourceExtension "GL_ARB_separate_shader_objects"
OpName %4 "main"
OpName %9 "fragcolor"
OpName %12 "color_varying"
OpDecorate %9 Location 0
OpDecorate %12 Location 0
%2 = OpTypeVoid
%3 = OpTypeFunction %2
%6 = OpTypeFloat 32
%7 = OpTypeVector %6 4
%8 = OpTypePointer Output %7
%9 = OpVariable %8 Output
%10 = OpTypeVector %6 2
%11 = OpTypePointer Input %10
%12 = OpVariable %11 Input
%13 = OpTypeInt 32 0
%14 = OpConstant %13 0
%15 = OpTypePointer Input %6
%18 = OpConstant %6 0.200000003
%19 = OpConstant %13 1
%22 = OpConstant %6 1
%4 = OpFunction %2 None %3
%5 = OpLabel
%16 = OpAccessChain %15 %12 %14
%17 = OpLoad %6 %16
%20 = OpAccessChain %15 %12 %19
%21 = OpLoad %6 %20
%23 = OpCompositeConstruct %7 %17 %18 %21 %22
OpStore %9 %23
OpReturn
OpFunctionEnd
End SPIR-V
Converted MSL:
#include <metal_stdlib>
#include <simd/simd.h>
using namespace metal;
struct main0_out
{
float4 fragcolor [[color(0)]];
};
struct main0_in
{
float2 color_varying [[user(locn0)]];
};
fragment main0_out main0(main0_in in [[stage_in]])
{
main0_out out = {};
out.fragcolor = float4(in.color_varying.x, 0.2, in.color_varying.y, 1.0);
return out;
}
End MSL
Estimated original GLSL:
#version 450
layout(location = 0) out vec4 fragcolor;
layout(location = 0) in vec2 color_varying;
void main()
{
fragcolor = vec4(color_varying.x, 0.2, color_varying.y, 1.0);
}
End GLSL
[mvk-debug] Performance to compile MSL source code into a MTLLibrary curr: 0.126 ms, avg: 0.179 ms, min: 0.126 ms, max: 0.231 ms, count: 3
[mvk-debug] Performance to retrieve shader library from the cache curr: 22.515 ms, avg: 25.662 ms, min: 22.515 ms, max: 28.808 ms, count: 2
[mvk-debug] Performance to retrieve a MTLFunction from a MTLLibrary curr: 0.001 ms, avg: 0.003 ms, min: 0.001 ms, max: 0.005 ms, count: 2
[***MoltenVK ERROR***] VK_ERROR_INITIALIZATION_FAILED: Render pipeline compile failed (error code 1):
Link failed: fragment input user(locn0) was not found in vertex shader outputs.
| 1.0 | Fix the effect of rasterizerDiscardEnable on fragment shader compilation - MoltenVK version: 1.0.24
Original GLSL:
#version 450
#extension GL_ARB_separate_shader_objects : enable
out gl_PerVertex {
vec4 gl_Position;
};
layout (std140, binding = 0) uniform Transform {
mat4 mvp;
} transform;
layout (location = 0) in vec2 pos;
layout (location = 1) in vec2 color;
layout (location = 0) out vec2 color_varying;
void main() {
gl_Position = transform.mvp * vec4(pos, 0.0, 1.0);
color_varying = color;
}
// fragment shader
#version 450
#extension GL_ARB_separate_shader_objects : enable
layout (location = 0) in vec2 color_varying;
layout (location = 0) out vec4 fragcolor;
void main() {
fragcolor = vec4(color_varying.x, 0.2, color_varying.y, 1.0);
}
SPIRV (using glslangValidator from macos vulkan sdk 1.1.82)
[mvk-info] Converting SPIR-V:
; SPIR-V
; Version: 1.0
; Generator: Khronos Glslang Reference Front End; 7
; Bound: 39
; Schema: 0
OpCapability Shader
%1 = OpExtInstImport "GLSL.std.450"
OpMemoryModel Logical GLSL450
OpEntryPoint Vertex %4 "main" %13 %25 %36 %37
OpSource GLSL 450
OpSourceExtension "GL_ARB_separate_shader_objects"
OpName %4 "main"
OpName %11 "gl_PerVertex"
OpMemberName %11 0 "gl_Position"
OpMemberName %11 1 "gl_PointSize"
OpMemberName %11 2 "gl_ClipDistance"
OpMemberName %11 3 "gl_CullDistance"
OpName %13 ""
OpName %17 "Transform"
OpMemberName %17 0 "mvp"
OpName %19 "transform"
OpName %25 "pos"
OpName %36 "color_varying"
OpName %37 "color"
OpMemberDecorate %11 0 BuiltIn Position
OpMemberDecorate %11 1 BuiltIn PointSize
OpMemberDecorate %11 2 BuiltIn ClipDistance
OpMemberDecorate %11 3 BuiltIn CullDistance
OpDecorate %11 Block
OpMemberDecorate %17 0 ColMajor
OpMemberDecorate %17 0 Offset 0
OpMemberDecorate %17 0 MatrixStride 16
OpDecorate %17 Block
OpDecorate %19 DescriptorSet 0
OpDecorate %19 Binding 0
OpDecorate %25 Location 0
OpDecorate %36 Location 0
OpDecorate %37 Location 1
%2 = OpTypeVoid
%3 = OpTypeFunction %2
%6 = OpTypeFloat 32
%7 = OpTypeVector %6 4
%8 = OpTypeInt 32 0
%9 = OpConstant %8 1
%10 = OpTypeArray %6 %9
%11 = OpTypeStruct %7 %6 %10 %10
%12 = OpTypePointer Output %11
%13 = OpVariable %12 Output
%14 = OpTypeInt 32 1
%15 = OpConstant %14 0
%16 = OpTypeMatrix %7 4
%17 = OpTypeStruct %16
%18 = OpTypePointer Uniform %17
%19 = OpVariable %18 Uniform
%20 = OpTypePointer Uniform %16
%23 = OpTypeVector %6 2
%24 = OpTypePointer Input %23
%25 = OpVariable %24 Input
%27 = OpConstant %6 0
%28 = OpConstant %6 1
%33 = OpTypePointer Output %7
%35 = OpTypePointer Output %23
%36 = OpVariable %35 Output
%37 = OpVariable %24 Input
%4 = OpFunction %2 None %3
%5 = OpLabel
%21 = OpAccessChain %20 %19 %15
%22 = OpLoad %16 %21
%26 = OpLoad %23 %25
%29 = OpCompositeExtract %6 %26 0
%30 = OpCompositeExtract %6 %26 1
%31 = OpCompositeConstruct %7 %29 %30 %27 %28
%32 = OpMatrixTimesVector %7 %22 %31
%34 = OpAccessChain %33 %13 %15
OpStore %34 %32
%38 = OpLoad %23 %37
OpStore %36 %38
OpReturn
OpFunctionEnd
End SPIR-V
Converted MSL:
#include <metal_stdlib>
#include <simd/simd.h>
using namespace metal;
struct Transform
{
float4x4 mvp;
};
struct main0_out
{
float2 color_varying [[user(locn0)]];
float4 gl_Position [[position]];
};
struct main0_in
{
float2 pos [[attribute(0)]];
float2 color [[attribute(1)]];
};
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
// is this supposed to be returning |out| here?/////////////////////////////////
vertex void main0(main0_in in [[stage_in]], constant Transform& transform [[buffer(0)]])
{
main0_out out = {};
out.gl_Position = transform.mvp * float4(in.pos, 0.0, 1.0);
out.color_varying = in.color;
out.gl_Position.y = -(out.gl_Position.y); // Invert Y-axis for Metal
}
End MSL
Estimated original GLSL:
#version 450
layout(binding = 0, std140) uniform Transform
{
mat4 mvp;
} transform;
layout(location = 0) in vec2 pos;
layout(location = 0) out vec2 color_varying;
layout(location = 1) in vec2 color;
void main()
{
gl_Position = transform.mvp * vec4(pos, 0.0, 1.0);
color_varying = color;
}
End GLSL
[mvk-debug] Performance to compile MSL source code into a MTLLibrary curr: 0.180 ms, avg: 0.206 ms, min: 0.180 ms, max: 0.231 ms, count: 2
[mvk-debug] Performance to retrieve shader library from the cache curr: 28.808 ms, avg: 28.808 ms, min: 28.808 ms, max: 28.808 ms, count: 1
[mvk-debug] Performance to retrieve a MTLFunction from a MTLLibrary curr: 0.005 ms, avg: 0.005 ms, min: 0.005 ms, max: 0.005 ms, count: 1
[mvk-debug] Performance to convert SPIR-V to MSL source code curr: 0.877 ms, avg: 2.636 ms, min: 0.877 ms, max: 4.394 ms, count: 2
[mvk-info] Converting SPIR-V:
; SPIR-V
; Version: 1.0
; Generator: Khronos Glslang Reference Front End; 7
; Bound: 24
; Schema: 0
OpCapability Shader
%1 = OpExtInstImport "GLSL.std.450"
OpMemoryModel Logical GLSL450
OpEntryPoint Fragment %4 "main" %9 %12
OpExecutionMode %4 OriginUpperLeft
OpSource GLSL 450
OpSourceExtension "GL_ARB_separate_shader_objects"
OpName %4 "main"
OpName %9 "fragcolor"
OpName %12 "color_varying"
OpDecorate %9 Location 0
OpDecorate %12 Location 0
%2 = OpTypeVoid
%3 = OpTypeFunction %2
%6 = OpTypeFloat 32
%7 = OpTypeVector %6 4
%8 = OpTypePointer Output %7
%9 = OpVariable %8 Output
%10 = OpTypeVector %6 2
%11 = OpTypePointer Input %10
%12 = OpVariable %11 Input
%13 = OpTypeInt 32 0
%14 = OpConstant %13 0
%15 = OpTypePointer Input %6
%18 = OpConstant %6 0.200000003
%19 = OpConstant %13 1
%22 = OpConstant %6 1
%4 = OpFunction %2 None %3
%5 = OpLabel
%16 = OpAccessChain %15 %12 %14
%17 = OpLoad %6 %16
%20 = OpAccessChain %15 %12 %19
%21 = OpLoad %6 %20
%23 = OpCompositeConstruct %7 %17 %18 %21 %22
OpStore %9 %23
OpReturn
OpFunctionEnd
End SPIR-V
Converted MSL:
#include <metal_stdlib>
#include <simd/simd.h>
using namespace metal;
struct main0_out
{
float4 fragcolor [[color(0)]];
};
struct main0_in
{
float2 color_varying [[user(locn0)]];
};
fragment main0_out main0(main0_in in [[stage_in]])
{
main0_out out = {};
out.fragcolor = float4(in.color_varying.x, 0.2, in.color_varying.y, 1.0);
return out;
}
End MSL
Estimated original GLSL:
#version 450
layout(location = 0) out vec4 fragcolor;
layout(location = 0) in vec2 color_varying;
void main()
{
fragcolor = vec4(color_varying.x, 0.2, color_varying.y, 1.0);
}
End GLSL
[mvk-debug] Performance to compile MSL source code into a MTLLibrary curr: 0.126 ms, avg: 0.179 ms, min: 0.126 ms, max: 0.231 ms, count: 3
[mvk-debug] Performance to retrieve shader library from the cache curr: 22.515 ms, avg: 25.662 ms, min: 22.515 ms, max: 28.808 ms, count: 2
[mvk-debug] Performance to retrieve a MTLFunction from a MTLLibrary curr: 0.001 ms, avg: 0.003 ms, min: 0.001 ms, max: 0.005 ms, count: 2
[***MoltenVK ERROR***] VK_ERROR_INITIALIZATION_FAILED: Render pipeline compile failed (error code 1):
Link failed: fragment input user(locn0) was not found in vertex shader outputs.
| test | fix the effect of rasterizerdiscardenable on fragment shader compilation moltenvk version original glsl version extension gl arb separate shader objects enable out gl pervertex gl position layout binding uniform transform mvp transform layout location in pos layout location in color layout location out color varying void main gl position transform mvp pos color varying color fragment shader version extension gl arb separate shader objects enable layout location in color varying layout location out fragcolor void main fragcolor color varying x color varying y spirv using glslangvalidator from macos vulkan sdk converting spir v spir v version generator khronos glslang reference front end bound schema opcapability shader opextinstimport glsl std opmemorymodel logical opentrypoint vertex main opsource glsl opsourceextension gl arb separate shader objects opname main opname gl pervertex opmembername gl position opmembername gl pointsize opmembername gl clipdistance opmembername gl culldistance opname opname transform opmembername mvp opname transform opname pos opname color varying opname color opmemberdecorate builtin position opmemberdecorate builtin pointsize opmemberdecorate builtin clipdistance opmemberdecorate builtin culldistance opdecorate block opmemberdecorate colmajor opmemberdecorate offset opmemberdecorate matrixstride opdecorate block opdecorate descriptorset opdecorate binding opdecorate location opdecorate location opdecorate location optypevoid optypefunction optypefloat optypevector optypeint opconstant optypearray optypestruct optypepointer output opvariable output optypeint opconstant optypematrix optypestruct optypepointer uniform opvariable uniform optypepointer uniform optypevector optypepointer input opvariable input opconstant opconstant optypepointer output optypepointer output opvariable output opvariable input opfunction none oplabel opaccesschain opload opload opcompositeextract opcompositeextract opcompositeconstruct 
opmatrixtimesvector opaccesschain opstore opload opstore opreturn opfunctionend end spir v converted msl include include using namespace metal struct transform mvp struct out color varying gl position struct in pos color is this supposed to be returning out here vertex void in in constant transform transform out out out gl position transform mvp in pos out color varying in color out gl position y out gl position y invert y axis for metal end msl estimated original glsl version layout binding uniform transform mvp transform layout location in pos layout location out color varying layout location in color void main gl position transform mvp pos color varying color end glsl performance to compile msl source code into a mtllibrary curr ms avg ms min ms max ms count performance to retrieve shader library from the cache curr ms avg ms min ms max ms count performance to retrieve a mtlfunction from a mtllibrary curr ms avg ms min ms max ms count performance to convert spir v to msl source code curr ms avg ms min ms max ms count converting spir v spir v version generator khronos glslang reference front end bound schema opcapability shader opextinstimport glsl std opmemorymodel logical opentrypoint fragment main opexecutionmode originupperleft opsource glsl opsourceextension gl arb separate shader objects opname main opname fragcolor opname color varying opdecorate location opdecorate location optypevoid optypefunction optypefloat optypevector optypepointer output opvariable output optypevector optypepointer input opvariable input optypeint opconstant optypepointer input opconstant opconstant opconstant opfunction none oplabel opaccesschain opload opaccesschain opload opcompositeconstruct opstore opreturn opfunctionend end spir v converted msl include include using namespace metal struct out fragcolor struct in color varying fragment out in in out out out fragcolor in color varying x in color varying y return out end msl estimated original glsl version layout location out 
fragcolor layout location in color varying void main fragcolor color varying x color varying y end glsl performance to compile msl source code into a mtllibrary curr ms avg ms min ms max ms count performance to retrieve shader library from the cache curr ms avg ms min ms max ms count performance to retrieve a mtlfunction from a mtllibrary curr ms avg ms min ms max ms count vk error initialization failed render pipeline compile failed error code link failed fragment input user was not found in vertex shader outputs | 1 |
243,835 | 18,727,183,290 | IssuesEvent | 2021-11-03 17:26:49 | AY2122S1-CS2103T-F13-4/tp | https://api.github.com/repos/AY2122S1-CS2103T-F13-4/tp | closed | [PE-D] Bug in the Add command | documentation valid flaw | Adding a student with the same name to one that already exists in the list, but different grade results in an error. This should be handled as it is a valid behaviour considering that many students across grades can have the same names.


<!--session: 1635494426019-c7d766a3-c920-4e33-828e-10a98cff1f9a-->
<!--Version: Web v3.4.1-->
-------------
Labels: `severity.Medium` `type.FunctionalityBug`
original: tsm1820/ped#5 | 1.0 | [PE-D] Bug in the Add command
original: tsm1820/ped#5 | non_test | bug in the add command adding a student with the same name to one that already exists in the list but different grade results in an error this should be handled as it is a valid behaviour considering that many students across grades can have the same names labels severity medium type functionalitybug original ped | 0 |
253,085 | 21,650,377,618 | IssuesEvent | 2022-05-06 08:44:16 | matrixorigin/matrixone | https://api.github.com/repos/matrixorigin/matrixone | closed | The query result is incorrect while runnning show tables from test01 where tables_in_test01 like '%t2%'; | kind/bug kind/enhancement priority/high severity/major auto-test | <!-- Please describe your issue in English. -->
#### Can be reproduced ?
#### Steps:
Create table t1 and t2, then run
mysql> show tables from test01 where tables_in_test01 like '%t2%';
+-------+
| Table |
+-------+
| t1 |
| t2 |
+-------+
2 rows in set (0.00 sec)
#### Expected behavior:
+-------+
| Table |
+-------+
| t2 |
+-------+
#### Actual behavior:
The query result is incorrect
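As a reference for the intended semantics: a `LIKE '%t2%'` predicate should keep only names containing `t2`. The sketch below is illustrative only (not MatrixOne code; the `sql_like` helper and the table list are invented for this example):

```python
import re

def sql_like(pattern: str, value: str) -> bool:
    # Translate a SQL LIKE pattern into a regex: '%' -> '.*', '_' -> '.',
    # everything else matches literally.
    regex = "".join(
        ".*" if ch == "%" else "." if ch == "_" else re.escape(ch)
        for ch in pattern
    )
    return re.fullmatch(regex, value) is not None

tables = ["t1", "t2"]
matches = [t for t in tables if sql_like("%t2%", t)]
print(matches)  # only 't2' survives the filter
```

With these semantics the result set contains only `t2`, which matches the expected behavior above.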
#### Environment:
- Version or commit-id (e.g. v0.1.0 or 8b23a93):
- Hardware parameters:
- OS type:
- Others:
#### Configuration file:
#### Additional context:
- Error message from client:
- Server log:
- Other information: | 1.0 | The query result is incorrect while runnning show tables from test01 where tables_in_test01 like '%t2%';
- Other information: | test | the query result is incorrect while runnning show tables from where tables in like can be reproduced steps create table and then run mysql show tables from where tables in like table rows in set sec expected behavior table actual behavior the query result is incorrect environment version or commit id e g or hardware parameters os type others configuration file additional context error message from client server log other information | 1 |
227,912 | 18,109,117,456 | IssuesEvent | 2021-09-22 23:40:37 | idaholab/moose | https://api.github.com/repos/idaholab/moose | closed | Migrate python to idaholab/moosetools | C: MOOSE Scripts C: TestHarness T: task P: normal C: HIT | ## Reason
<!--Why do you need this feature or what is the enhancement?-->
Interest exists outside of MOOSE for the tools in moose/python. This issue should be referenced for that effort.
## Design
<!--A concise description (design) of what you want to happen.--->
Establish an idaholab/moosetest repository with testing and include that repository as a submodule in MOOSE. Once set up, items will be migrated as desired.
## Impact
<!--Will the enhancement change existing public APIs, internal APIs, or add something new?-->
User impact should be minimal except that the submodule will need to be managed. Developers will need to make changes to idaholab/moosetools rather than idaholab/moose/python.
| 1.0 | Migrate python to idaholab/moosetools
| test | migrate python to idaholab moosetools reason interest exists outside of moose for the tools in moose python this issue should be referenced for that effort design establish an idaholab moosetest repository with testing and include that repository as a submodule in moose once setup items will be migrated as desired impact user impact should be minimal except that the submodule will need to be managed developers will need to make changes to idaholab moosetools to rather than idaholab moose python | 1 |
212,256 | 16,436,032,159 | IssuesEvent | 2021-05-20 09:21:15 | nens/lizard-management-client | https://api.github.com/repos/nens/lizard-management-client | closed | Wrong filtering in timeseries table | Test result bug | "&organisation__uuid={organisation_uuid}" should become "&location__organisation__uuid={organisation_uuid}". | 1.0 | Wrong filtering in timeseries table - "&organisation__uuid={organisation_uuid}" should become "&location__organisation__uuid={organisation_uuid}". | test | wrong filtering in timeseries table organisation uuid organisation uuid should become location organisation uuid organisation uuid | 1 |
2,744 | 27,383,981,982 | IssuesEvent | 2023-02-28 12:02:31 | camunda/zeebe | https://api.github.com/repos/camunda/zeebe | opened | Running blocking ActorJob can impact and block scheduled timers | kind/bug area/reliability component/scheduler | **Describe the bug**
When investigating https://github.com/camunda/zeebe/issues/11847 we (@oleschoenburg and I) realized that some scheduled job (which should release a latch) is not executed.
After further investigation, we realized that it depends on which Thread the timer (or job) is scheduled.
> **Note:** Jobs that are scheduled via `runDelayed` are put into a TimerQueue (DeadlineTimerWheel). [Each ActorThread has its own queue](https://github.com/camunda/zeebe/blob/main/scheduler/src/main/java/io/camunda/zeebe/scheduler/ActorThread.java#L41). This means a scheduled job is bound to the specific Thread, after submitted.
When we schedule a job, on a Thread X and later the same Thread executed another Actor, which blocks the thread, then this Timer can't be executed after the time is due.
**Impact:**
It depends on the case, but it can be severe if we wait on one Actor (blocking) on something and want to release it on another Actor (after some time, e.g. via a scheduled timer). This will end in a deadlock if both are on the same thread, as we have seen here https://github.com/camunda/zeebe/issues/11847.
This can happen everywhere in our code base. If an actor job is scheduled and executed and blocks an ActorThread it might block the execution of future ActorJobs, which are scheduled as `runDelayed`.
<!-- A clear and concise description of what the bug is. -->
**To Reproduce**
Run test from https://github.com/camunda/zeebe/issues/11847. Can be simplified in a way that one actor needs to wait for on a latch and the other actor schedules an job, which releases the latch. If we run this multiple times, the chances are high that we run into the situation.
<!--
Steps to reproduce the behavior
If possible add a minimal reproducer code sample
- when using the Java client: https://github.com/zeebe-io/zeebe-test-template-java
-->
**Expected behavior**
One issue is of course we can't always be sure that we are not blocking. Another is that we expect, based on our programming model, that actors are independent. If we schedule an timer on one we expect it to be executed, even if another might be blocked.
Ideally we should have one TimerQueue or observer thread, which checks that queue and puts the due timers or jobs into the ActorTask. This allows to decouple this a bit more, and it is not necessary to check all the time the queues and maintain them in each thread.
<!-- A clear and concise description of what you expected to happen. -->
**Log/Stacktrace**
In the following failing test run we can see that the scheduled job, has been scheduled on the same Thread as the Blocking stream processor. Based on the test https://github.com/camunda/zeebe/issues/11847
<!-- If possible add the full stacktrace or Zeebe log which contains the issue. -->
<details><summary>Failing test run</summary>
<p>
```
12:12:07.795 [io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.795 [Broker-0-LogStream-1] [-zb-actors-13] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.796 [] [-zb-actors-13] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.796 [io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Actor io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1 set new state WAKING_UP from WAITING
12:12:07.796 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.796 [Broker-0-LogStream-1] [-zb-actors-13] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.796 [io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.796 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.796 [io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1] [-zb-actors-13] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.822 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.823 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.823 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.823 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.823 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.823 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.823 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.823 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.823 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.823 [Broker-0-StreamProcessor-1] [-zb-actors-9] DEBUG io.camunda.zeebe.logstreams - Recovering state of partition 1 from snapshot
12:12:07.823 [Broker-0-StreamProcessor-1] [-zb-actors-9] INFO io.camunda.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
12:12:07.824 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Scheduled runnable io.camunda.zeebe.stream.impl.StreamProcessor$$Lambda$530/0x0000000800f72ed0@1027f7a6 - delay PT5S
12:12:07.824 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - ActorThread ACTIVE
12:12:07.824 [Broker-0-StreamProcessor-1] [-zb-actors-9] INFO io.camunda.zeebe.processor - Processor starts replay of events. [snapshot-position: -1, replay-mode: PROCESSING]
12:12:07.824 [Broker-0-StreamProcessor-1] [-zb-actors-9] INFO io.camunda.zeebe.processor - Processor finished replay, with [lastProcessedPosition: -1, lastWrittenPosition: -1]
12:12:07.824 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.824 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.824 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.824 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.824 [Broker-0-LogStream-1] [-zb-actors-9] TRACE io.camunda.zeebe.logstreams.impl.log.Sequencer - Starting new sequencer at position 1
12:12:07.824 [Broker-0-LogStream-1] [-zb-actors-9] DEBUG io.camunda.zeebe.logstreams.impl.flowcontrol.AppenderFlowControl - Configured log appender back pressure as BackpressureCfgVegas{initialLimit=1024, maxConcurrency=32768, alphaLimit=0.7, betaLimit=0.95}. Window limiting is disabled
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.825 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:07.825 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.825 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.825 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.825 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.825 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogAppender-1
12:12:07.825 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.825 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.825 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.825 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.825 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.825 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.825 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.825 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.825 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.825 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.825 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.825 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.825 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.825 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.825 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.825 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.825 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.826 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.826 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Actor io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor set new state WAKING_UP from WAITING
12:12:07.826 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.826 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.stream - SCHEDULE TASK ALTER
12:12:07.826 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.826 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.826 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.826 [] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.826 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Scheduled runnable io.camunda.zeebe.stream.impl.ProcessingScheduleServiceImpl$$Lambda$586/0x0000000800f90dd0@1adf9cd6 - delay PT1M
12:12:07.826 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.826 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - ActorThread ACTIVE
12:12:07.826 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.833 [] [main] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.833 [] [main] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.833 [] [main] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.833 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.834 [] [main] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:07.834 [] [main] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogAppender-1 set new state WAKING_UP from WAITING
12:12:07.834 [] [main] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:07.834 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.834 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.834 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.834 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:07.834 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogAppender-1
12:12:07.834 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogAppender-1 set new state WAKING_UP from WAITING
12:12:07.834 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:07.834 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.834 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.834 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.834 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.834 [Broker-0-LogAppender-1] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogAppender-1
12:12:07.835 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Wait processing, actortask Broker-0-StreamProcessor-1 ACTIVE phase: STARTED
12:12:08.036 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:14:08.036Z and await false
12:12:08.237 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:16:08.237Z and await false
12:12:08.438 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:18:08.438Z and await false
12:12:08.639 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:20:08.639Z and await false
12:12:08.839 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:22:08.839Z and await false
12:12:09.040 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:24:09.040Z and await false
12:12:09.241 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:26:09.241Z and await false
12:12:09.442 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:28:09.442Z and await false
12:12:09.642 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:30:09.642Z and await false
12:12:09.843 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:32:09.843Z and await false
12:12:10.044 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:34:10.044Z and await false
12:12:10.245 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:36:10.245Z and await false
12:12:10.446 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:38:10.446Z and await false
12:12:10.647 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:40:10.646Z and await false
12:12:10.847 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:42:10.847Z and await false
12:12:11.048 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:44:11.048Z and await false
12:12:11.249 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:46:11.249Z and await false
12:12:11.450 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:48:11.450Z and await false
12:12:11.651 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:50:11.650Z and await false
12:12:11.852 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:52:11.852Z and await false
12:12:12.053 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:54:12.053Z and await false
12:12:12.254 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:56:12.254Z and await false
12:12:12.455 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:58:12.455Z and await false
12:12:12.656 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T12:00:12.656Z and await false
12:12:12.839 [] [main] ERROR io.camunda.zeebe.stream - WAIT LATCH COUNT DOWN
12:12:12.840 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:12.840 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:12.840 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:12.841 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:12.841 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:12.841 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:12.841 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:12.841 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:12.841 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:12.841 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:12.841 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor set new state WAKING_UP from WAITING
12:12:12.841 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:12.841 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Scheduled Task, actortask io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor ACTIVE phase: STARTED
12:12:12.841 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - COUNT DOWN YO Clock: 2023-02-28T12:02:12.841Z
12:12:12.841 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - COUNTED DOWN 0
12:12:12.841 [] [main] DEBUG io.camunda.zeebe.stream - Close stream processor
12:12:12.841 [] [main] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:12.841 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Scheduled runnable io.camunda.zeebe.stream.impl.ProcessingScheduleServiceImpl$$Lambda$586/0x0000000800f90dd0@5fdd01e9 - delay PT1M
12:12:12.841 [] [main] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:12.841 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - ActorThread ACTIVE
12:12:12.841 [] [main] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:12.841 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:12.841 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:12.841 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:12.841 [] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:12.841 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:12.842 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor set new state WAKING_UP from WAITING
12:12:12.842 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:12.842 [] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:12.842 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:12.842 [] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:12.842 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:12.842 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:12.842 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:12.842 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:12.842 [Broker-0-LogStream-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:12.842 [Broker-0-StreamProcessor-1] [-zb-actors-3] DEBUG io.camunda.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
12:12:12.869 [] [main] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:12.869 [] [main] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:12.869 [] [main] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:12.869 [Broker-0-LogStream-1] [-zb-actors-1] INFO io.camunda.zeebe.logstreams - Close appender for log stream stream-1
12:12:12.869 [Broker-0-LogStream-1] [-zb-actors-1] INFO io.camunda.zeebe.logstreams.impl.log.Sequencer - Closing sequencer for writing
12:12:12.869 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:12.869 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogAppender-1 set new state WAKING_UP from WAITING
12:12:12.869 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:12.869 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:12.869 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:12.869 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:12.869 [Broker-0-LogAppender-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:12.869 [Broker-0-LogAppender-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:12.869 [Broker-0-LogAppender-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:12.869 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:12.869 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:12.869 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:12.869 [Broker-0-LogStream-1] [-zb-actors-1] INFO io.camunda.zeebe.logstreams - On closing logstream stream-1 close 1 readers
12:12:12.869 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:12.869 [] [main] DEBUG io.camunda.zeebe.util.actor - Closing actor thread ground '-zb-fs-workers'
12:12:12.870 [] [main] DEBUG io.camunda.zeebe.util.actor - Closing actor thread ground '-zb-actors'
12:12:12.870 [] [-zb-fs-workers-1] DEBUG io.camunda.zeebe.util.actor - Closing actor thread ground '-zb-fs-workers': closed successfully
12:12:12.871 [] [-zb-actors-10] DEBUG io.camunda.zeebe.util.actor - Closing actor thread ground '-zb-actors': closed successfully
org.awaitility.core.ConditionTimeoutException: Condition with alias 'ProcessScheduleService should still work' didn't complete within 5 seconds because condition with io.camunda.zeebe.stream.impl.StreamProcessorTest was not fulfilled.
at org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:167)
at org.awaitility.core.CallableCondition.await(CallableCondition.java:78)
at org.awaitility.core.CallableCondition.await(CallableCondition.java:26)
at org.awaitility.core.ConditionFactory.until(ConditionFactory.java:985)
at org.awaitility.core.ConditionFactory.until(ConditionFactory.java:954)
at io.camunda.zeebe.stream.impl.StreamProcessorTest.shouldRunAsyncSchedulingEvenIfProcessingIsBlocked(StreamProcessorTest.java:530)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:217)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:213)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:138)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:68)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:55)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:102)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:54)
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:57)
at com.intellij.rt.junit.IdeaTestRunner$Repeater$1.execute(IdeaTestRunner.java:38)
at com.intellij.rt.execution.junit.TestsRepeater.repeat(TestsRepeater.java:30)
at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:35)
at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:235)
at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)
Caused by: java.util.concurrent.TimeoutException
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:204)
at org.awaitility.core.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:101)
at org.awaitility.core.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:81)
at org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:103)
... 75 more
```
</p>
</details>
In contrast, in the successful test run the scheduling happens on a different thread.
<details><summary>Successful test run</summary>
<p>
```
12:12:07.578 [io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.578 [Broker-0-LogStream-1] [-zb-actors-10] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.579 [io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Actor io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1 set new state WAKING_UP from WAITING
12:12:07.579 [] [-zb-actors-10] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.579 [] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.579 [Broker-0-LogStream-1] [-zb-actors-10] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.579 [io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.579 [] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.579 [io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1] [-zb-actors-10] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.608 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.608 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.608 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.608 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.608 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.608 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.608 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.608 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.608 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.608 [Broker-0-StreamProcessor-1] [-zb-actors-9] DEBUG io.camunda.zeebe.logstreams - Recovering state of partition 1 from snapshot
12:12:07.608 [Broker-0-StreamProcessor-1] [-zb-actors-9] INFO io.camunda.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
12:12:07.609 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Scheduled runnable io.camunda.zeebe.stream.impl.StreamProcessor$$Lambda$530/0x0000000800f72ed0@301bcb3b - delay PT5S
12:12:07.609 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - ActorThread ACTIVE
12:12:07.609 [Broker-0-StreamProcessor-1] [-zb-actors-9] INFO io.camunda.zeebe.processor - Processor starts replay of events. [snapshot-position: -1, replay-mode: PROCESSING]
12:12:07.609 [Broker-0-StreamProcessor-1] [-zb-actors-9] INFO io.camunda.zeebe.processor - Processor finished replay, with [lastProcessedPosition: -1, lastWrittenPosition: -1]
12:12:07.609 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.609 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.609 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.609 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.609 [Broker-0-LogStream-1] [-zb-actors-9] TRACE io.camunda.zeebe.logstreams.impl.log.Sequencer - Starting new sequencer at position 1
12:12:07.609 [Broker-0-LogStream-1] [-zb-actors-9] DEBUG io.camunda.zeebe.logstreams.impl.flowcontrol.AppenderFlowControl - Configured log appender back pressure as BackpressureCfgVegas{initialLimit=1024, maxConcurrency=32768, alphaLimit=0.7, betaLimit=0.95}. Window limiting is disabled
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.610 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:07.610 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.610 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.610 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.610 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.610 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.610 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.610 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.610 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.610 [Broker-0-LogAppender-1] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogAppender-1
12:12:07.610 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.610 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.610 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.610 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.610 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.610 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.610 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.610 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.610 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.610 [] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.610 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.610 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.610 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.610 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.611 [Broker-0-LogStream-1] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.611 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.611 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor set new state WAKING_UP from WAITING
12:12:07.611 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.611 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.stream - SCHEDULE TASK ALTER
12:12:07.611 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.611 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.611 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.611 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.611 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Scheduled runnable io.camunda.zeebe.stream.impl.ProcessingScheduleServiceImpl$$Lambda$586/0x0000000800f90dd0@2093c8d8 - delay PT1M
12:12:07.611 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - ActorThread ACTIVE
12:12:07.611 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.611 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.618 [] [main] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.618 [] [main] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.618 [] [main] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.618 [Broker-0-LogStream-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.618 [] [main] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:07.619 [] [main] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogAppender-1 set new state WAKING_UP from WAITING
12:12:07.619 [] [main] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:07.619 [Broker-0-LogAppender-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.619 [Broker-0-LogAppender-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.619 [Broker-0-LogAppender-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.619 [Broker-0-LogAppender-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:07.619 [Broker-0-LogAppender-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogAppender-1
12:12:07.619 [Broker-0-LogAppender-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogAppender-1 set new state WAKING_UP from WAITING
12:12:07.619 [] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:07.619 [Broker-0-LogStream-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.619 [Broker-0-LogStream-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.619 [Broker-0-LogStream-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.619 [Broker-0-LogStream-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.619 [Broker-0-LogAppender-1] [-zb-actors-13] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogAppender-1
12:12:07.619 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Wait processing, actortask Broker-0-StreamProcessor-1 ACTIVE phase: STARTED
12:12:07.748 [] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.748 [] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Actor io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor set new state WAKING_UP from WAITING
12:12:07.748 [] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.748 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Scheduled Task, actortask io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor ACTIVE phase: STARTED
12:12:07.748 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - COUNT DOWN YO Clock: 2023-02-28T11:14:07.748Z
12:12:07.748 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - COUNTED DOWN 0
12:12:07.748 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Scheduled runnable io.camunda.zeebe.stream.impl.ProcessingScheduleServiceImpl$$Lambda$586/0x0000000800f90dd0@24ac5812 - delay PT1M
12:12:07.748 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - ActorThread ACTIVE
12:12:07.748 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.748 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.748 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:14:07.748Z and await true
12:12:07.749 [] [main] ERROR io.camunda.zeebe.stream - WAIT LATCH COUNT DOWN
12:12:07.750 [] [main] DEBUG io.camunda.zeebe.stream - Close stream processor
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.750 [] [main] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] DEBUG io.camunda.zeebe.scheduler.ActorTask - Discard job io.camunda.zeebe.scheduler.future.FutureContinuationRunnable QUEUED from fastLane of Actor Broker-0-StreamProcessor-1.
12:12:07.750 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor set new state WAKING_UP from WAITING
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.750 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.750 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.750 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] DEBUG io.camunda.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
12:12:07.750 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.780 [] [main] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.780 [] [main] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.780 [] [main] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.780 [Broker-0-LogStream-1] [-zb-actors-1] INFO io.camunda.zeebe.logstreams - Close appender for log stream stream-1
12:12:07.781 [Broker-0-LogStream-1] [-zb-actors-1] INFO io.camunda.zeebe.logstreams.impl.log.Sequencer - Closing sequencer for writing
12:12:07.781 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:07.781 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogAppender-1 set new state WAKING_UP from WAITING
12:12:07.781 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:07.781 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.781 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:07.781 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:07.781 [Broker-0-LogAppender-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.781 [Broker-0-LogAppender-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.781 [Broker-0-LogAppender-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.781 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.781 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.781 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.781 [Broker-0-LogStream-1] [-zb-actors-1] INFO io.camunda.zeebe.logstreams - On closing logstream stream-1 close 1 readers
12:12:07.781 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.781 [] [main] DEBUG io.camunda.zeebe.util.actor - Closing actor thread ground '-zb-fs-workers'
12:12:07.782 [] [main] DEBUG io.camunda.zeebe.util.actor - Closing actor thread ground '-zb-actors'
12:12:07.782 [] [-zb-fs-workers-0] DEBUG io.camunda.zeebe.util.actor - Closing actor thread ground '-zb-fs-workers': closed successfully
12:12:07.783 [] [-zb-actors-11] DEBUG io.camunda.zeebe.util.actor - Closing actor thread ground '-zb-actors': closed successfully
```
</p>
</details>
**Environment:**
- OS:
- Zeebe Version: all
- Configuration:
Running blocking ActorJob can impact and block scheduled timers

**Describe the bug**
When investigating https://github.com/camunda/zeebe/issues/11847 we (@oleschoenburg and I) realized that some scheduled job (which should release a latch) is not executed.
After further investigation, we realized that it depends on which Thread the timer (or job) is scheduled on.
> **Note:** Jobs that are scheduled via `runDelayed` are put into a TimerQueue (DeadlineTimerWheel). [Each ActorThread has its own queue](https://github.com/camunda/zeebe/blob/main/scheduler/src/main/java/io/camunda/zeebe/scheduler/ActorThread.java#L41). This means a scheduled job is bound to that specific Thread once it has been submitted.
When we schedule a job on a Thread X, and the same Thread later executes another Actor that blocks the thread, then this Timer cannot be executed when it is due.
**Impact:**
It depends on the case, but it can be severe if one Actor blocks while waiting on something and the release is supposed to happen on another Actor (after some time, e.g. via a scheduled timer). This ends in a deadlock if both run on the same thread, as we have seen in https://github.com/camunda/zeebe/issues/11847.
This can happen anywhere in our code base: if an actor job blocks an ActorThread while executing, it can block the execution of future ActorJobs that were scheduled via `runDelayed` on that thread.
**To Reproduce**
Run the test from https://github.com/camunda/zeebe/issues/11847. It can be simplified so that one actor waits on a latch and another actor schedules a job which releases the latch. If we run this multiple times, the chances are high that we run into this situation.
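The reproducer above can be modeled with a plain-JDK analogue (this is deliberately not the Zeebe `ActorScheduler` API; `TimerStarvation` and `latchReleased` are hypothetical names). A single-threaded `ScheduledExecutorService` plays the role of one ActorThread with its own timer queue: when the blocking job and the delayed job share the thread, the delayed job is starved and the latch is never released.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimerStarvation {

  /** Returns true if the delayed job released the latch before the timeout. */
  static boolean latchReleased(ScheduledExecutorService blockerPool,
                               ScheduledExecutorService timerPool) throws Exception {
    CountDownLatch latch = new CountDownLatch(1);
    // The "blocking ActorJob": occupies its thread until the latch is released.
    Future<?> blocker = blockerPool.submit(() -> {
      try {
        latch.await();
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    // The "runDelayed" job that is supposed to release the latch after 100 ms.
    timerPool.schedule(latch::countDown, 100, TimeUnit.MILLISECONDS);
    try {
      blocker.get(1, TimeUnit.SECONDS);
      return true; // the timer fired and unblocked the waiting job
    } catch (TimeoutException e) {
      blocker.cancel(true); // the timer was starved: its thread was blocked
      return false;
    }
  }

  public static void main(String[] args) throws Exception {
    // Same single thread for both jobs -> the delayed job can never run.
    ScheduledExecutorService shared = Executors.newSingleThreadScheduledExecutor();
    System.out.println("same thread:      released=" + latchReleased(shared, shared));

    // Separate threads -> the timer fires and the blocked job is released.
    ScheduledExecutorService a = Executors.newSingleThreadScheduledExecutor();
    ScheduledExecutorService b = Executors.newSingleThreadScheduledExecutor();
    System.out.println("separate threads: released=" + latchReleased(a, b));

    shared.shutdownNow();
    a.shutdownNow();
    b.shutdownNow();
  }
}
```

Running both variants back to back makes the thread-binding visible: only the shared-thread variant deadlocks until the timeout.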
**Expected behavior**
One issue is, of course, that we can't always be sure that we are not blocking. Another is that, based on our programming model, we expect actors to be independent: if we schedule a timer on one actor, we expect it to be executed even if another actor is blocked.
Ideally we would have one shared TimerQueue and an observer thread that checks this queue and puts due timers or jobs into the corresponding ActorTask. This decouples timers from individual threads, so each thread no longer has to constantly check and maintain its own queue.
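The proposed design could be sketched roughly as follows (again plain JDK, not the actual Zeebe scheduler; `CentralTimerService` is a hypothetical name): a single shared `DelayQueue` plus a dedicated dispatcher thread that only hands due jobs to the target actor's executor, so a blocked worker thread can no longer starve timers belonging to other actors.

```java
import java.time.Duration;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.Executor;
import java.util.concurrent.TimeUnit;

public final class CentralTimerService implements AutoCloseable {

  private record Timer(long dueNanos, Runnable job, Executor target) implements Delayed {
    @Override
    public long getDelay(TimeUnit unit) {
      return unit.convert(dueNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
      return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                          other.getDelay(TimeUnit.NANOSECONDS));
    }
  }

  private final DelayQueue<Timer> timers = new DelayQueue<>();
  private final Thread dispatcher = new Thread(this::dispatch, "timer-dispatcher");

  public CentralTimerService() {
    dispatcher.setDaemon(true);
    dispatcher.start();
  }

  /** Analogue of runDelayed: the job is bound to an actor, not to a worker thread. */
  public void runDelayed(Duration delay, Runnable job, Executor targetActor) {
    timers.put(new Timer(System.nanoTime() + delay.toNanos(), job, targetActor));
  }

  private void dispatch() {
    try {
      while (!Thread.currentThread().isInterrupted()) {
        Timer due = timers.take();        // blocks until the next timer is due
        due.target().execute(due.job());  // enqueue on the actor, never run inline
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // shutting down
    }
  }

  @Override
  public void close() {
    dispatcher.interrupt();
  }
}
```

The key design point is that the dispatcher never runs the job inline: it only enqueues it on the target executor, so even a fully blocked actor only delays its own jobs.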
**Log/Stacktrace**
In the following failing test run we can see that the scheduled job was scheduled on the same thread as the blocking stream processor. Based on the test from https://github.com/camunda/zeebe/issues/11847.
<details><summary>Failing test run</summary>
<p>
```
12:12:07.795 [io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.795 [Broker-0-LogStream-1] [-zb-actors-13] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.796 [] [-zb-actors-13] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.796 [io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Actor io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1 set new state WAKING_UP from WAITING
12:12:07.796 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.796 [Broker-0-LogStream-1] [-zb-actors-13] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.796 [io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.796 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.796 [io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1] [-zb-actors-13] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.822 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.823 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.823 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.823 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.823 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.823 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.823 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.823 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.823 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.823 [Broker-0-StreamProcessor-1] [-zb-actors-9] DEBUG io.camunda.zeebe.logstreams - Recovering state of partition 1 from snapshot
12:12:07.823 [Broker-0-StreamProcessor-1] [-zb-actors-9] INFO io.camunda.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
12:12:07.824 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Scheduled runnable io.camunda.zeebe.stream.impl.StreamProcessor$$Lambda$530/0x0000000800f72ed0@1027f7a6 - delay PT5S
12:12:07.824 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - ActorThread ACTIVE
12:12:07.824 [Broker-0-StreamProcessor-1] [-zb-actors-9] INFO io.camunda.zeebe.processor - Processor starts replay of events. [snapshot-position: -1, replay-mode: PROCESSING]
12:12:07.824 [Broker-0-StreamProcessor-1] [-zb-actors-9] INFO io.camunda.zeebe.processor - Processor finished replay, with [lastProcessedPosition: -1, lastWrittenPosition: -1]
12:12:07.824 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.824 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.824 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.824 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.824 [Broker-0-LogStream-1] [-zb-actors-9] TRACE io.camunda.zeebe.logstreams.impl.log.Sequencer - Starting new sequencer at position 1
12:12:07.824 [Broker-0-LogStream-1] [-zb-actors-9] DEBUG io.camunda.zeebe.logstreams.impl.flowcontrol.AppenderFlowControl - Configured log appender back pressure as BackpressureCfgVegas{initialLimit=1024, maxConcurrency=32768, alphaLimit=0.7, betaLimit=0.95}. Window limiting is disabled
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.825 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:07.825 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.825 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.825 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.825 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.825 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogAppender-1
12:12:07.825 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.825 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.825 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.825 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.825 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.825 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.825 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.825 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.825 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.825 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.825 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.825 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.825 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.825 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.825 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.825 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.825 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.825 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.826 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.826 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Actor io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor set new state WAKING_UP from WAITING
12:12:07.826 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.826 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.stream - SCHEDULE TASK ALTER
12:12:07.826 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.826 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.826 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.826 [] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.826 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Scheduled runnable io.camunda.zeebe.stream.impl.ProcessingScheduleServiceImpl$$Lambda$586/0x0000000800f90dd0@1adf9cd6 - delay PT1M
12:12:07.826 [Broker-0-StreamProcessor-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.826 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - ActorThread ACTIVE
12:12:07.826 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.833 [] [main] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.833 [] [main] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.833 [] [main] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.833 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.834 [] [main] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:07.834 [] [main] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogAppender-1 set new state WAKING_UP from WAITING
12:12:07.834 [] [main] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:07.834 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.834 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.834 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.834 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:07.834 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogAppender-1
12:12:07.834 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogAppender-1 set new state WAKING_UP from WAITING
12:12:07.834 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:07.834 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.834 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.834 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.834 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.834 [Broker-0-LogAppender-1] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogAppender-1
12:12:07.835 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Wait processing, actortask Broker-0-StreamProcessor-1 ACTIVE phase: STARTED
12:12:08.036 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:14:08.036Z and await false
12:12:08.237 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:16:08.237Z and await false
12:12:08.438 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:18:08.438Z and await false
12:12:08.639 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:20:08.639Z and await false
12:12:08.839 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:22:08.839Z and await false
12:12:09.040 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:24:09.040Z and await false
12:12:09.241 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:26:09.241Z and await false
12:12:09.442 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:28:09.442Z and await false
12:12:09.642 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:30:09.642Z and await false
12:12:09.843 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:32:09.843Z and await false
12:12:10.044 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:34:10.044Z and await false
12:12:10.245 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:36:10.245Z and await false
12:12:10.446 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:38:10.446Z and await false
12:12:10.647 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:40:10.646Z and await false
12:12:10.847 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:42:10.847Z and await false
12:12:11.048 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:44:11.048Z and await false
12:12:11.249 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:46:11.249Z and await false
12:12:11.450 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:48:11.450Z and await false
12:12:11.651 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:50:11.650Z and await false
12:12:11.852 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:52:11.852Z and await false
12:12:12.053 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:54:12.053Z and await false
12:12:12.254 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:56:12.254Z and await false
12:12:12.455 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:58:12.455Z and await false
12:12:12.656 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T12:00:12.656Z and await false
12:12:12.839 [] [main] ERROR io.camunda.zeebe.stream - WAIT LATCH COUNT DOWN
12:12:12.840 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:12.840 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:12.840 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:12.841 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:12.841 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:12.841 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:12.841 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:12.841 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:12.841 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:12.841 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:12.841 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor set new state WAKING_UP from WAITING
12:12:12.841 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:12.841 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Scheduled Task, actortask io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor ACTIVE phase: STARTED
12:12:12.841 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - COUNT DOWN YO Clock: 2023-02-28T12:02:12.841Z
12:12:12.841 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - COUNTED DOWN 0
12:12:12.841 [] [main] DEBUG io.camunda.zeebe.stream - Close stream processor
12:12:12.841 [] [main] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:12.841 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Scheduled runnable io.camunda.zeebe.stream.impl.ProcessingScheduleServiceImpl$$Lambda$586/0x0000000800f90dd0@5fdd01e9 - delay PT1M
12:12:12.841 [] [main] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:12.841 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - ActorThread ACTIVE
12:12:12.841 [] [main] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:12.841 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:12.841 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:12.841 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:12.841 [] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:12.841 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:12.842 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor set new state WAKING_UP from WAITING
12:12:12.842 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:12.842 [] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:12.842 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:12.842 [] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:12.842 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:12.842 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:12.842 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:12.842 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:12.842 [Broker-0-LogStream-1] [-zb-actors-3] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:12.842 [Broker-0-StreamProcessor-1] [-zb-actors-3] DEBUG io.camunda.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
12:12:12.869 [] [main] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:12.869 [] [main] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:12.869 [] [main] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:12.869 [Broker-0-LogStream-1] [-zb-actors-1] INFO io.camunda.zeebe.logstreams - Close appender for log stream stream-1
12:12:12.869 [Broker-0-LogStream-1] [-zb-actors-1] INFO io.camunda.zeebe.logstreams.impl.log.Sequencer - Closing sequencer for writing
12:12:12.869 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:12.869 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogAppender-1 set new state WAKING_UP from WAITING
12:12:12.869 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:12.869 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:12.869 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:12.869 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:12.869 [Broker-0-LogAppender-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:12.869 [Broker-0-LogAppender-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:12.869 [Broker-0-LogAppender-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:12.869 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:12.869 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:12.869 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:12.869 [Broker-0-LogStream-1] [-zb-actors-1] INFO io.camunda.zeebe.logstreams - On closing logstream stream-1 close 1 readers
12:12:12.869 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:12.869 [] [main] DEBUG io.camunda.zeebe.util.actor - Closing actor thread ground '-zb-fs-workers'
12:12:12.870 [] [main] DEBUG io.camunda.zeebe.util.actor - Closing actor thread ground '-zb-actors'
12:12:12.870 [] [-zb-fs-workers-1] DEBUG io.camunda.zeebe.util.actor - Closing actor thread ground '-zb-fs-workers': closed successfully
12:12:12.871 [] [-zb-actors-10] DEBUG io.camunda.zeebe.util.actor - Closing actor thread ground '-zb-actors': closed successfully
org.awaitility.core.ConditionTimeoutException: Condition with alias 'ProcessScheduleService should still work' didn't complete within 5 seconds because condition with io.camunda.zeebe.stream.impl.StreamProcessorTest was not fulfilled.
at org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:167)
at org.awaitility.core.CallableCondition.await(CallableCondition.java:78)
at org.awaitility.core.CallableCondition.await(CallableCondition.java:26)
at org.awaitility.core.ConditionFactory.until(ConditionFactory.java:985)
at org.awaitility.core.ConditionFactory.until(ConditionFactory.java:954)
at io.camunda.zeebe.stream.impl.StreamProcessorTest.shouldRunAsyncSchedulingEvenIfProcessingIsBlocked(StreamProcessorTest.java:530)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:217)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:213)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:138)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:68)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:55)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:102)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:54)
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:57)
at com.intellij.rt.junit.IdeaTestRunner$Repeater$1.execute(IdeaTestRunner.java:38)
at com.intellij.rt.execution.junit.TestsRepeater.repeat(TestsRepeater.java:30)
at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:35)
at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:235)
at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)
Caused by: java.util.concurrent.TimeoutException
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:204)
at org.awaitility.core.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:101)
at org.awaitility.core.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:81)
at org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:103)
... 75 more
```
</p>
</details>
In contrast, in the successful test run the scheduling happens on a different thread.
<details><summary>Successful test run</summary>
<p>
```
12:12:07.578 [io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.578 [Broker-0-LogStream-1] [-zb-actors-10] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.579 [io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Actor io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1 set new state WAKING_UP from WAITING
12:12:07.579 [] [-zb-actors-10] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.579 [] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.579 [Broker-0-LogStream-1] [-zb-actors-10] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.579 [io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.579 [] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.579 [io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1] [-zb-actors-10] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.logstreams.util.SyncLogStreamBuilder$1
12:12:07.608 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.608 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.608 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.608 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.608 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.608 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.608 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.608 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.608 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.608 [Broker-0-StreamProcessor-1] [-zb-actors-9] DEBUG io.camunda.zeebe.logstreams - Recovering state of partition 1 from snapshot
12:12:07.608 [Broker-0-StreamProcessor-1] [-zb-actors-9] INFO io.camunda.zeebe.logstreams - Recovered state of partition 1 from snapshot at position -1
12:12:07.609 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Scheduled runnable io.camunda.zeebe.stream.impl.StreamProcessor$$Lambda$530/0x0000000800f72ed0@301bcb3b - delay PT5S
12:12:07.609 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - ActorThread ACTIVE
12:12:07.609 [Broker-0-StreamProcessor-1] [-zb-actors-9] INFO io.camunda.zeebe.processor - Processor starts replay of events. [snapshot-position: -1, replay-mode: PROCESSING]
12:12:07.609 [Broker-0-StreamProcessor-1] [-zb-actors-9] INFO io.camunda.zeebe.processor - Processor finished replay, with [lastProcessedPosition: -1, lastWrittenPosition: -1]
12:12:07.609 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.609 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.609 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.609 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.609 [Broker-0-LogStream-1] [-zb-actors-9] TRACE io.camunda.zeebe.logstreams.impl.log.Sequencer - Starting new sequencer at position 1
12:12:07.609 [Broker-0-LogStream-1] [-zb-actors-9] DEBUG io.camunda.zeebe.logstreams.impl.flowcontrol.AppenderFlowControl - Configured log appender back pressure as BackpressureCfgVegas{initialLimit=1024, maxConcurrency=32768, alphaLimit=0.7, betaLimit=0.95}. Window limiting is disabled
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.610 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:07.610 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.610 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.610 [Broker-0-LogAppender-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.610 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.610 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.610 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.610 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.610 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.610 [Broker-0-LogAppender-1] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogAppender-1
12:12:07.610 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.610 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.610 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.610 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.610 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.610 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.610 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.610 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.610 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.610 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.610 [] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.610 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-5] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.610 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.610 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.610 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.611 [Broker-0-LogStream-1] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.611 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.611 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor set new state WAKING_UP from WAITING
12:12:07.611 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.611 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.stream - SCHEDULE TASK ALTER
12:12:07.611 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.611 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.611 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.611 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.611 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Scheduled runnable io.camunda.zeebe.stream.impl.ProcessingScheduleServiceImpl$$Lambda$586/0x0000000800f90dd0@2093c8d8 - delay PT1M
12:12:07.611 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - ActorThread ACTIVE
12:12:07.611 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-StreamProcessor-1
12:12:07.611 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.618 [] [main] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.618 [] [main] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.618 [] [main] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.618 [Broker-0-LogStream-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.618 [] [main] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:07.619 [] [main] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogAppender-1 set new state WAKING_UP from WAITING
12:12:07.619 [] [main] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:07.619 [Broker-0-LogAppender-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.619 [Broker-0-LogAppender-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.619 [Broker-0-LogAppender-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.619 [Broker-0-LogAppender-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:07.619 [Broker-0-LogAppender-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogAppender-1
12:12:07.619 [Broker-0-LogAppender-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogAppender-1 set new state WAKING_UP from WAITING
12:12:07.619 [] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:07.619 [Broker-0-LogStream-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.619 [Broker-0-LogStream-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-StreamProcessor-1 set new state WAKING_UP from WAITING
12:12:07.619 [Broker-0-LogStream-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.619 [Broker-0-LogStream-1] [-zb-actors-2] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.619 [Broker-0-LogAppender-1] [-zb-actors-13] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogAppender-1
12:12:07.619 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Wait processing, actortask Broker-0-StreamProcessor-1 ACTIVE phase: STARTED
12:12:07.748 [] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.748 [] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Actor io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor set new state WAKING_UP from WAITING
12:12:07.748 [] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.748 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Scheduled Task, actortask io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor ACTIVE phase: STARTED
12:12:07.748 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - COUNT DOWN YO Clock: 2023-02-28T11:14:07.748Z
12:12:07.748 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - COUNTED DOWN 0
12:12:07.748 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Scheduled runnable io.camunda.zeebe.stream.impl.ProcessingScheduleServiceImpl$$Lambda$586/0x0000000800f90dd0@24ac5812 - delay PT1M
12:12:07.748 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - ActorThread ACTIVE
12:12:07.748 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.748 [io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor] [-zb-actors-12] ERROR io.camunda.zeebe.util.actor - Start waiting - io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.748 [] [awaitility[ProcessScheduleService should still work]] ERROR io.camunda.zeebe.util.actor - Increase time Clock: 2023-02-28T11:14:07.748Z and await true
12:12:07.749 [] [main] ERROR io.camunda.zeebe.stream - WAIT LATCH COUNT DOWN
12:12:07.750 [] [main] DEBUG io.camunda.zeebe.stream - Close stream processor
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.750 [] [main] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-StreamProcessor-1
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] DEBUG io.camunda.zeebe.scheduler.ActorTask - Discard job io.camunda.zeebe.scheduler.future.FutureContinuationRunnable QUEUED from fastLane of Actor Broker-0-StreamProcessor-1.
12:12:07.750 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor set new state WAKING_UP from WAITING
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.750 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-StreamProcessor-1
12:12:07.750 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.750 [Broker-0-LogStream-1] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.750 [Broker-0-StreamProcessor-1] [-zb-actors-9] DEBUG io.camunda.zeebe.logstreams - Closed stream processor controller Broker-0-StreamProcessor-1.
12:12:07.750 [] [-zb-actors-9] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup io.camunda.zeebe.stream.impl.StreamProcessor$AsyncProcessingScheduleServiceActor
12:12:07.780 [] [main] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.780 [] [main] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.780 [] [main] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.780 [Broker-0-LogStream-1] [-zb-actors-1] INFO io.camunda.zeebe.logstreams - Close appender for log stream stream-1
12:12:07.781 [Broker-0-LogStream-1] [-zb-actors-1] INFO io.camunda.zeebe.logstreams.impl.log.Sequencer - Closing sequencer for writing
12:12:07.781 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogAppender-1
12:12:07.781 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogAppender-1 set new state WAKING_UP from WAITING
12:12:07.781 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:07.781 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Start waiting - Broker-0-LogStream-1
12:12:07.781 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:07.781 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogAppender-1
12:12:07.781 [Broker-0-LogAppender-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.781 [Broker-0-LogAppender-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Actor Broker-0-LogStream-1 set new state WAKING_UP from WAITING
12:12:07.781 [Broker-0-LogAppender-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.781 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.781 [Broker-0-LogStream-1] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Stop waiting - wakeup Broker-0-LogStream-1
12:12:07.781 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.781 [Broker-0-LogStream-1] [-zb-actors-1] INFO io.camunda.zeebe.logstreams - On closing logstream stream-1 close 1 readers
12:12:07.781 [] [-zb-actors-1] ERROR io.camunda.zeebe.util.actor - Resubmit - wakeup Broker-0-LogStream-1
12:12:07.781 [] [main] DEBUG io.camunda.zeebe.util.actor - Closing actor thread ground '-zb-fs-workers'
12:12:07.782 [] [main] DEBUG io.camunda.zeebe.util.actor - Closing actor thread ground '-zb-actors'
12:12:07.782 [] [-zb-fs-workers-0] DEBUG io.camunda.zeebe.util.actor - Closing actor thread ground '-zb-fs-workers': closed successfully
12:12:07.783 [] [-zb-actors-11] DEBUG io.camunda.zeebe.util.actor - Closing actor thread ground '-zb-actors': closed successfully
```
</p>
</details>
**Environment:**
- OS:
- Zeebe Version: all
- Configuration:
| non_test | running blocking actorjob can impact and block scheduled timers describe the bug when investigating we oleschoenburg and i realized that some scheduled job which should release a latch is not executed after further investigation we realized that it depends on which thread the timer or job is scheduled note jobs that are scheduled via rundelayed are put into a timerqueue deadlinetimerwheel this means a scheduled job is bound to the specific thread after submitted when we schedule a job on a thread x and later the same thread executed another actor which blocks the thread then this timer can t be executed after the time is due impact it depends on the case but it can be severe if we wait on one actor blocking on something and want to release it on another actor after some time e g via a scheduled timer this will end in a deadlock if both are on the same thread as we have seen here this can happen everywhere in our code base if an actor job is scheduled and executed and blocks an actorthread it might block the execution of future actorjobs which are scheduled as rundelayed to reproduce run test from can be simplified in a way that one actor needs to wait for on a latch and the other actor schedules an job which releases the latch if we run this multiple times the chances are high that we run into the situation steps to reproduce the behavior if possible add a minimal reproducer code sample when using the java client expected behavior one issue is of course we can t always be sure that we are not blocking another is that we expect based on our programming model that actors are independent if we schedule an timer on one we expect it to be executed even if another might be blocked ideally we should have one timerqueue or observer thread which checks that queue and puts the due timers or jobs into the actortask this allows to decouple this a bit more and it is not necessary to check all the time the queues and maintain them in each thread log stacktrace in 
the following failing test run we can see that the scheduled job has been scheduled on the same thread as the blocking stream processor based on the test failing test run error io camunda zeebe util actor start waiting io camunda zeebe logstreams util synclogstreambuilder error io camunda zeebe util actor stop waiting wakeup io camunda zeebe logstreams util synclogstreambuilder error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor actor io camunda zeebe logstreams util synclogstreambuilder set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup io camunda zeebe logstreams util synclogstreambuilder error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor stop waiting wakeup io camunda zeebe logstreams util synclogstreambuilder error io camunda zeebe util actor resubmit wakeup io camunda zeebe logstreams util synclogstreambuilder error io camunda zeebe util actor start waiting io camunda zeebe logstreams util synclogstreambuilder error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor start waiting broker streamprocessor error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor actor broker streamprocessor set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker streamprocessor error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor resubmit wakeup broker streamprocessor debug io camunda zeebe logstreams recovering state of partition from snapshot info io camunda zeebe logstreams recovered state of partition from snapshot at position error io camunda zeebe util actor scheduled runnable io camunda zeebe stream 
impl streamprocessor lambda delay error io camunda zeebe util actor actorthread active info io camunda zeebe processor processor starts replay of events info io camunda zeebe processor processor finished replay with error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor start waiting broker streamprocessor trace io camunda zeebe logstreams impl log sequencer starting new sequencer at position debug io camunda zeebe logstreams impl flowcontrol appenderflowcontrol configured log appender back pressure as backpressurecfgvegas initiallimit maxconcurrency alphalimit betalimit window limiting is disabled error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor stop waiting wakeup broker logappender error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor resubmit wakeup broker logappender error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor stop waiting wakeup broker logappender error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor actor broker streamprocessor set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker streamprocessor error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor start waiting broker logappender error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup 
broker logstream error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor start waiting broker streamprocessor error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor stop waiting wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor actor broker streamprocessor set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker streamprocessor error io camunda zeebe util actor resubmit wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor start waiting io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor stop waiting wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor actor io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe stream 
schedule task alter error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor start waiting broker streamprocessor error io camunda zeebe util actor actor broker streamprocessor set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker streamprocessor error io camunda zeebe util actor scheduled runnable io camunda zeebe stream impl processingscheduleserviceimpl lambda delay error io camunda zeebe util actor start waiting broker streamprocessor error io camunda zeebe util actor actorthread active error io camunda zeebe util actor start waiting io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor stop waiting wakeup broker logappender error io camunda zeebe util actor actor broker logappender set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logappender error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor stop waiting wakeup broker logappender error io camunda zeebe util actor start waiting broker logappender error io camunda zeebe util actor actor broker logappender set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logappender error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor actor broker streamprocessor set new state waking up from waiting error io camunda zeebe util 
actor resubmit wakeup broker streamprocessor error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor start waiting broker logappender error io camunda zeebe util actor wait processing actortask broker streamprocessor active phase started error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe util actor increase time clock and await false error io camunda zeebe stream wait 
latch count down error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor start waiting broker streamprocessor error io camunda zeebe util actor actor broker streamprocessor set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker streamprocessor error io camunda zeebe util actor start waiting broker streamprocessor error io camunda zeebe util actor stop waiting wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor actor io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor scheduled task actortask io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor active phase started error io camunda zeebe util actor count down yo clock error io camunda zeebe util actor counted down debug io camunda zeebe stream close stream processor error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor scheduled runnable io camunda zeebe stream impl processingscheduleserviceimpl lambda delay error io camunda zeebe util actor actor broker streamprocessor set new state waking up from waiting error io camunda zeebe util actor actorthread active error io camunda zeebe util actor resubmit wakeup broker streamprocessor error io camunda zeebe util actor stop waiting wakeup io camunda zeebe stream impl streamprocessor 
asyncprocessingscheduleserviceactor error io camunda zeebe util actor start waiting io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor resubmit wakeup broker streamprocessor error io camunda zeebe util actor stop waiting wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor actor io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor resubmit wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor resubmit wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor resubmit wakeup broker streamprocessor error io camunda zeebe util actor start waiting broker logstream debug io camunda zeebe logstreams closed stream processor controller broker streamprocessor error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream info io camunda zeebe logstreams close appender for log stream stream info io camunda zeebe logstreams impl log sequencer closing sequencer for writing error io camunda zeebe util actor stop waiting wakeup broker logappender error io 
camunda zeebe util actor actor broker logappender set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logappender error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor resubmit wakeup broker logappender error io camunda zeebe util actor resubmit wakeup broker logappender error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor resubmit wakeup broker logstream info io camunda zeebe logstreams on closing logstream stream close readers error io camunda zeebe util actor resubmit wakeup broker logstream debug io camunda zeebe util actor closing actor thread ground zb fs workers debug io camunda zeebe util actor closing actor thread ground zb actors debug io camunda zeebe util actor closing actor thread ground zb fs workers closed successfully debug io camunda zeebe util actor closing actor thread ground zb actors closed successfully org awaitility core conditiontimeoutexception condition with alias processscheduleservice should still work didn t complete within seconds because condition with io camunda zeebe stream impl streamprocessortest was not fulfilled at org awaitility core conditionawaiter await conditionawaiter java at org awaitility core callablecondition await callablecondition java at org awaitility core callablecondition await callablecondition java at org awaitility core conditionfactory until conditionfactory java at org awaitility core conditionfactory until conditionfactory java at io camunda zeebe stream impl streamprocessortest shouldrunasyncschedulingevenifprocessingisblocked streamprocessortest java at java base jdk 
internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org junit platform commons util reflectionutils invokemethod reflectionutils java at org junit jupiter engine execution methodinvocation proceed methodinvocation java at org junit jupiter engine execution invocationinterceptorchain validatinginvocation proceed invocationinterceptorchain java at org junit jupiter engine extension timeoutextension intercept timeoutextension java at org junit jupiter engine extension timeoutextension intercepttestablemethod timeoutextension java at org junit jupiter engine extension timeoutextension intercepttestmethod timeoutextension java at org junit jupiter engine execution interceptingexecutableinvoker reflectiveinterceptorcall lambda ofvoidmethod interceptingexecutableinvoker java at org junit jupiter engine execution interceptingexecutableinvoker lambda invoke interceptingexecutableinvoker java at org junit jupiter engine execution invocationinterceptorchain interceptedinvocation proceed invocationinterceptorchain java at org junit jupiter engine execution invocationinterceptorchain proceed invocationinterceptorchain java at org junit jupiter engine execution invocationinterceptorchain chainandinvoke invocationinterceptorchain java at org junit jupiter engine execution invocationinterceptorchain invoke invocationinterceptorchain java at org junit jupiter engine execution interceptingexecutableinvoker invoke interceptingexecutableinvoker java at org junit jupiter engine execution interceptingexecutableinvoker invoke interceptingexecutableinvoker java at org junit jupiter engine descriptor testmethodtestdescriptor lambda invoketestmethod testmethodtestdescriptor java at org junit platform engine support hierarchical 
throwablecollector execute throwablecollector java at org junit jupiter engine descriptor testmethodtestdescriptor invoketestmethod testmethodtestdescriptor java at org junit jupiter engine descriptor testmethodtestdescriptor execute testmethodtestdescriptor java at org junit jupiter engine descriptor testmethodtestdescriptor execute testmethodtestdescriptor java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical throwablecollector execute throwablecollector java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical node around node java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical throwablecollector execute throwablecollector java at org junit platform engine support hierarchical nodetesttask executerecursively nodetesttask java at org junit platform engine support hierarchical nodetesttask execute nodetesttask java at java base java util arraylist foreach arraylist java at org junit platform engine support hierarchical samethreadhierarchicaltestexecutorservice invokeall samethreadhierarchicaltestexecutorservice java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical throwablecollector execute throwablecollector java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical node around node java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical throwablecollector execute throwablecollector java at org junit platform engine support hierarchical 
nodetesttask executerecursively nodetesttask java at org junit platform engine support hierarchical nodetesttask execute nodetesttask java at java base java util arraylist foreach arraylist java at org junit platform engine support hierarchical samethreadhierarchicaltestexecutorservice invokeall samethreadhierarchicaltestexecutorservice java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical throwablecollector execute throwablecollector java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical node around node java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical throwablecollector execute throwablecollector java at org junit platform engine support hierarchical nodetesttask executerecursively nodetesttask java at org junit platform engine support hierarchical nodetesttask execute nodetesttask java at org junit platform engine support hierarchical samethreadhierarchicaltestexecutorservice submit samethreadhierarchicaltestexecutorservice java at org junit platform engine support hierarchical hierarchicaltestexecutor execute hierarchicaltestexecutor java at org junit platform engine support hierarchical hierarchicaltestengine execute hierarchicaltestengine java at org junit platform launcher core engineexecutionorchestrator execute engineexecutionorchestrator java at org junit platform launcher core engineexecutionorchestrator execute engineexecutionorchestrator java at org junit platform launcher core engineexecutionorchestrator execute engineexecutionorchestrator java at org junit platform launcher core engineexecutionorchestrator lambda execute engineexecutionorchestrator java at org junit platform launcher core engineexecutionorchestrator 
withinterceptedstreams engineexecutionorchestrator java at org junit platform launcher core engineexecutionorchestrator execute engineexecutionorchestrator java at org junit platform launcher core defaultlauncher execute defaultlauncher java at org junit platform launcher core defaultlauncher execute defaultlauncher java at org junit platform launcher core defaultlaunchersession delegatinglauncher execute defaultlaunchersession java at org junit platform launcher core sessionperrequestlauncher execute sessionperrequestlauncher java at com intellij startrunnerwithargs java at com intellij rt junit ideatestrunner repeater execute ideatestrunner java at com intellij rt execution junit testsrepeater repeat testsrepeater java at com intellij rt junit ideatestrunner repeater startrunnerwithargs ideatestrunner java at com intellij rt junit junitstarter preparestreamsandstart junitstarter java at com intellij rt junit junitstarter main junitstarter java caused by java util concurrent timeoutexception at java base java util concurrent futuretask get futuretask java at org awaitility core uninterruptibles getuninterruptibly uninterruptibles java at org awaitility core uninterruptibles getuninterruptibly uninterruptibles java at org awaitility core conditionawaiter await conditionawaiter java more in contrast to the succeeding test run where the scheduling happens on a different thread succeed test run error io camunda zeebe util actor start waiting io camunda zeebe logstreams util synclogstreambuilder error io camunda zeebe util actor stop waiting wakeup io camunda zeebe logstreams util synclogstreambuilder error io camunda zeebe util actor actor io camunda zeebe logstreams util synclogstreambuilder set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor resubmit wakeup io camunda zeebe logstreams util synclogstreambuilder error io camunda zeebe util actor start waiting broker logstream error 
io camunda zeebe util actor stop waiting wakeup io camunda zeebe logstreams util synclogstreambuilder error io camunda zeebe util actor resubmit wakeup io camunda zeebe logstreams util synclogstreambuilder error io camunda zeebe util actor start waiting io camunda zeebe logstreams util synclogstreambuilder error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor start waiting broker streamprocessor error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor actor broker streamprocessor set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker streamprocessor error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor resubmit wakeup broker streamprocessor debug io camunda zeebe logstreams recovering state of partition from snapshot info io camunda zeebe logstreams recovered state of partition from snapshot at position error io camunda zeebe util actor scheduled runnable io camunda zeebe stream impl streamprocessor lambda delay error io camunda zeebe util actor actorthread active info io camunda zeebe processor processor starts replay of events info io camunda zeebe processor processor finished replay with error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor start waiting broker streamprocessor trace io camunda zeebe logstreams impl log sequencer starting new sequencer at position debug io camunda zeebe logstreams impl flowcontrol appenderflowcontrol configured log appender back pressure as backpressurecfgvegas initiallimit 
maxconcurrency alphalimit betalimit window limiting is disabled error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor stop waiting wakeup broker logappender error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor resubmit wakeup broker logappender error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor stop waiting wakeup broker logappender error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor actor broker streamprocessor set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker streamprocessor error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor start waiting broker streamprocessor error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor actor broker streamprocessor set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker streamprocessor error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor start waiting broker logappender error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor start waiting broker streamprocessor error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor 
resubmit wakeup broker logstream error io camunda zeebe util actor stop waiting wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor stop waiting wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor actor broker streamprocessor set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker streamprocessor error io camunda zeebe util actor resubmit wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor start waiting io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor stop waiting wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor actor io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe stream schedule task alter error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor start waiting broker streamprocessor error io camunda zeebe util actor actor broker streamprocessor set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker streamprocessor error io camunda zeebe util 
actor scheduled runnable io camunda zeebe stream impl processingscheduleserviceimpl lambda delay error io camunda zeebe util actor actorthread active error io camunda zeebe util actor start waiting broker streamprocessor error io camunda zeebe util actor start waiting io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor stop waiting wakeup broker logappender error io camunda zeebe util actor actor broker logappender set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logappender error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor stop waiting wakeup broker logappender error io camunda zeebe util actor start waiting broker logappender error io camunda zeebe util actor actor broker logappender set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logappender error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor actor broker streamprocessor set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker streamprocessor error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor start waiting broker logappender error io camunda zeebe util actor wait processing actortask broker streamprocessor active phase started error io camunda zeebe util actor stop waiting wakeup io camunda zeebe stream impl 
streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor actor io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor scheduled task actortask io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor active phase started error io camunda zeebe util actor count down yo clock error io camunda zeebe util actor counted down error io camunda zeebe util actor scheduled runnable io camunda zeebe stream impl processingscheduleserviceimpl lambda delay error io camunda zeebe util actor actorthread active error io camunda zeebe util actor stop waiting wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor start waiting io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor increase time clock and await true error io camunda zeebe stream wait latch count down debug io camunda zeebe stream close stream processor error io camunda zeebe util actor stop waiting wakeup broker streamprocessor error io camunda zeebe util actor stop waiting wakeup broker streamprocessor debug io camunda zeebe scheduler actortask discard job io camunda zeebe scheduler future futurecontinuationrunnable queued from fastlane of actor broker streamprocessor error io camunda zeebe util actor resubmit wakeup broker streamprocessor error io camunda zeebe util actor stop waiting wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor actor io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup io camunda zeebe stream impl 
streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor resubmit wakeup broker streamprocessor error io camunda zeebe util actor resubmit wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor start waiting broker logstream debug io camunda zeebe logstreams closed stream processor controller broker streamprocessor error io camunda zeebe util actor resubmit wakeup io camunda zeebe stream impl streamprocessor asyncprocessingscheduleserviceactor error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream info io camunda zeebe logstreams close appender for log stream stream info io camunda zeebe logstreams impl log sequencer closing sequencer for writing error io camunda zeebe util actor stop waiting wakeup broker logappender error io camunda zeebe util actor actor broker logappender set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logappender error io camunda zeebe util actor start waiting broker logstream error io camunda zeebe util actor resubmit wakeup broker logappender error io camunda zeebe util actor resubmit wakeup broker logappender error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe util actor actor broker logstream set new state waking up from waiting error io camunda zeebe util actor resubmit wakeup broker logstream error io camunda zeebe util actor stop waiting wakeup broker logstream error io camunda zeebe 
util actor stop waiting wakeup broker logstream error io camunda zeebe util actor resubmit wakeup broker logstream info io camunda zeebe logstreams on closing logstream stream close readers error io camunda zeebe util actor resubmit wakeup broker logstream debug io camunda zeebe util actor closing actor thread ground zb fs workers debug io camunda zeebe util actor closing actor thread ground zb actors debug io camunda zeebe util actor closing actor thread ground zb fs workers closed successfully debug io camunda zeebe util actor closing actor thread ground zb actors closed successfully environment os zeebe version all configuration | 0 |
37,196 | 5,106,546,203 | IssuesEvent | 2017-01-05 11:51:32 | puikinsh/illdy | https://api.github.com/repos/puikinsh/illdy | closed | Widget - Menu issue | bug tested | ### The hover should work only on the category I make hover, as in styleguide.

---
### Styleguide:

| 1.0 | Widget - Menu issue - ### The hover should work only on the category I make hover, as in styleguide.

---
### Styleguide:

| test | widget menu issue the hover should work only on the category i make hover as in styleguide styleguide | 1 |
96,218 | 19,915,908,180 | IssuesEvent | 2022-01-25 22:37:42 | withfig/fig | https://api.github.com/repos/withfig/fig | opened | Prompt Loading Slower | bug codebase:shell_integrations performance | After installing Fig, a user's prompt starts loading slower than before. | 1.0 | Prompt Loading Slower - After installing Fig, a user's prompt starts loading slower than before. | non_test | prompt loading slower after installing fig a user s prompt starts loading slower than before | 0 |
171,527 | 13,236,928,341 | IssuesEvent | 2020-08-18 20:41:31 | QubesOS/updates-status | https://api.github.com/repos/QubesOS/updates-status | closed | core-agent-linux v4.1.14 (r4.1) | buggy r4.1-bullseye-cur-test r4.1-buster-cur-test r4.1-centos8-cur-test r4.1-fc29-cur-test r4.1-fc30-cur-test r4.1-fc31-cur-test r4.1-fc32-cur-test r4.1-stretch-cur-test | Update of core-agent-linux to v4.1.14 for Qubes r4.1, see comments below for details.
Built from: https://github.com/QubesOS/qubes-core-agent-linux/commit/5db43b95342904be67e702ab1977486f34962ac5
[Changes since previous version](https://github.com/QubesOS/qubes-core-agent-linux/compare/v4.1.12...v4.1.14):
QubesOS/qubes-core-agent-linux@5db43b9 version 4.1.14
QubesOS/qubes-core-agent-linux@a6c5e60 update-proxy-configs: handle Portage(Gentoo)
QubesOS/qubes-core-agent-linux@940b0f3 Do not use legacy distutils.spawn
QubesOS/qubes-core-agent-linux@39e07f9 version 4.1.13
QubesOS/qubes-core-agent-linux@587ac3b dnf: update for DNF 4+ API
QubesOS/qubes-core-agent-linux@3f728df Revert "Fix updates notification on Fedora 29"
QubesOS/qubes-core-agent-linux@630d94f Merge remote-tracking branch 'origin/pr/233'
QubesOS/qubes-core-agent-linux@8c3d181 debian: add 'rpm' as dependency
QubesOS/qubes-core-agent-linux@7049308 Use DNF instead of YUM if exists
QubesOS/qubes-core-agent-linux@6e724f7 fixed qubes.GetAppmenus ignoring some correct .desktop files
QubesOS/qubes-core-agent-linux@464f8f6 Merge remote-tracking branch 'origin/pr/231'
QubesOS/qubes-core-agent-linux@905b745 Merge remote-tracking branch 'origin/pr/230'
QubesOS/qubes-core-agent-linux@c12d9ce Fix missing dependency for managing Network-Manager in active user session
QubesOS/qubes-core-agent-linux@74a97b7 debian: conditional python version dependencies
Referenced issues:
QubesOS/qubes-issues#5836
QubesOS/qubes-issues#5692
If you're release manager, you can issue GPG-inline signed command:
* `Upload core-agent-linux 5db43b95342904be67e702ab1977486f34962ac5 r4.1 current repo` (available 7 days from now)
* `Upload core-agent-linux 5db43b95342904be67e702ab1977486f34962ac5 r4.1 current (dists) repo`, you can choose subset of distributions, like `vm-fc24 vm-fc25` (available 7 days from now)
* `Upload core-agent-linux 5db43b95342904be67e702ab1977486f34962ac5 r4.1 security-testing repo`
Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
| 8.0 | core-agent-linux v4.1.14 (r4.1) - Update of core-agent-linux to v4.1.14 for Qubes r4.1, see comments below for details.
Built from: https://github.com/QubesOS/qubes-core-agent-linux/commit/5db43b95342904be67e702ab1977486f34962ac5
[Changes since previous version](https://github.com/QubesOS/qubes-core-agent-linux/compare/v4.1.12...v4.1.14):
QubesOS/qubes-core-agent-linux@5db43b9 version 4.1.14
QubesOS/qubes-core-agent-linux@a6c5e60 update-proxy-configs: handle Portage(Gentoo)
QubesOS/qubes-core-agent-linux@940b0f3 Do not use legacy distutils.spawn
QubesOS/qubes-core-agent-linux@39e07f9 version 4.1.13
QubesOS/qubes-core-agent-linux@587ac3b dnf: update for DNF 4+ API
QubesOS/qubes-core-agent-linux@3f728df Revert "Fix updates notification on Fedora 29"
QubesOS/qubes-core-agent-linux@630d94f Merge remote-tracking branch 'origin/pr/233'
QubesOS/qubes-core-agent-linux@8c3d181 debian: add 'rpm' as dependency
QubesOS/qubes-core-agent-linux@7049308 Use DNF instead of YUM if exists
QubesOS/qubes-core-agent-linux@6e724f7 fixed qubes.GetAppmenus ignoring some correct .desktop files
QubesOS/qubes-core-agent-linux@464f8f6 Merge remote-tracking branch 'origin/pr/231'
QubesOS/qubes-core-agent-linux@905b745 Merge remote-tracking branch 'origin/pr/230'
QubesOS/qubes-core-agent-linux@c12d9ce Fix missing dependency for managing Network-Manager in active user session
QubesOS/qubes-core-agent-linux@74a97b7 debian: conditional python version dependencies
Referenced issues:
QubesOS/qubes-issues#5836
QubesOS/qubes-issues#5692
If you're release manager, you can issue GPG-inline signed command:
* `Upload core-agent-linux 5db43b95342904be67e702ab1977486f34962ac5 r4.1 current repo` (available 7 days from now)
* `Upload core-agent-linux 5db43b95342904be67e702ab1977486f34962ac5 r4.1 current (dists) repo`, you can choose subset of distributions, like `vm-fc24 vm-fc25` (available 7 days from now)
* `Upload core-agent-linux 5db43b95342904be67e702ab1977486f34962ac5 r4.1 security-testing repo`
Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
| test | core agent linux update of core agent linux to for qubes see comments below for details built from qubesos qubes core agent linux version qubesos qubes core agent linux update proxy configs handle portage gentoo qubesos qubes core agent linux do not use legacy distutils spawn qubesos qubes core agent linux version qubesos qubes core agent linux dnf update for dnf api qubesos qubes core agent linux revert fix updates notification on fedora qubesos qubes core agent linux merge remote tracking branch origin pr qubesos qubes core agent linux debian add rpm as dependency qubesos qubes core agent linux use dnf instead of yum if exists qubesos qubes core agent linux fixed qubes getappmenus ignoring some correct desktop files qubesos qubes core agent linux merge remote tracking branch origin pr qubesos qubes core agent linux merge remote tracking branch origin pr qubesos qubes core agent linux fix missing dependency for managing network manager in active user session qubesos qubes core agent linux debian conditional python version dependencies referenced issues qubesos qubes issues qubesos qubes issues if you re release manager you can issue gpg inline signed command upload core agent linux current repo available days from now upload core agent linux current dists repo you can choose subset of distributions like vm vm available days from now upload core agent linux security testing repo above commands will work only if packages in current testing repository were built from given commit i e no new version superseded it | 1 |
4,141 | 2,610,088,099 | IssuesEvent | 2015-02-26 18:26:43 | chrsmith/dsdsdaadf | https://api.github.com/repos/chrsmith/dsdsdaadf | opened | 深圳痤疮伤疤怎么治 | auto-migrated Priority-Medium Type-Defect | ```
深圳痤疮伤疤怎么治【深圳韩方科颜全国热线400-869-1818,24小
时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国��
�方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩�
��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”
健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专��
�治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的�
��痘。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:25 | 1.0 | 深圳痤疮伤疤怎么治 - ```
深圳痤疮伤疤怎么治【深圳韩方科颜全国热线400-869-1818,24小
时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国��
�方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩�
��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”
健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专��
�治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的�
��痘。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:25 | non_test | 深圳痤疮伤疤怎么治 深圳痤疮伤疤怎么治【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国�� �方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩� ��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹” 健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专�� �治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的� ��痘。 original issue reported on code google com by szft com on may at | 0 |
862 | 11,351,663,273 | IssuesEvent | 2020-01-24 11:47:13 | MicrosoftDocs/sql-docs | https://api.github.com/repos/MicrosoftDocs/sql-docs | closed | Column CPU returning rounded values? | Pri2 assigned-to-author doc-bug sql/prod supportability/tech | Data Column CPU appears to be returning values rounded to 1000.
Could you confirm this is intended behaviour please.
We are specifically looking where ERROR == 1
There is no mention of the typical 'returns microseconds, accurate to milliseconds' as in other BOL locations.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 207f4d6b-9d6e-1d6b-cd64-f30c107d2dd5
* Version Independent ID: 9a737c53-1a35-422d-a790-64cfc68ab307
* Content: [SQL:BatchCompleted Event Class - SQL Server](https://docs.microsoft.com/en-us/sql/relational-databases/event-classes/sql-batchcompleted-event-class?view=sql-server-ver15)
* Content Source: [docs/relational-databases/event-classes/sql-batchcompleted-event-class.md](https://github.com/MicrosoftDocs/sql-docs/blob/live/docs/relational-databases/event-classes/sql-batchcompleted-event-class.md)
* Product: **sql**
* Technology: **supportability**
* GitHub Login: @stevestein
* Microsoft Alias: **sstein** | True | Column CPU returning rounded values? - Data Column CPU appears to be returning values rounded to 1000.
Could you confirm this is intended behaviour please.
We are specifically looking where ERROR == 1
There is no mention of the typical 'returns microseconds, accurate to milliseconds' as in other BOL locations.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 207f4d6b-9d6e-1d6b-cd64-f30c107d2dd5
* Version Independent ID: 9a737c53-1a35-422d-a790-64cfc68ab307
* Content: [SQL:BatchCompleted Event Class - SQL Server](https://docs.microsoft.com/en-us/sql/relational-databases/event-classes/sql-batchcompleted-event-class?view=sql-server-ver15)
* Content Source: [docs/relational-databases/event-classes/sql-batchcompleted-event-class.md](https://github.com/MicrosoftDocs/sql-docs/blob/live/docs/relational-databases/event-classes/sql-batchcompleted-event-class.md)
* Product: **sql**
* Technology: **supportability**
* GitHub Login: @stevestein
* Microsoft Alias: **sstein** | non_test | column cpu returning rounded values data column cpu appears to be returning values rounded to could you confirm this is intended behaviour please we are specifically looking where error there is no mention of the typical returns microseconds accurate to milliseconds as in other bol locations document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product sql technology supportability github login stevestein microsoft alias sstein | 0 |
117,395 | 15,095,165,450 | IssuesEvent | 2021-02-07 09:55:49 | emilk/egui | https://api.github.com/repos/emilk/egui | closed | Disable widgets | design enhancement | There is currently no way to disable widgets (with few exceptions). One potential way to disable widgets would be like this:
``` rust
ui.enabled(false, |ui| {
ui.checkbox(...);
ui.add(Slider::f32(...));
});
```
where everything in the closure would be non-interactive (except perhaps for `on_hover_text` tooltips), and have a grayed out disabled look to them. | 1.0 | Disable widgets - There is currently no way to disable widgets (with few exceptions). One potential way to disable widgets would be like this:
``` rust
ui.enabled(false, |ui| {
ui.checkbox(...);
ui.add(Slider::f32(...));
});
```
where everything in the closure would be non-interactive (except perhaps for `on_hover_text` tooltips), and have a grayed out disabled look to them. | non_test | disable widgets there is currently no way to disable widgets with few exceptions one potential way to disable widgets would be like this rust ui enabled false ui ui checkbox ui add slider where everything in the closure would be non interactive except perhaps for on hover text tooltips and have a grayed out disabled look to them | 0 |
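The semantics the egui issue proposes — every widget created inside an `enabled(false, ...)` scope becomes non-interactive — can be illustrated language-neutrally. This is a hypothetical Python sketch, not egui code; the `Ui` class and method names are assumptions:

```python
class Ui:
    """Minimal sketch of a UI scope whose enabled flag propagates
    to every widget created inside it."""
    def __init__(self, enabled=True):
        self.enabled = enabled
        self.widgets = []          # (label, interactive?) pairs

    def enabled_scope(self, enabled, build):
        # A child scope is interactive only if both parent and child allow it.
        child = Ui(enabled=self.enabled and enabled)
        build(child)
        self.widgets.extend(child.widgets)

    def checkbox(self, label):
        self.widgets.append((label, self.enabled))

ui = Ui()
ui.checkbox("always on")
ui.enabled_scope(False, lambda ui: ui.checkbox("grayed out"))
print(ui.widgets)
```

The nested checkbox records `False` for interactivity, mirroring the "grayed out disabled look" the issue asks for.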
227,477 | 17,383,855,602 | IssuesEvent | 2021-08-01 08:21:43 | neulsom-EZY/EZY-server | https://api.github.com/repos/neulsom-EZY/EZY-server | opened | personal-plan/response-body: parts that look like they need improvement | documentation invalid | ## The GET request returns unnecessary information, as shown below.
<img width="700" alt="Screen Shot 2021-08-01 at 5 20 09 PM" src="https://user-images.githubusercontent.com/67095821/127764460-85923dd0-d2c9-4737-b64e-84448a32c42e.png">
| 1.0 | personal-plan/response-body: parts that look like they need improvement - ## The GET request returns unnecessary information, as shown below.
<img width="700" alt="Screen Shot 2021-08-01 at 5 20 09 PM" src="https://user-images.githubusercontent.com/67095821/127764460-85923dd0-d2c9-4737-b64e-84448a32c42e.png">
| non_test | personal plan response body parts that look like they need improvement the get request returns unnecessary information as shown below img width alt screen shot at pm src | 0
257,455 | 22,164,341,402 | IssuesEvent | 2022-06-05 01:15:28 | backend-br/vagas | https://api.github.com/repos/backend-br/vagas | closed | [Remote] Junior Fullstack Developer @Méliuz | CLT Python Remoto DevOps AWS Testes automatizados NoSQL CI GraphQL Rest React Native FullStack Stale | ## Job description
We are a global brand, certified in the GPTW ranking and committed to creating products, services, and partnerships aligned with our Culture. We extend that commitment to our collaborators, investing in the development and satisfaction of the Meliantez (the people who work at Méliuz).
We are proud of where we came from and want to tell the world where we have arrived - and where we are going.
We have big dreams and challenges, and we know that to go ever further we need to grow our big family. We are looking to hire incredible people who will help us keep improving our microservices-oriented architecture and build ever more scalable applications. Today our products are accessed by millions of people, and we have many projects to make them even more interesting for our users and partners. In the Financial Services area of Engineering, we are growing fast and developing new products to reach new horizons! Our squads are responsible for our payment solutions, anti-fraud, and much more!
Want to see your work impact the lives of more than 22 million users, work from anywhere in Brazil with flexible hours, and enjoy other exclusive benefits?
Find out more details about this position and be part of our story!
**RESPONSIBILITIES AND DUTIES**
Collaborate with our engineering, product, and business teams to build our products;
Take a leading role in the development and evolution of our technologies as a fullstack developer;
Take part in designing scalable solutions and architectures to handle large volumes of access and data traffic;
Take part in projects and implementations of application data persistence, ensuring that access is secure and efficient;
Engage genuinely in problem solving;
Work across all of our frontend platforms (App, Extension, and Website), our BFFs, and backend services.
## Location
Remote
## Requirements
**Required:**
- Knowledge of languages and technologies such as Python and [Node.js](http://node.js/) for the backend and React/React Native on the frontend;
- Knowledge of algorithms and data structures;
- Experience working with relational and NoSQL databases;
**Nice to have:**
- Experience developing APIs with REST, gRPC, or GraphQL;
- Knowledge of event-driven architecture and microservices;
- Experience with automated testing and CI/CD environments;
- Knowledge of or experience with cloud environments (AWS, Oracle, Azure, GCP)
- Experience with system monitoring tools;
- Knowledge of DevOps concepts;
- A continuous-improvement mindset, always looking to incrementally improve our applications, services, and products.
## Benefits
🥗 Meal / food voucher (R$60.00 per business day);
🤩 Health plan;
😁 Dental plan;
⏰ Flexible hours;
💰 Bonus for reaching collective goals;
💵 Profit sharing (PLR)
📚 Training subsidy
🏊♀ Sesc partnership;
🤑 Double online cashback;
🤰 Extended maternity/paternity leave;
💻 Monthly remote-work allowance;
🪑 Loan / reimbursement for office chair and desk
✝ Bereavement assistance.
## Employment type
CLT
## How to apply
Click [HERE ](https://meliuz.gupy.io/jobs/1580342)to apply for this position.
#### Location
- Remote
#### Employment regime
- CLT
#### Level
- Junior
| 1.0 | [Remote] Junior Fullstack Developer @Méliuz - ## Job description
We are a global brand, certified in the GPTW ranking and committed to creating products, services, and partnerships aligned with our Culture. We extend that commitment to our collaborators, investing in the development and satisfaction of the Meliantez (the people who work at Méliuz).
We are proud of where we came from and want to tell the world where we have arrived - and where we are going.
We have big dreams and challenges, and we know that to go ever further we need to grow our big family. We are looking to hire incredible people who will help us keep improving our microservices-oriented architecture and build ever more scalable applications. Today our products are accessed by millions of people, and we have many projects to make them even more interesting for our users and partners. In the Financial Services area of Engineering, we are growing fast and developing new products to reach new horizons! Our squads are responsible for our payment solutions, anti-fraud, and much more!
Want to see your work impact the lives of more than 22 million users, work from anywhere in Brazil with flexible hours, and enjoy other exclusive benefits?
Find out more details about this position and be part of our story!
**RESPONSIBILITIES AND DUTIES**
Collaborate with our engineering, product, and business teams to build our products;
Take a leading role in the development and evolution of our technologies as a fullstack developer;
Take part in designing scalable solutions and architectures to handle large volumes of access and data traffic;
Take part in projects and implementations of application data persistence, ensuring that access is secure and efficient;
Engage genuinely in problem solving;
Work across all of our frontend platforms (App, Extension, and Website), our BFFs, and backend services.
## Location
Remote
## Requirements
**Required:**
- Knowledge of languages and technologies such as Python and [Node.js](http://node.js/) for the backend and React/React Native on the frontend;
- Knowledge of algorithms and data structures;
- Experience working with relational and NoSQL databases;
**Nice to have:**
- Experience developing APIs with REST, gRPC, or GraphQL;
- Knowledge of event-driven architecture and microservices;
- Experience with automated testing and CI/CD environments;
- Knowledge of or experience with cloud environments (AWS, Oracle, Azure, GCP)
- Experience with system monitoring tools;
- Knowledge of DevOps concepts;
- A continuous-improvement mindset, always looking to incrementally improve our applications, services, and products.
## Benefits
🥗 Meal / food voucher (R$60.00 per business day);
🤩 Health plan;
😁 Dental plan;
⏰ Flexible hours;
💰 Bonus for reaching collective goals;
💵 Profit sharing (PLR)
📚 Training subsidy
🏊♀ Sesc partnership;
🤑 Double online cashback;
🤰 Extended maternity/paternity leave;
💻 Monthly remote-work allowance;
🪑 Loan / reimbursement for office chair and desk
✝ Bereavement assistance.
## Employment type
CLT
## How to apply
Click [HERE ](https://meliuz.gupy.io/jobs/1580342)to apply for this position.
#### Location
- Remote
#### Employment regime
- CLT
#### Level
- Junior
| test | remote junior fullstack developer méliuz job description we are a global brand certified in the gptw ranking and committed to creating products services and partnerships aligned with our culture we extend that commitment to our collaborators investing in the development and satisfaction of the meliantez the people who work at méliuz we are proud of where we came from and want to tell the world where we have arrived and where we are going we have big dreams and challenges and we know that to go ever further we need to grow our big family we are looking to hire incredible people who will help us keep improving our microservices oriented architecture and build ever more scalable applications today our products are accessed by millions of people and we have many projects to make them even more interesting for our users and partners in the financial services area of engineering we are growing fast and developing new products to reach new horizons our squads are responsible for our payment solutions anti fraud and much more want to see your work impact the lives of more than million users work from anywhere in brazil with flexible hours and enjoy other exclusive benefits find out more details about this position and be part of our story responsibilities and duties collaborate with our engineering product and business teams to build our products take a leading role in the development and evolution of our technologies as a fullstack developer take part in designing scalable solutions and architectures to handle large volumes of access and data traffic take part in projects and implementations of application data persistence ensuring that access is secure and efficient engage genuinely in problem solving work across all of our frontend platforms app extension and website our bffs and backend services location remote requirements 
required knowledge of languages and technologies such as python and for the backend and react react native on the frontend knowledge of algorithms and data structures experience working with relational and nosql databases nice to have experience developing apis with rest grpc or graphql knowledge of event driven architecture and microservices experience with automated testing and ci cd environments knowledge of or experience with cloud environments aws oracle azure gcp experience with system monitoring tools knowledge of devops concepts a continuous improvement mindset always looking to incrementally improve our applications services and products benefits 🥗 meal food voucher r per business day 🤩 health plan 😁 dental plan ⏰ flexible hours 💰 bonus for reaching collective goals 💵 profit sharing plr 📚 training subsidy 🏊♀ sesc partnership 🤑 double online cashback 🤰 extended maternity paternity leave 💻 monthly remote work allowance 🪑 loan reimbursement for office chair and desk ✝ bereavement assistance employment type clt how to apply click to apply for this position location remote employment regime clt level junior | 1
157,360 | 12,371,218,085 | IssuesEvent | 2020-05-18 18:10:19 | rancher/dashboard | https://api.github.com/repos/rancher/dashboard | closed | "Clone as yaml" for a secret fails with "Internal Server Error" | [zube]: To Test area/secret kind/bug | Version: master-head(7b948b289)
Steps:
1. Create a secret "testnew" "by choosing type **"registry"** option Or by choosing **"opaque"** option
2. Clone the secret using "Clone as Yaml" option
The Yaml file is displayed as below, change the name to "testnew4"
```
apiVersion: v1
data:
.dockerconfigjson: eyJhdXRocyI6eyJyZWcuY29tIjp7InVzZXJuYW1lIjoicmVnMTIiLCJwYXNzd29yZCI6InZhbDEyIn19fQ==
kind: Secret
metadata:
creationTimestamp: "2020-05-05T23:44:57Z"
name: testnew4
namespace: default
resourceVersion: "56505"
selfLink: /api/v1/namespaces/default/secrets/regnew3
uid: ca5ce609-7bf5-4885-90e1-8ea6f2c1c76a
type: kubernetes.io/dockerconfigjson
```
3. Hit "Create" button.
Result
It fails with the error below:
```
{ "type": "error", "links": {}, "code": "", "message": "resourceVersion should not be set on objects to be created", "status": 500 }
```
Note: If we manually remove the "resourceVersion" from the yaml file, the clone succeeds.
| 1.0 | "Clone as yaml" for a secret fails with "Internal Server Error" - Version: master-head(7b948b289)
Steps:
1. Create a secret "testnew" "by choosing type **"registry"** option Or by choosing **"opaque"** option
2. Clone the secret using "Clone as Yaml" option
The Yaml file is displayed as below, change the name to "testnew4"
```
apiVersion: v1
data:
.dockerconfigjson: eyJhdXRocyI6eyJyZWcuY29tIjp7InVzZXJuYW1lIjoicmVnMTIiLCJwYXNzd29yZCI6InZhbDEyIn19fQ==
kind: Secret
metadata:
creationTimestamp: "2020-05-05T23:44:57Z"
name: testnew4
namespace: default
resourceVersion: "56505"
selfLink: /api/v1/namespaces/default/secrets/regnew3
uid: ca5ce609-7bf5-4885-90e1-8ea6f2c1c76a
type: kubernetes.io/dockerconfigjson
```
3. Hit "Create" button.
Result
It fails with the error below:
```
{ "type": "error", "links": {}, "code": "", "message": "resourceVersion should not be set on objects to be created", "status": 500 }
```
Note: If we manually remove the "resourceVersion" from the yaml file, the clone succeeds.
| test | clone as yaml for a secret fails with internal server error version master head steps create a secret testnew by choosing type registry option or by choosing opaque option clone the secret using clone as yaml option the yaml file is displayed as below change the name to apiversion data dockerconfigjson kind secret metadata creationtimestamp name namespace default resourceversion selflink api namespaces default secrets uid type kubernetes io dockerconfigjson hit create button result it fails with the error below type error links code message resourceversion should not be set on objects to be created status note if we manually remove the resourceversion from the yaml file the clone succeeds | 1 |
245,890 | 20,809,865,665 | IssuesEvent | 2022-03-18 00:29:27 | cypress-io/cypress | https://api.github.com/repos/cypress-io/cypress | closed | [@cypress/react] window.location.replace() method call in a component breaks cypress-ct runner | type: bug component testing npm: @cypress/react | ### Current behavior
When the React component under test uses the `window.location.replace()` method, all following cases in the current and other describe/context blocks don't work. It seems that the href is not restored after the tests end.
### Desired behavior
The environment should be restored after the test ends.
### Test code to reproduce
Please see the following repo: https://github.com/denis-domanskii/cypress-react-template .
Run `yarn & yarn cypress open-ct` to run the example.
`App` component has a button, which calls `window.location.replace()` on click:
https://github.com/denis-domanskii/cypress-react-template/blob/master/src/App.tsx#L31
The component test mounts the component and clicks the button. Href assertion works well, but the following `it` block fails (see the screenshot):
https://github.com/denis-domanskii/cypress-react-template/blob/master/src/App.spec.tsx#L12

### Versions
```
"@cypress/react": "^5.9.1,
"@cypress/webpack-dev-server": "^1.4.0",
"cypress": "^7.6.0",
``` | 1.0 | [@cypress/react] window.location.replace() method call in a component breaks cypress-ct runner - ### Current behavior
When the React component under test uses the `window.location.replace()` method, all following cases in the current and other describe/context blocks don't work. It seems that the href is not restored after the tests end.
### Desired behavior
The environment should be restored after the test ends.
### Test code to reproduce
Please see the following repo: https://github.com/denis-domanskii/cypress-react-template .
Run `yarn & yarn cypress open-ct` to run the example.
`App` component has a button, which calls `window.location.replace()` on click:
https://github.com/denis-domanskii/cypress-react-template/blob/master/src/App.tsx#L31
The component test mounts the component and clicks the button. Href assertion works well, but the following `it` block fails (see the screenshot):
https://github.com/denis-domanskii/cypress-react-template/blob/master/src/App.spec.tsx#L12

### Versions
```
"@cypress/react": "^5.9.1,
"@cypress/webpack-dev-server": "^1.4.0",
"cypress": "^7.6.0",
``` | test | window location replace method call in a component breaks cypress ct runner current behavior when the react component under test use window location replace method all following cases in current and other describe context blocks don t work seems that the href is not restored after the tests end desired behavior the environment should be restored after the test ends test code to reproduce please see the following repo run yarn yarn cypress open ct to run the example app component has a button which calls window location replace on click the component test mounts the component and clicks the button href assertion works well but the following it block fails see the screenshot versions cypress react cypress webpack dev server cypress | 1 |
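The failure pattern in this record — one test mutating shared global state (`window.location`) and later tests inheriting it — is not Cypress-specific. A hypothetical Python sketch of the snapshot/restore discipline a test runner needs around such state:

```python
import contextlib

# Stand-in for mutable global state such as window.location.
state = {"href": "http://localhost/"}

@contextlib.contextmanager
def restored(state_dict):
    """Snapshot mutable shared state and restore it after the test,
    so one test's navigation cannot leak into the next."""
    snapshot = dict(state_dict)
    try:
        yield state_dict
    finally:
        state_dict.clear()
        state_dict.update(snapshot)

with restored(state):
    state["href"] = "http://localhost/sub-page"   # the "test" navigates away
print(state["href"])
```

After the scope exits, the href is back to its snapshot value — the restoration step the issue reports as missing.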
283,680 | 21,327,580,585 | IssuesEvent | 2022-04-18 02:18:43 | hashicorp/terraform-provider-azurerm | https://api.github.com/repos/hashicorp/terraform-provider-azurerm | closed | Error on the example usage of azurerm_mysql_server in the documentation | documentation service/mysql | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Community Note
<!--- Please keep this note for the community --->
* Please vote on this issue by adding a :thumbsup: [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
Hello,
In the azurerm_mysql_server and azurerm_mysql_database documentation pages, there is the line `infrastructure_encryption_enabled = true`, making the apply fail.
This is due to the fact that the sku used in the example is `sku_name = "B_Gen5_2"` and the infrastructure encryption is not supported for sku Tier "Basic" for Server as shown in the error :
> Error: `infrastructure_encryption_enabled` is not supported for sku Tier `Basic` for Server: (Name "example-mysqlserver" / Resource Group "example-resources")
on main.tf line 10, in resource "azurerm_mysql_server" "example":
10: resource "azurerm_mysql_server" "example" {
### New or Affected Resource(s)/Data Source(s)
azurerm_mysql_server
### Potential Terraform Configuration
The line should then be `infrastructure_encryption_enabled = false` to work. Another solution is also to change the sku Tier.
### References
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/mysql_server
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/mysql_database | 1.0 | Error on the example usage of azurerm_mysql_server in the documentation - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Community Note
<!--- Please keep this note for the community --->
* Please vote on this issue by adding a :thumbsup: [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
Hello,
In the azurerm_mysql_server and azurerm_mysql_database documentation pages, there is the line `infrastructure_encryption_enabled = true`, making the apply fail.
This is due to the fact that the sku used in the example is `sku_name = "B_Gen5_2"` and the infrastructure encryption is not supported for sku Tier "Basic" for Server as shown in the error :
> Error: `infrastructure_encryption_enabled` is not supported for sku Tier `Basic` for Server: (Name "example-mysqlserver" / Resource Group "example-resources")
on main.tf line 10, in resource "azurerm_mysql_server" "example":
10: resource "azurerm_mysql_server" "example" {
### New or Affected Resource(s)/Data Source(s)
azurerm_mysql_server
### Potential Terraform Configuration
The line should then be `infrastructure_encryption_enabled = false` to work. Another solution is also to change the sku Tier.
### References
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/mysql_server
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/mysql_database | non_test | error on the example usage of azurerm mysql server in the documentation is there an existing issue for this i have searched the existing issues community note please vote on this issue by adding a thumbsup to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description hello in the azurerm mysql server and azurerm mysql database documentation pages there is the line infrastructure encryption enabled true making the apply fail this is due to the fact that the sku used in the example is sku name b and the infrastructure encryption is not supported for sku tier basic for server as shown in the error error infrastructure encryption enabled is not supported for sku tier basic for server name example mysqlserver resource group example resources on main tf line in resource azurerm mysql server example resource azurerm mysql server example new or affected resource s data source s azurerm mysql server potential terraform configuration the line should then be infrastructure encryption enabled false to work another solution is also to change the sku tier references | 0 |
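The provider check quoted in the error above can be illustrated with a hypothetical validator. The real rule lives inside the AzureRM provider; here the only assumption is that sku names starting with `B_` (like `B_Gen5_2` in the docs example) denote the Basic tier:

```python
def validate_mysql_server(sku_name: str, infrastructure_encryption_enabled: bool):
    """Reject the combination the provider rejects: infrastructure
    encryption on a Basic-tier sku (sku names starting with 'B_')."""
    errors = []
    if infrastructure_encryption_enabled and sku_name.startswith("B_"):
        errors.append("`infrastructure_encryption_enabled` is not supported "
                      "for sku Tier `Basic`")
    return errors

print(validate_mysql_server("B_Gen5_2", True))    # fails, as in the docs example
print(validate_mysql_server("B_Gen5_2", False))   # fix 1: disable the option
print(validate_mysql_server("GP_Gen5_2", True))   # fix 2: use a non-Basic tier
```

Both fixes proposed in the issue (setting the option to `false`, or changing the sku tier) make the validator pass.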
222,457 | 17,440,958,047 | IssuesEvent | 2021-08-05 04:37:36 | Submitty/Submitty | https://api.github.com/repos/Submitty/Submitty | closed | Initial test suite for Lichen Plagiarism | Lichen Plagiarism Detection Testing / Continuous Integration (CI) | In the lichen repo, make a tests top level folder
Inside have files:
tests/submissions/python/student_a/1/foo.py
tests/submissions/python/student_a/2/foo.py
tests/submissions/python/student_b/1/foo.py
<etc>
~10 'submissions' for each of our supported languages
With clear examples of common, unique, and matching (plagiarism)
And then make a directory for the expected output for each
tests/results/python/ranking/...
tests/results/python/concatenated/..
<etc>
And then a simple script to re-run the plagiarism scripts (no php/view, just files) and compare/diff with the expected output, file-by-file. The script passes if everything matches. Then we can hook this up as a travis regression test on the Lichen repository. | 1.0 | Initial test suite for Lichen Plagiarism - In the lichen repo, make a tests top level folder
Inside have files:
tests/submissions/python/student_a/1/foo.py
tests/submissions/python/student_a/2/foo.py
tests/submissions/python/student_b/1/foo.py
<etc>
~10 'submissions' for each of our supported languages
With clear examples of common, unique, and matching (plagiarism)
And then make a directory for the expected output for each
tests/results/python/ranking/...
tests/results/python/concatenated/..
<etc>
And then a simple script to re-run the plagiarism scripts (no php/view, just files) and compare/diff with the expected output, file-by-file. The script passes if everything matches. Then we can hook this up as a travis regression test on the Lichen repository. | test | initial test suite for lichen plagiarism in the lichen repo make a tests top level folder inside have files tests submissions python student a foo py tests submissions python student a foo py tests submissions python student b foo py submissions for each of our supported languages with clear examples of common unique and matching plagiarism and then make a directory for the expected output for each tests results python ranking tests results python concatenated and then a simple script to re run the plagiarism scripts no php view just files and compare diff with the expected output file by file the script passes if everything matches then we can hook this up as a travis regression test on the lichen repository | 1 |
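The comparison script sketched in the issue (re-run the plagiarism scripts, then diff the actual output against `tests/results/...` file-by-file) could look roughly like this in Python. The directory layout follows the issue; everything else is an assumption:

```python
import pathlib
import tempfile

def diff_dirs(expected: pathlib.Path, actual: pathlib.Path):
    """Compare two result trees file-by-file; return a list of problems.
    An empty list means the regression test passes."""
    expected_files = {p.relative_to(expected) for p in expected.rglob("*") if p.is_file()}
    actual_files = {p.relative_to(actual) for p in actual.rglob("*") if p.is_file()}
    problems = [f"missing: {m}" for m in sorted(expected_files - actual_files)]
    problems += [f"unexpected: {e}" for e in sorted(actual_files - expected_files)]
    for common in sorted(expected_files & actual_files):
        if (expected / common).read_bytes() != (actual / common).read_bytes():
            problems.append(f"differs: {common}")
    return problems

# Tiny demo: one expected ranking file whose re-run output differs.
with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    for sub in ("expected/ranking", "actual/ranking"):
        (root / sub).mkdir(parents=True)
    (root / "expected/ranking/python.txt").write_text("student_a student_b\n")
    (root / "actual/ranking/python.txt").write_text("student_a student_c\n")
    demo_problems = diff_dirs(root / "expected", root / "actual")
print(demo_problems)
```

Wiring `diff_dirs` to the real `tests/results/<language>/` trees and exiting non-zero on a non-empty list would give the CI pass/fail signal the issue asks for.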
90,348 | 8,234,180,631 | IssuesEvent | 2018-09-08 11:15:23 | humera987/HumTestData | https://api.github.com/repos/humera987/HumTestData | closed | project_test : ApiV1TestSuitesIdTestSuiteSearchGetAuthInvalid | project_test | Project : project_test
Job : UAT
Env : UAT
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 200
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 05 Sep 2018 11:18:30 GMT]}
Endpoint : http://13.56.210.25/api/v1/test-suites/5I87ZVFt/test-suite/search
Request :
Response :
{
"requestId" : "None",
"requestTime" : "2018-09-05T11:18:31.588+0000",
"errors" : false,
"messages" : [ ],
"data" : [ ],
"totalPages" : 0,
"totalElements" : 0
}
Logs :
Assertion [@StatusCode != 200] failed, not expecting [200] but found [200]
--- FX Bot --- | 1.0 | project_test : ApiV1TestSuitesIdTestSuiteSearchGetAuthInvalid - Project : project_test
Job : UAT
Env : UAT
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 200
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 05 Sep 2018 11:18:30 GMT]}
Endpoint : http://13.56.210.25/api/v1/test-suites/5I87ZVFt/test-suite/search
Request :
Response :
{
"requestId" : "None",
"requestTime" : "2018-09-05T11:18:31.588+0000",
"errors" : false,
"messages" : [ ],
"data" : [ ],
"totalPages" : 0,
"totalElements" : 0
}
Logs :
Assertion [@StatusCode != 200] failed, not expecting [200] but found [200]
--- FX Bot --- | test | project test project project test job uat env uat region fxlabs us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options content type transfer encoding date endpoint request response requestid none requesttime errors false messages data totalpages totalelements logs assertion failed not expecting but found fx bot | 1 |
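The log line in the record above shows an assertion of the form `@StatusCode != 200` failing because the endpoint returned 200. FX Bot's real assertion grammar is not shown in the issue, so the following Python sketch of how such a check evaluates is purely illustrative:

```python
import operator

OPS = {"!=": operator.ne, "==": operator.eq}

def evaluate(assertion: str, status_code: int):
    """Evaluate a tiny '@StatusCode <op> <value>' assertion and return
    (passed, message), phrasing the message like the failure log."""
    _, op, value = assertion.split()
    passed = OPS[op](status_code, int(value))
    message = (None if passed else
               f"Assertion [{assertion}] failed, not expecting [{value}] "
               f"but found [{status_code}]")
    return passed, message

print(evaluate("@StatusCode != 200", 200))  # the failure in this report
print(evaluate("@StatusCode != 200", 401))  # what the test expected to see
```

The test expected any non-200 status (an auth failure), so the 200 response is itself the anomaly the assertion caught.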
5,825 | 8,664,126,363 | IssuesEvent | 2018-11-28 19:16:23 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | I think child_process.fork() should officially support { detached: true } | child_process feature request good first issue | <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: v9.2.1
* **Platform**: Linux ip-172-31-29-251 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
* **Subsystem**: child_process
<!-- Enter your issue details below this comment. -->
`child_process.spawn()` supports an option [`detached`](https://nodejs.org/api/child_process.html#child_process_options_detached) which "makes it possible for the child process to continue running after the parent exits." `child_process.fork()` does not officially support `detached` as one of its options (by "officially support", I mean that it is not documented as a valid option); but I think it should be officially supported.
My reasons:
1. It works today. (details below)
2. It's useful. (details below)
3. I can't think of any reason that it shouldn't be supported.
If you agree, then no code changes would be required, but the documentation for `child_process.fork()` would need to list `detached` as a valid option. (Also, TypeScript's `@types/node` would need to be updated, but that would probably be a separate GitHub issue somewhere else.)
**1. It works today:** First of all, if you look at the [current source](https://github.com/nodejs/node/blob/b1e6c0d44c075d8d3fee6c60fc92b90876700a30/lib/child_process.js#L54) for `child_process.fork()`, it's clear (and not surprising) that `fork()` is just a simple wrapper around `spawn()`. It passes most options through unchanged.
To prove that `detached` works with `fork()`: save this as demo.js:
```js
// launch with "node demo.js" or "node demo.js detached"
const child_process = require('child_process')
if (process.argv.indexOf('--daemon') === -1) {
let options = {};
if (process.argv.indexOf('detached') >= 0) {
options.detached = true;
}
const child = child_process.fork(__filename, ['--daemon'], options);
console.log('hello from parent; press ^C to terminate parent')
process.stdin.read()
} else {
console.log(`hello from child, my pid is ${process.pid}`)
setInterval(() => {}, 5000)
}
```
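As a cross-language aside (Python, not Node.js, and POSIX-only), the `detached` option maps to the same underlying mechanism — starting the child in its own session via `setsid` — that Python's `subprocess` exposes as `start_new_session`:

```python
import os
import subprocess
import sys

# Spawn a child in a new session, the POSIX analogue of Node's
# { detached: true }; the child reports its own session id.
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.getsid(0))"],
    capture_output=True, text=True, start_new_session=True,
)
child_sid = int(child.stdout)
parent_sid = os.getsid(0)
print(parent_sid, child_sid)  # the two session ids differ
```

Because the child owns a separate session, it is not terminated along with the parent's process group — the same property the `detached` demo above exercises with ^C.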
To see the NON-detached behavior, launch it with `node demo.js`. It will call `fork()`, so there are now two instances running. Then press ^C; if you do `ps aux | grep demo.js` you will see that both instances terminated.
To see the detached behavior, repeat the above but with `node demo.js detached`. In this case, after ^C, the child process is still running.
**2. It's useful:** `child_process.fork()` can be useful for starting daemon processes, and `detached` is certainly useful for daemons. | 1.0 | I think child_process.fork() should officially support { detached: true } - <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: v9.2.1
* **Platform**: Linux ip-172-31-29-251 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
* **Subsystem**: child_process
<!-- Enter your issue details below this comment. -->
`child_process.spawn()` supports an option [`detached`](https://nodejs.org/api/child_process.html#child_process_options_detached) which "makes it possible for the child process to continue running after the parent exits." `child_process.fork()` does not officially support `detached` as one of its options (by "officially support", I mean that it is not documented as a valid option); but I think it should be officially supported.
My reasons:
1. It works today. (details below)
2. It's useful. (details below)
3. I can't think of any reason that it shouldn't be supported.
If you agree, then no code changes would be required, but the documentation for `child_process.fork()` would need to list `detached` as a valid option. (Also, TypeScript's `@types/node` would need to be updated, but that would probably be a separate GitHub issue somewhere else.)
**1. It works today:** First of all, if you look at the [current source](https://github.com/nodejs/node/blob/b1e6c0d44c075d8d3fee6c60fc92b90876700a30/lib/child_process.js#L54) for `child_process.fork()`, it's clear (and not surprising) that `fork()` is just a simple wrapper around `spawn()`. It passes most options through unchanged.
To prove that `detached` works with `fork()`: save this as demo.js:
```js
// launch with "node demo.js" or "node demo.js detached"
const child_process = require('child_process')
if (process.argv.indexOf('--daemon') === -1) {
let options = {};
if (process.argv.indexOf('detached') >= 0) {
options.detached = true;
}
const child = child_process.fork(__filename, ['--daemon'], options);
console.log('hello from parent; press ^C to terminate parent')
process.stdin.read()
} else {
console.log(`hello from child, my pid is ${process.pid}`)
setInterval(() => {}, 5000)
}
```
To see the NON-detached behavior, launch it with `node demo.js`. It will call `fork()`, so there are now two instances running. Then press ^C; if you do `ps aux | grep demo.js` you will see that both instances terminated.
To see the detached behavior, repeat the above but with `node demo.js detached`. In this case, after ^C, the child process is still running.
**2. It's useful:** `child_process.fork()` can be useful for starting daemon processes, and `detached` is certainly useful for daemons. | non_test | i think child process fork should officially support detached true thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform linux ip generic ubuntu smp wed oct utc gnu linux subsystem child process child process spawn supports an option which makes it possible for the child process to continue running after the parent exits child process fork does not officially support detached as one of its options by officially support i mean that it is not documented as a valid option but i think it should be officially supported my reasons it works today details below it s useful details below i can t think of any reason that it shouldn t be supported if you agree then no code changes would be required but the documentation for child process fork would need to list detached as a valid option also typescript s types node would need to be updated but that would probably be a separate github issue somewhere else it works today first of all if you look at the for child process fork it s clear and not surprising that fork is just a simple wrapper around spawn it passes most options through unchanged to prove that detached works with fork save this as demo js js launch with node demo js or node demo js detached const child process require child process if process argv indexof daemon let options if process argv indexof detached options detached true const child child 
process fork filename options console log hello from parent press c to terminate parent process stdin read else console log hello from child my pid is process pid setinterval to see the non detached behavior launch it with node demo js it will call fork so there are now two instances running then press c if you do ps aux grep demo js you will see that both instances terminated to see the detached behavior repeat the above but with node demo js detached in this case after c the child process is still running it s useful child process fork can be useful for starting daemon processes and detached is certainly useful for daemons | 0 |
79,219 | 7,698,866,161 | IssuesEvent | 2018-05-19 04:28:15 | GoogleCloudPlatform/forseti-security | https://api.github.com/repos/GoogleCloudPlatform/forseti-security | closed | Delete Inventory: TypeError: '1526624940511156' has type str, but expected one of: int, long | release-testing: 2.0 RC3 | ```
henry_henrychang_mygbiz_com@forseti-client-vm-4330:~$ forseti inventory delete 1526624940511156
```
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/forseti_security-2.0.0-py2.7.egg/google/cloud/forseti/services/cli.py", line 1182, in <module>
main(sys.argv[1:], ENV_CONFIG)
File "/usr/local/lib/python2.7/dist-packages/forseti_security-2.0.0-py2.7.egg/google/cloud/forseti/services/cli.py", line 1155, in main
services[config.service](client, config, output, config_env)
File "/usr/local/lib/python2.7/dist-packages/forseti_security-2.0.0-py2.7.egg/google/cloud/forseti/services/cli.py", line 864, in run_inventory
actions[config.action]()
File "/usr/local/lib/python2.7/dist-packages/forseti_security-2.0.0-py2.7.egg/google/cloud/forseti/services/cli.py", line 849, in do_delete_inventory
result = client.delete(config.id)
File "/usr/local/lib/python2.7/dist-packages/forseti_security-2.0.0-py2.7.egg/google/cloud/forseti/services/client.py", line 404, in delete
id=inventory_index_id)
TypeError: '1526624940511156' has type str, but expected one of: int, long
``` | 1.0 | Delete Inventory: TypeError: '1526624940511156' has type str, but expected one of: int, long - ```
henry_henrychang_mygbiz_com@forseti-client-vm-4330:~$ forseti inventory delete 1526624940511156
```
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/forseti_security-2.0.0-py2.7.egg/google/cloud/forseti/services/cli.py", line 1182, in <module>
main(sys.argv[1:], ENV_CONFIG)
File "/usr/local/lib/python2.7/dist-packages/forseti_security-2.0.0-py2.7.egg/google/cloud/forseti/services/cli.py", line 1155, in main
services[config.service](client, config, output, config_env)
File "/usr/local/lib/python2.7/dist-packages/forseti_security-2.0.0-py2.7.egg/google/cloud/forseti/services/cli.py", line 864, in run_inventory
actions[config.action]()
File "/usr/local/lib/python2.7/dist-packages/forseti_security-2.0.0-py2.7.egg/google/cloud/forseti/services/cli.py", line 849, in do_delete_inventory
result = client.delete(config.id)
File "/usr/local/lib/python2.7/dist-packages/forseti_security-2.0.0-py2.7.egg/google/cloud/forseti/services/client.py", line 404, in delete
id=inventory_index_id)
TypeError: '1526624940511156' has type str, but expected one of: int, long
``` | test | delete inventory typeerror has type str but expected one of int long henry henrychang mygbiz com forseti client vm forseti inventory delete traceback most recent call last file usr local lib dist packages forseti security egg google cloud forseti services cli py line in main sys argv env config file usr local lib dist packages forseti security egg google cloud forseti services cli py line in main services client config output config env file usr local lib dist packages forseti security egg google cloud forseti services cli py line in run inventory actions file usr local lib dist packages forseti security egg google cloud forseti services cli py line in do delete inventory result client delete config id file usr local lib dist packages forseti security egg google cloud forseti services client py line in delete id inventory index id typeerror has type str but expected one of int long | 1 |
133,884 | 5,216,109,715 | IssuesEvent | 2017-01-26 09:08:56 | rism-ch/muscat | https://api.github.com/repos/rism-ch/muscat | closed | Invalid dates when searching | Bug First priority Solr indexing Source | In the main sources page, when searching created_at or updated_at, the system expects a full date. If you just pass the year, the search fails. Add a fix or validaiton | 1.0 | Invalid dates when searching - In the main sources page, when searching created_at or updated_at, the system expects a full date. If you just pass the year, the search fails. Add a fix or validaiton | non_test | invalid dates when searching in the main sources page when searching created at or updated at the system expects a full date if you just pass the year the search fails add a fix or validaiton | 0 |
123,706 | 10,280,045,523 | IssuesEvent | 2019-08-26 03:17:16 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | Module created through `ballerina openapi` doesn't contain Module.md file and test directory | Area/Tooling BetaTesting Component/ToolSwagger Priority/High Type/Improvement | When you create a new module through `ballerina add`, the directory hierarchy created looks something like below:
```
.
├── main.bal
├── Module.md
├── resources
└── tests
├── main_test.bal
└── resources
```
The directory hierarchy created when generating a service through the `openapi` command looks like the following:
```
.
├── fileservice.bal
├── resources
│ └── file-server.yaml
└── schema.bal
```
Shouldn't both commands create the same hierarchy?
| 1.0 | Module created through `ballerina openapi` doesn't contain Module.md file and test directory - When you create a new module through `ballerina add`, the directory hierarchy created looks something like below:
```
.
├── main.bal
├── Module.md
├── resources
└── tests
├── main_test.bal
└── resources
```
The directory hierarchy created when generating a service through the `openapi` command looks like the following:
```
.
├── fileservice.bal
├── resources
│ └── file-server.yaml
└── schema.bal
```
Shouldn't both commands create the same hierarchy?
| test | module created through ballerina openapi doesn t contain module md file and test directory when you create a new module through ballerina add the directory hierarchy created looks something like below ├── main bal ├── module md ├── resources └── tests ├── main test bal └── resources the directory hierarchy created when generating a service through the openapi command looks like the following ├── fileservice bal ├── resources │ └── file server yaml └── schema bal shouldn t both commands create the same hierarchy | 1 |
56,990 | 6,535,917,841 | IssuesEvent | 2017-08-31 16:07:42 | learn-co-curriculum/js-object-oriented-constructor-functions-readme | https://api.github.com/repos/learn-co-curriculum/js-object-oriented-constructor-functions-readme | closed | "we can create as many Puppies as we want" and then code example only shows one puppy being made | Test | And of course we can create as many objects we want with our constructor function.
```
function Puppy(name, age, color, size) {
this.name = name
this.age = age
this.color = color
this.size = size
}
let snoopy = new Puppy('snoopy', 3, 'white', 'medium')
// {name: 'snoopy', age: 3, color: 'white', size: 'medium'}
```
Maybe create two or three puppies in this code snippet | 1.0 | "we can create as many Puppies as we want" and then code example only shows one puppy being made - And of course we can create as many objects we want with our constructor function.
```
function Puppy(name, age, color, size) {
this.name = name
this.age = age
this.color = color
this.size = size
}
let snoopy = new Puppy('snoopy', 3, 'white', 'medium')
// {name: 'snoopy', age: 3, color: 'white', size: 'medium'}
```
Maybe create two or three puppies in this code snippet | test | we can create as many puppies as we want and then code example only shows one puppy being made and of course we can create as many objects we want with our constructor function function puppy name age color size this name name this age age this color color this size size let snoopy new puppy snoopy white medium name snoopy age color white size medium maybe create two or three puppies in this code snippet | 1 |
37,953 | 4,863,387,923 | IssuesEvent | 2016-11-14 15:20:36 | blockstack/blockstack.org | https://api.github.com/repos/blockstack/blockstack.org | opened | design support section | copywriting design production | @larrysalibra, @muneeb-ali have identified that there is a strong need for a well-rounded **support** section on blockstack.org
Here is a list of requirements
- [ ] how blockstack works
- [ ] faq | 1.0 | design support section - @larrysalibra, @muneeb-ali have identified that there is a strong need for a well-rounded **support** section on blockstack.org
Here is a list of requirements
- [ ] how blockstack works
- [ ] faq | non_test | design support section larrysalibra muneeb ali have identified that there is a strong need for a well rounded support section on blockstack org here is a list of requirements how blockstack works faq | 0 |
311,128 | 26,769,710,210 | IssuesEvent | 2023-01-31 13:17:29 | void-linux/void-packages | https://api.github.com/repos/void-linux/void-packages | opened | crawl-tiles & pipewire | bug needs-testing | ### Is this a new report?
Yes
### System Info
Void 6.1.8_1 x86_64 GenuineIntel uptodate rFF
### Package(s) Affected
acrawl-tiles-0.29.1_1
### Does a report exist for this bug with the project's home (upstream) and/or another distro?
https://github.com/crawl/crawl/issues/2954
### Expected behaviour
just launch the game
### Actual behaviour
it freezes
### Steps to reproduce
launch the `crawl-tiles` | 1.0 | crawl-tiles & pipewire - ### Is this a new report?
Yes
### System Info
Void 6.1.8_1 x86_64 GenuineIntel uptodate rFF
### Package(s) Affected
acrawl-tiles-0.29.1_1
### Does a report exist for this bug with the project's home (upstream) and/or another distro?
https://github.com/crawl/crawl/issues/2954
### Expected behaviour
just launch the game
### Actual behaviour
it freezes
### Steps to reproduce
launch the `crawl-tiles` | test | crawl tiles pipewire is this a new report yes system info void genuineintel uptodate rff package s affected acrawl tiles does a report exist for this bug with the project s home upstream and or another distro expected behaviour just launch the game actual behaviour it freezes steps to reproduce launch the crawl tiles | 1 |
40,517 | 5,300,650,320 | IssuesEvent | 2017-02-10 06:12:05 | TEAMMATES/teammates | https://api.github.com/repos/TEAMMATES/teammates | closed | Bug report: AllAccessControlUiTests failure in production server due to wrong password | a-Testing c.Bug | **Environment (dev/staging/live, version)**
`master` branch since 6299590
**Steps to Reproduce**
Run `AllAccessControlUiTests.java` in production server.
**Expected Behaviour**
The test passes.
**Actual Behaviour**
The test fails at the following sub-cases:
- `testStudentHome()`
- `testStudentAccessToAdminPages()`
**Cause**
`TEST_STUDENT1_PASSWORD` wrongly entered as `TEST_STUDENT2_ACCOUNT` in two lines.
| 1.0 | Bug report: AllAccessControlUiTests failure in production server due to wrong password - **Environment (dev/staging/live, version)**
`master` branch since 6299590
**Steps to Reproduce**
Run `AllAccessControlUiTests.java` in production server.
**Expected Behaviour**
The test passes.
**Actual Behaviour**
The test fails at the following sub-cases:
- `testStudentHome()`
- `testStudentAccessToAdminPages()`
**Cause**
`TEST_STUDENT1_PASSWORD` wrongly entered as `TEST_STUDENT2_ACCOUNT` in two lines.
| test | bug report allaccesscontroluitests failure in production server due to wrong password environment dev staging live version master branch since steps to reproduce run allaccesscontroluitests java in production server expected behaviour the test passes actual behaviour the test fails at the following sub cases teststudenthome teststudentaccesstoadminpages cause test password wrongly entered as test account in two lines | 1 |
240,190 | 20,015,799,515 | IssuesEvent | 2022-02-01 11:56:52 | Oldes/Rebol-issues | https://api.github.com/repos/Oldes/Rebol-issues | closed | SHIFT left or right by N bits | Test.written Type.wish Datatype: integer! CC.resolved | _Submitted by:_ **Carl**
Need a shift operator for integer values.
---
<sup>**Imported from:** **[CureCode](https://www.curecode.org/rebol3/ticket.rsp?id=1125)** [ Version: alpha 75 Type: Wish Platform: All Category: Native Reproduce: Always Fixed-in:alpha 76 ]</sup>
<sup>**Imported from**: https://github.com/rebol/rebol-issues/issues/1125</sup>
Comments:
---
---
> **Rebolbot** added the **Type.wish** on Jan 12, 2016
--- | 1.0 | SHIFT left or right by N bits - _Submitted by:_ **Carl**
Need a shift operator for integer values.
---
<sup>**Imported from:** **[CureCode](https://www.curecode.org/rebol3/ticket.rsp?id=1125)** [ Version: alpha 75 Type: Wish Platform: All Category: Native Reproduce: Always Fixed-in:alpha 76 ]</sup>
<sup>**Imported from**: https://github.com/rebol/rebol-issues/issues/1125</sup>
Comments:
---
---
> **Rebolbot** added the **Type.wish** on Jan 12, 2016
--- | test | shift left or right by n bits submitted by carl need a shift operator for integer values imported from imported from comments rebolbot added the type wish on jan | 1 |
262,983 | 23,029,015,908 | IssuesEvent | 2022-07-22 12:11:08 | Scille/parsec-cloud | https://api.github.com/repos/Scille/parsec-cloud | closed | Inconsistent timeout tests/test_cli.py::test_gui_with_diagnose_option | gui inconsistent testing | ERROR: type should be string, got "https://dev.azure.com/Scille/parsec/_build/results?buildId=11003&view=logs&j=5ba45810-c0e9-5996-abe7-b2cb0c9b6baa&t=b18150ab-3a6d-5272-0b08-a0dc06a2acd4&l=27\r\n\r\n```\r\n\r\n2022-05-09T20:16:22.1088063Z tests/test_cli.py::test_gui_with_diagnose_option[Standard environement] FAILED [ 0%]\r\n2022-05-09T20:16:22.1091469Z \r\n2022-05-09T20:16:22.1092270Z ================================== FAILURES ===================================\r\n2022-05-09T20:16:22.1092988Z ____________ test_gui_with_diagnose_option[Standard environement] _____________\r\n2022-05-09T20:16:22.1093374Z \r\n2022-05-09T20:16:22.1093748Z env = {}\r\n2022-05-09T20:16:22.1093959Z \r\n2022-05-09T20:16:22.1094353Z @pytest.mark.gui\r\n2022-05-09T20:16:22.1094796Z @pytest.mark.slow\r\n2022-05-09T20:16:22.1095249Z @pytest.mark.parametrize(\r\n2022-05-09T20:16:22.1095792Z \"env\",\r\n2022-05-09T20:16:22.1096143Z [\r\n2022-05-09T20:16:22.1096566Z pytest.param({}, id=\"Standard environement\"),\r\n2022-05-09T20:16:22.1097210Z pytest.param(\r\n2022-05-09T20:16:22.1097684Z {\"WINFSP_LIBRARY_PATH\": \"nope\"},\r\n2022-05-09T20:16:22.1098289Z id=\"Wrong winfsp library path\",\r\n2022-05-09T20:16:22.1098812Z marks=pytest.mark.skipif(sys.platform != \"win32\", reason=\"Windows only\"),\r\n2022-05-09T20:16:22.1099423Z ),\r\n2022-05-09T20:16:22.1099808Z pytest.param(\r\n2022-05-09T20:16:22.1100241Z {\"WINFSP_DEBUG_PATH\": \"nope\"},\r\n2022-05-09T20:16:22.1100703Z id=\"Wrong winfsp binary path\",\r\n2022-05-09T20:16:22.1101209Z marks=pytest.mark.skipif(sys.platform != \"win32\", reason=\"Windows only\"),\r\n2022-05-09T20:16:22.1101711Z ),\r\n2022-05-09T20:16:22.1102062Z ],\r\n2022-05-09T20:16:22.1102408Z )\r\n2022-05-09T20:16:22.1102788Z def 
test_gui_with_diagnose_option(env):\r\n2022-05-09T20:16:22.1103313Z > _run(f\"core gui --diagnose\", env=env, capture=False)\r\n2022-05-09T20:16:22.1103604Z \r\n2022-05-09T20:16:22.1104002Z tests\\test_cli.py:652: \r\n2022-05-09T20:16:22.1104505Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n2022-05-09T20:16:22.1106696Z tests\\test_cli.py:134: in _run\r\n2022-05-09T20:16:22.1107311Z ret = subprocess.run(cooked_cmd, cwd=CWD, env=env, timeout=timeout, **kwargs)\r\n2022-05-09T20:16:22.1107998Z c:\\hostedtoolcache\\windows\\python\\3.7.9\\x64\\lib\\subprocess.py:490: in run\r\n2022-05-09T20:16:22.1108642Z stdout, stderr = process.communicate(input, timeout=timeout)\r\n2022-05-09T20:16:22.1109405Z c:\\hostedtoolcache\\windows\\python\\3.7.9\\x64\\lib\\subprocess.py:983: in communicate\r\n2022-05-09T20:16:22.1110045Z sts = self.wait(timeout=self._remaining_time(endtime))\r\n2022-05-09T20:16:22.1110717Z c:\\hostedtoolcache\\windows\\python\\3.7.9\\x64\\lib\\subprocess.py:1019: in wait\r\n2022-05-09T20:16:22.1111186Z return self._wait(timeout=timeout)\r\n2022-05-09T20:16:22.1111639Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n2022-05-09T20:16:22.1112044Z \r\n2022-05-09T20:16:22.1112469Z self = <subprocess.Popen object at 0x00000227DE4B85C8>, timeout = 30.0\r\n2022-05-09T20:16:22.1112785Z \r\n2022-05-09T20:16:22.1113094Z def _wait(self, timeout):\r\n2022-05-09T20:16:22.1113521Z \"\"\"Internal implementation of wait() on Windows.\"\"\"\r\n2022-05-09T20:16:22.1113964Z if timeout is None:\r\n2022-05-09T20:16:22.1114356Z timeout_millis = _winapi.INFINITE\r\n2022-05-09T20:16:22.1114687Z else:\r\n2022-05-09T20:16:22.1115063Z timeout_millis = int(timeout * 1000)\r\n2022-05-09T20:16:22.1115471Z if self.returncode is None:\r\n2022-05-09T20:16:22.1115909Z # API note: Returns immediately if timeout_millis == 0.\r\n2022-05-09T20:16:22.1116357Z result = 
_winapi.WaitForSingleObject(self._handle,\r\n2022-05-09T20:16:22.1116798Z timeout_millis)\r\n2022-05-09T20:16:22.1117216Z if result == _winapi.WAIT_TIMEOUT:\r\n2022-05-09T20:16:22.1117695Z > raise TimeoutExpired(self.args, timeout)\r\n2022-05-09T20:16:22.1118254Z E subprocess.TimeoutExpired: Command '['python', '-m', 'parsec.cli', 'core', 'gui', '--diagnose']' timed out after 30.0 seconds\r\n2022-05-09T20:16:22.1118676Z \r\n2022-05-09T20:16:22.1119237Z c:\\hostedtoolcache\\windows\\python\\3.7.9\\x64\\lib\\subprocess.py:1261: TimeoutExpired\r\n2022-05-09T20:16:22.1119832Z ---------------------------- Captured stdout call -----------------------------\r\n2022-05-09T20:16:22.1120335Z ========= RUN core gui --diagnose ==============\r\n2022-05-09T20:16:22.1120836Z -------------- generated xml file: D:\\a\\1\\s\\test-results-gui.xml --------------\r\n2022-05-09T20:16:22.1121381Z ============================ slowest 10 durations =============================\r\n2022-05-09T20:16:22.1121958Z 30.02s call tests/test_cli.py::test_gui_with_diagnose_option[Standard environement]\r\n2022-05-09T20:16:22.1122284Z \r\n2022-05-09T20:16:22.1122650Z (2 durations < 0.005s hidden. Use -vv to show these durations.)\r\n2022-05-09T20:16:22.1123179Z =========================== short test summary info ===========================\r\n2022-05-09T20:16:22.1123930Z FAILED tests/test_cli.py::test_gui_with_diagnose_option[Standard environement]\r\n```" | 1.0 | Inconsistent timeout tests/test_cli.py::test_gui_with_diagnose_option - https://dev.azure.com/Scille/parsec/_build/results?buildId=11003&view=logs&j=5ba45810-c0e9-5996-abe7-b2cb0c9b6baa&t=b18150ab-3a6d-5272-0b08-a0dc06a2acd4&l=27
```
2022-05-09T20:16:22.1088063Z tests/test_cli.py::test_gui_with_diagnose_option[Standard environement] FAILED [ 0%]
2022-05-09T20:16:22.1091469Z
2022-05-09T20:16:22.1092270Z ================================== FAILURES ===================================
2022-05-09T20:16:22.1092988Z ____________ test_gui_with_diagnose_option[Standard environement] _____________
2022-05-09T20:16:22.1093374Z
2022-05-09T20:16:22.1093748Z env = {}
2022-05-09T20:16:22.1093959Z
2022-05-09T20:16:22.1094353Z @pytest.mark.gui
2022-05-09T20:16:22.1094796Z @pytest.mark.slow
2022-05-09T20:16:22.1095249Z @pytest.mark.parametrize(
2022-05-09T20:16:22.1095792Z "env",
2022-05-09T20:16:22.1096143Z [
2022-05-09T20:16:22.1096566Z pytest.param({}, id="Standard environement"),
2022-05-09T20:16:22.1097210Z pytest.param(
2022-05-09T20:16:22.1097684Z {"WINFSP_LIBRARY_PATH": "nope"},
2022-05-09T20:16:22.1098289Z id="Wrong winfsp library path",
2022-05-09T20:16:22.1098812Z marks=pytest.mark.skipif(sys.platform != "win32", reason="Windows only"),
2022-05-09T20:16:22.1099423Z ),
2022-05-09T20:16:22.1099808Z pytest.param(
2022-05-09T20:16:22.1100241Z {"WINFSP_DEBUG_PATH": "nope"},
2022-05-09T20:16:22.1100703Z id="Wrong winfsp binary path",
2022-05-09T20:16:22.1101209Z marks=pytest.mark.skipif(sys.platform != "win32", reason="Windows only"),
2022-05-09T20:16:22.1101711Z ),
2022-05-09T20:16:22.1102062Z ],
2022-05-09T20:16:22.1102408Z )
2022-05-09T20:16:22.1102788Z def test_gui_with_diagnose_option(env):
2022-05-09T20:16:22.1103313Z > _run(f"core gui --diagnose", env=env, capture=False)
2022-05-09T20:16:22.1103604Z
2022-05-09T20:16:22.1104002Z tests\test_cli.py:652:
2022-05-09T20:16:22.1104505Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2022-05-09T20:16:22.1106696Z tests\test_cli.py:134: in _run
2022-05-09T20:16:22.1107311Z ret = subprocess.run(cooked_cmd, cwd=CWD, env=env, timeout=timeout, **kwargs)
2022-05-09T20:16:22.1107998Z c:\hostedtoolcache\windows\python\3.7.9\x64\lib\subprocess.py:490: in run
2022-05-09T20:16:22.1108642Z stdout, stderr = process.communicate(input, timeout=timeout)
2022-05-09T20:16:22.1109405Z c:\hostedtoolcache\windows\python\3.7.9\x64\lib\subprocess.py:983: in communicate
2022-05-09T20:16:22.1110045Z sts = self.wait(timeout=self._remaining_time(endtime))
2022-05-09T20:16:22.1110717Z c:\hostedtoolcache\windows\python\3.7.9\x64\lib\subprocess.py:1019: in wait
2022-05-09T20:16:22.1111186Z return self._wait(timeout=timeout)
2022-05-09T20:16:22.1111639Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2022-05-09T20:16:22.1112044Z
2022-05-09T20:16:22.1112469Z self = <subprocess.Popen object at 0x00000227DE4B85C8>, timeout = 30.0
2022-05-09T20:16:22.1112785Z
2022-05-09T20:16:22.1113094Z def _wait(self, timeout):
2022-05-09T20:16:22.1113521Z """Internal implementation of wait() on Windows."""
2022-05-09T20:16:22.1113964Z if timeout is None:
2022-05-09T20:16:22.1114356Z timeout_millis = _winapi.INFINITE
2022-05-09T20:16:22.1114687Z else:
2022-05-09T20:16:22.1115063Z timeout_millis = int(timeout * 1000)
2022-05-09T20:16:22.1115471Z if self.returncode is None:
2022-05-09T20:16:22.1115909Z # API note: Returns immediately if timeout_millis == 0.
2022-05-09T20:16:22.1116357Z result = _winapi.WaitForSingleObject(self._handle,
2022-05-09T20:16:22.1116798Z timeout_millis)
2022-05-09T20:16:22.1117216Z if result == _winapi.WAIT_TIMEOUT:
2022-05-09T20:16:22.1117695Z > raise TimeoutExpired(self.args, timeout)
2022-05-09T20:16:22.1118254Z E subprocess.TimeoutExpired: Command '['python', '-m', 'parsec.cli', 'core', 'gui', '--diagnose']' timed out after 30.0 seconds
2022-05-09T20:16:22.1118676Z
2022-05-09T20:16:22.1119237Z c:\hostedtoolcache\windows\python\3.7.9\x64\lib\subprocess.py:1261: TimeoutExpired
2022-05-09T20:16:22.1119832Z ---------------------------- Captured stdout call -----------------------------
2022-05-09T20:16:22.1120335Z ========= RUN core gui --diagnose ==============
2022-05-09T20:16:22.1120836Z -------------- generated xml file: D:\a\1\s\test-results-gui.xml --------------
2022-05-09T20:16:22.1121381Z ============================ slowest 10 durations =============================
2022-05-09T20:16:22.1121958Z 30.02s call tests/test_cli.py::test_gui_with_diagnose_option[Standard environement]
2022-05-09T20:16:22.1122284Z
2022-05-09T20:16:22.1122650Z (2 durations < 0.005s hidden. Use -vv to show these durations.)
2022-05-09T20:16:22.1123179Z =========================== short test summary info ===========================
2022-05-09T20:16:22.1123930Z FAILED tests/test_cli.py::test_gui_with_diagnose_option[Standard environement]
``` | test | inconsistent timeout tests test cli py test gui with diagnose option tests test cli py test gui with diagnose option failed failures test gui with diagnose option env pytest mark gui pytest mark slow pytest mark parametrize env pytest param id standard environement pytest param winfsp library path nope id wrong winfsp library path marks pytest mark skipif sys platform reason windows only pytest param winfsp debug path nope id wrong winfsp binary path marks pytest mark skipif sys platform reason windows only def test gui with diagnose option env run f core gui diagnose env env capture false tests test cli py tests test cli py in run ret subprocess run cooked cmd cwd cwd env env timeout timeout kwargs c hostedtoolcache windows python lib subprocess py in run stdout stderr process communicate input timeout timeout c hostedtoolcache windows python lib subprocess py in communicate sts self wait timeout self remaining time endtime c hostedtoolcache windows python lib subprocess py in wait return self wait timeout timeout self timeout def wait self timeout internal implementation of wait on windows if timeout is none timeout millis winapi infinite else timeout millis int timeout if self returncode is none api note returns immediately if timeout millis result winapi waitforsingleobject self handle timeout millis if result winapi wait timeout raise timeoutexpired self args timeout e subprocess timeoutexpired command timed out after seconds c hostedtoolcache windows python lib subprocess py timeoutexpired captured stdout call run core gui diagnose generated xml file d a s test results gui xml slowest durations call tests test cli py test gui with diagnose option durations hidden use vv to show these durations short test summary info failed tests test cli py test gui with diagnose option | 1 |
291,954 | 25,187,953,776 | IssuesEvent | 2022-11-11 20:11:04 | rust-lang/rust | https://api.github.com/repos/rust-lang/rust | closed | error message when unit test type does not implement `Termination` is ungreat | C-enhancement A-attributes A-diagnostics T-compiler A-libtest D-papercut | Edit: outstanding work is to change the span from pointing at the function body to point at the return type.
https://github.com/rust-lang/rust/pull/50272 adds a test for the case where a unit test returns a value that does not implement `Termination`. The message currently talks about `main` and has an ugly multi-line span:
```
error[E0277]: `main` has invalid return type `std::result::Result<f32, std::num::ParseIntError>`
--> $DIR/termination-trait-test-wrong-type.rs:18:1
|
LL | / fn can_parse_zero_as_f32() -> Result<f32, ParseIntError> { //~ ERROR
LL | | "0".parse()
LL | | }
| |_^ `main` can only return types that implement `std::process::Termination`
|
= help: the trait `std::process::Termination` is not implemented for `std::result::Result<f32, std::num::ParseIntError>`
= note: required by `__test::test::assert_test_result`
error: aborting due to previous error
For more information about this error, try `rustc --explain E0277`.
```
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"fmease"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | 1.0 | error message when unit test type does not implement `Termination` is ungreat - Edit: outstanding work is to change the span from pointing at the function body to point at the return type.
https://github.com/rust-lang/rust/pull/50272 adds a test for the case where a unit test returns a value that does not implement `Termination`. The message currently talks about `main` and has an ugly multi-line span:
```
error[E0277]: `main` has invalid return type `std::result::Result<f32, std::num::ParseIntError>`
--> $DIR/termination-trait-test-wrong-type.rs:18:1
|
LL | / fn can_parse_zero_as_f32() -> Result<f32, ParseIntError> { //~ ERROR
LL | | "0".parse()
LL | | }
| |_^ `main` can only return types that implement `std::process::Termination`
|
= help: the trait `std::process::Termination` is not implemented for `std::result::Result<f32, std::num::ParseIntError>`
= note: required by `__test::test::assert_test_result`
error: aborting due to previous error
For more information about this error, try `rustc --explain E0277`.
```
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"fmease"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | test | error message when unit test type does not implement termination is ungreat edit outstanding work is to change the span from pointing at the function body to point at the return type adds a test for the case where a unit test returns a value that does not implement termination the message currently talks about main and has an ugly multi line span error main has invalid return type std result result dir termination trait test wrong type rs ll fn can parse zero as result error ll parse ll main can only return types that implement std process termination help the trait std process termination is not implemented for std result result note required by test test assert test result error aborting due to previous error for more information about this error try rustc explain | 1 |
301,963 | 26,113,058,638 | IssuesEvent | 2022-12-27 23:53:00 | littlewhitecloud/CustomTkinterTitlebar | https://api.github.com/repos/littlewhitecloud/CustomTkinterTitlebar | reopened | Can't maxsize correctly | bug enhancement help wanted invalid question need more test | 
关闭按钮仅显示一半。我不知道为什么会这样?
The close button only shows half. I don't know why it will happen? | 1.0 | Can't maxsize correctly - 
关闭按钮仅显示一半。我不知道为什么会这样?
The close button only shows half. I don't know why it will happen? | test | can t maxsize correctly 关闭按钮仅显示一半。我不知道为什么会这样? the close button only shows half i don t know why it will happen | 1 |
316,079 | 27,134,873,939 | IssuesEvent | 2023-02-16 12:30:27 | mantidproject/mantidimaging | https://api.github.com/repos/mantidproject/mantidimaging | closed | Create Test File for Spectrum Widget | Type: Improvement Quality: Unit Testing | ### Desired Behaviour
There is now a substantial amount of logic contained within `spectrum_widget.py` to the point where I would like to see this tested.
Create a new test script and tests for methods within `spectrum_widget.py`
### Current Behaviour
`spectrum_widget.py` is largely untested, only tested indirectly through tests within `presenter_test.py` and `model_test.py`.
#### Where?
* `mantidimaging/gui/windows/spectrum_viewer/spectrum_widget.py`
* `mantidimaging/gui/windows/spectrum_viewer/test/spectrum_widget_test.py`
| 1.0 | Create Test File for Spectrum Widget - ### Desired Behaviour
There is now a substantial amount of logic contained within `spectrum_widget.py` to the point where I would like to see this tested.
Create a new test script and tests for methods within `spectrum_widget.py`
### Current Behaviour
`spectrum_widget.py` is largely untested, only tested indirectly through tests within `presenter_test.py` and `model_test.py`.
#### Where?
* `mantidimaging/gui/windows/spectrum_viewer/spectrum_widget.py`
* `mantidimaging/gui/windows/spectrum_viewer/test/spectrum_widget_test.py`
| test | create test file for spectrum widget desired behaviour there is now a substantial amount of logic contained within spectrum widget py to the point where i would like to see this tested create a new test script and tests for methods within spectrum widget py current behaviour spectrum widget py is largely untested only tested indirectly through tests within presenter test py and model test py where mantidimaging gui windows spectrum viewer spectrum widget py mantidimaging gui windows spectrum viewer test spectrum widget test py | 1 |
285,379 | 21,516,592,821 | IssuesEvent | 2022-04-28 10:27:02 | DefectDojo/django-DefectDojo | https://api.github.com/repos/DefectDojo/django-DefectDojo | closed | Unclear Documentation in Findings | documentation stale | Unable to understand the below things in the DefectDojo
ℹ️ Touch FIndings
ℹ️ Manage Files
ℹ️ Make Finding a Template
ℹ️ Create a CWE Remediation Template
ℹ️ Apply Template to Findings
ℹ️ Finding Templates
ℹ️ ADD New Findings Template
ℹ️ Download Finding Template
ℹ️ Upload Threat model
ℹ️ Add Test Strategy | 1.0 | Unclear Documentation in Findings - Unable to understand the below things in the DefectDojo
ℹ️ Touch FIndings
ℹ️ Manage Files
ℹ️ Make Finding a Template
ℹ️ Create a CWE Remediation Template
ℹ️ Apply Template to Findings
ℹ️ Finding Templates
ℹ️ ADD New Findings Template
ℹ️ Download Finding Template
ℹ️ Upload Threat model
ℹ️ Add Test Strategy | non_test | unclear documentation in findings unable to understand the below things in the defectdojo ℹ️ touch findings ℹ️ manage files ℹ️ make finding a template ℹ️ create a cwe remediation template ℹ️ apply template to findings ℹ️ finding templates ℹ️ add new findings template ℹ️ download finding template ℹ️ upload threat model ℹ️ add test strategy | 0 |
315,954 | 9,634,261,894 | IssuesEvent | 2019-05-15 20:46:42 | DlfinBroom/ChatBot | https://api.github.com/repos/DlfinBroom/ChatBot | closed | Finish the Add, Edit, and Delete pages | Finish Priority: Medium View/Page | all of these views need to work, and do what there name implies | 1.0 | Finish the Add, Edit, and Delete pages - all of these views need to work, and do what there name implies | non_test | finish the add edit and delete pages all of these views need to work and do what there name implies | 0 |
22,056 | 14,974,781,290 | IssuesEvent | 2021-01-28 04:28:24 | sam20908/matrixpp | https://api.github.com/repos/sam20908/matrixpp | closed | Merge website code and library into the same branch | high priority infrastructure website work in progress | Separate branches have been causing painful problems. | 1.0 | Merge website code and library into the same branch - Separate branches have been causing painful problems. | non_test | merge website code and library into the same branch separate branches have been causing painful problems | 0 |
451,041 | 32,005,237,007 | IssuesEvent | 2023-09-21 14:29:21 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | closed | Include all imports in workspace groups documentation examples | Documentation Maintenance | The examples given in the [workspace groups documentation](http://docs.mantidproject.org/concepts/WorkspaceGroup.html) seem to assume that a user will already have the imports that are included by default in the Mantid python script editor. Consequently a number of the examples do not work if run in a completely blank script editor page. The examples in the documentation should include all imports that are required for them to complete successfully. | 1.0 | Include all imports in workspace groups documentation examples - The examples given in the [workspace groups documentation](http://docs.mantidproject.org/concepts/WorkspaceGroup.html) seem to assume that a user will already have the imports that are included by default in the Mantid python script editor. Consequently a number of the examples do not work if run in a completely blank script editor page. The examples in the documentation should include all imports that are required for them to complete successfully. | non_test | include all imports in workspace groups documentation examples the examples given in the seem to assume that a user will already have the imports that are included by default in the mantid python script editor consequently a number of the examples do not work if run in a completely blank script editor page the examples in the documentation should include all imports that are required for them to complete successfully | 0 |
331,591 | 29,044,383,814 | IssuesEvent | 2023-05-13 11:13:25 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Failing test: Chrome X-Pack UI Functional Tests with ES SSL - Cases - group 2.x-pack/test/functional_with_es_ssl/apps/cases/group2/attachment_framework·ts - Cases Attachment framework Persistable state attachments "before all" hook for "renders a persistable attachment type correctly" | failed-test Team:ResponseOps | A test failed on a tracked branch
```
Error: expected 200 "OK", got 400 "Bad Request"
at Test._assertStatus (node_modules/supertest/lib/test.js:268:12)
at Test._assertFunction (node_modules/supertest/lib/test.js:283:11)
at Test.assert (node_modules/supertest/lib/test.js:173:18)
at localAssert (node_modules/supertest/lib/test.js:131:12)
at /home/buildkite-agent/builds/kb-n2-4-spot-e9a94b5d04746cd0/elastic/kibana-on-merge/kibana/node_modules/supertest/lib/test.js:128:5
at Test.Request.callback (node_modules/superagent/lib/node/index.js:728:3)
at /home/buildkite-agent/builds/kb-n2-4-spot-e9a94b5d04746cd0/elastic/kibana-on-merge/kibana/node_modules/superagent/lib/node/index.js:916:18
at IncomingMessage.<anonymous> (node_modules/superagent/lib/node/parsers/json.js:19:7)
at IncomingMessage.emit (node:events:525:35)
at endReadableNT (node:internal/streams/readable:1358:12)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
```
First failure: [CI Build - 8.8](https://buildkite.com/elastic/kibana-on-merge/builds/30378#0188118d-dd7d-4e4d-82f7-e3671c0a0bb0)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests with ES SSL - Cases - group 2.x-pack/test/functional_with_es_ssl/apps/cases/group2/attachment_framework·ts","test.name":"Cases Attachment framework Persistable state attachments \"before all\" hook for \"renders a persistable attachment type correctly\"","test.failCount":1}} --> | 1.0 | Failing test: Chrome X-Pack UI Functional Tests with ES SSL - Cases - group 2.x-pack/test/functional_with_es_ssl/apps/cases/group2/attachment_framework·ts - Cases Attachment framework Persistable state attachments "before all" hook for "renders a persistable attachment type correctly" - A test failed on a tracked branch
```
Error: expected 200 "OK", got 400 "Bad Request"
at Test._assertStatus (node_modules/supertest/lib/test.js:268:12)
at Test._assertFunction (node_modules/supertest/lib/test.js:283:11)
at Test.assert (node_modules/supertest/lib/test.js:173:18)
at localAssert (node_modules/supertest/lib/test.js:131:12)
at /home/buildkite-agent/builds/kb-n2-4-spot-e9a94b5d04746cd0/elastic/kibana-on-merge/kibana/node_modules/supertest/lib/test.js:128:5
at Test.Request.callback (node_modules/superagent/lib/node/index.js:728:3)
at /home/buildkite-agent/builds/kb-n2-4-spot-e9a94b5d04746cd0/elastic/kibana-on-merge/kibana/node_modules/superagent/lib/node/index.js:916:18
at IncomingMessage.<anonymous> (node_modules/superagent/lib/node/parsers/json.js:19:7)
at IncomingMessage.emit (node:events:525:35)
at endReadableNT (node:internal/streams/readable:1358:12)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
```
First failure: [CI Build - 8.8](https://buildkite.com/elastic/kibana-on-merge/builds/30378#0188118d-dd7d-4e4d-82f7-e3671c0a0bb0)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests with ES SSL - Cases - group 2.x-pack/test/functional_with_es_ssl/apps/cases/group2/attachment_framework·ts","test.name":"Cases Attachment framework Persistable state attachments \"before all\" hook for \"renders a persistable attachment type correctly\"","test.failCount":1}} --> | test | failing test chrome x pack ui functional tests with es ssl cases group x pack test functional with es ssl apps cases attachment framework·ts cases attachment framework persistable state attachments before all hook for renders a persistable attachment type correctly a test failed on a tracked branch error expected ok got bad request at test assertstatus node modules supertest lib test js at test assertfunction node modules supertest lib test js at test assert node modules supertest lib test js at localassert node modules supertest lib test js at home buildkite agent builds kb spot elastic kibana on merge kibana node modules supertest lib test js at test request callback node modules superagent lib node index js at home buildkite agent builds kb spot elastic kibana on merge kibana node modules superagent lib node index js at incomingmessage node modules superagent lib node parsers json js at incomingmessage emit node events at endreadablent node internal streams readable at processticksandrejections node internal process task queues first failure | 1 |
522,555 | 15,161,929,420 | IssuesEvent | 2021-02-12 09:50:16 | magento/magento2 | https://api.github.com/repos/magento/magento2 | closed | Product tier price not getting calculated properly when MSRP is applied | Component: Catalog Component: Msrp Issue: Clear Description Issue: Confirmed Issue: Format is valid Issue: Ready for Work Priority: P3 Progress: ready for dev Reported on 2.3.3 Reproduced on 2.4.x Severity: S3 Triage: Done stale issue | <!---
Please review our guidelines before adding a new issue: https://github.com/magento/magento2/wiki/Issue-reporting-guidelines
Fields marked with (*) are required. Please don't remove the template.
-->
### Preconditions (*)
<!---
Provide the exact Magento version (example: 2.3.2) and any important information on the environment where bug is reproducible.
-->
1. Magento 2.4-develop
2. Also in Magento 2.3.3
### Steps to reproduce (*)
<!---
Important: Provide a set of clear steps to reproduce this bug. We can not provide support without clear instructions on how to reproduce.
-->
1. Enable MSRP in
Store -> Configuration -> Sales -> Minimum Advertised Price
Set Display Actual price on gesture
2. Create a new simple product. Set it's price, quantity and weight(optional)
- Price=100$
3. Go to advanced pricing and add a special price, add 2 or more tier pricing of your choice(All groups is preferred)
- 
4. Save the product
### Expected result (*)
<!--- Tell us what do you expect to happen. -->
1. Exact price to be displayed when "Click for price is triggered".
2. Customer group price should be same as without MSRP

### Actual result (*)
<!--- Tell us what happened instead. Include error messages and issues. -->
1. Irregular price is being displayed
2. Price not properly render and is same for **Buy 2 for** and **Buy 3 for**

Here are the [Screenshots](https://imgur.com/a/7F0JRkM)
Note: Extension of the issue numbered #26583 | 1.0 | Product tier price not getting calculated properly when MSRP is applied - <!---
Please review our guidelines before adding a new issue: https://github.com/magento/magento2/wiki/Issue-reporting-guidelines
Fields marked with (*) are required. Please don't remove the template.
-->
### Preconditions (*)
<!---
Provide the exact Magento version (example: 2.3.2) and any important information on the environment where bug is reproducible.
-->
1. Magento 2.4-develop
2. Also in Magento 2.3.3
### Steps to reproduce (*)
<!---
Important: Provide a set of clear steps to reproduce this bug. We can not provide support without clear instructions on how to reproduce.
-->
1. Enable MSRP in
Store -> Configuration -> Sales -> Minimum Advertised Price
Set Display Actual price on gesture
2. Create a new simple product. Set it's price, quantity and weight(optional)
- Price=100$
3. Go to advanced pricing and add a special price, add 2 or more tier pricing of your choice(All groups is preferred)
- 
4. Save the product
### Expected result (*)
<!--- Tell us what do you expect to happen. -->
1. Exact price to be displayed when "Click for price is triggered".
2. Customer group price should be same as without MSRP

### Actual result (*)
<!--- Tell us what happened instead. Include error messages and issues. -->
1. Irregular price is being displayed
2. Price not properly render and is same for **Buy 2 for** and **Buy 3 for**

Here are the [Screenshots](https://imgur.com/a/7F0JRkM)
Note: Extension of the issue numbered #26583 | non_test | product tier price not getting calculated properly when msrp is applied please review our guidelines before adding a new issue fields marked with are required please don t remove the template preconditions provide the exact magento version example and any important information on the environment where bug is reproducible magento develop also in magento steps to reproduce important provide a set of clear steps to reproduce this bug we can not provide support without clear instructions on how to reproduce enable msrp in store configuration sales minimum advertised price set display actual price on gesture create a new simple product set it s price quantity and weight optional price go to advanced pricing and add a special price add or more tier pricing of your choice all groups is preferred save the product expected result exact price to be displayed when click for price is triggered customer group price should be same as without msrp actual result irregular price is being displayed price not properly render and is same for buy for and buy for here are the note extension of the issue numbered | 0 |
65,751 | 19,678,852,254 | IssuesEvent | 2022-01-11 14:59:15 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | closed | [SCREENREADER, KEYBOARD]: 'Close this modal' reads before knowing the context of what the modal | design system 508/Accessibility sitewide 508-defect-2 vsa-triage 508-issue-focus-mgmt vsp-design-system-team |
## Description
When a modal is opened, the first thing that is announced is “Close this modal”, then it reads the heading. So, a person would have to tab backwards to close the modal or close it without knowing what the content is. It would be an improved experience if the context of the modal was communicated with the opportunity to close it.
This was an output from the accessibility audit for https://github.com/department-of-veterans-affairs/va.gov-team/issues/5323.
## Point of Contact
**VFS Point of Contact:** Jennifer
## Acceptance Criteria
As a screen reader user, I want to know what the content of the modal is before I might close the modal.
## Steps to Recreate
1. Enter `https://staging.va.gov/profile` in browser
2. Start screenreading device of your choice
3. Navigate to the edit the Mailing address
4. Verify the content reading order is
a. "Close this modal"
b. Heading
## Possible Fixes (optional)
- If possible, add {title} to the aria-label like “Close this {title} modal”. (Likely the simplest/preferred solution)
- Another option would be to change the order of the elements, so the heading reads before the close button.
| 1.0 | [SCREENREADER, KEYBOARD]: 'Close this modal' reads before knowing the context of what the modal -
## Description
When a modal is opened, the first thing that is announced is “Close this modal”, then it reads the heading. So, a person would have to tab backwards to close the modal or close it without knowing what the content is. It would be an improved experience if the context of the modal was communicated with the opportunity to close it.
This was an output from the accessibility audit for https://github.com/department-of-veterans-affairs/va.gov-team/issues/5323.
## Point of Contact
**VFS Point of Contact:** Jennifer
## Acceptance Criteria
As a screen reader user, I want to know what the content of the modal is before I might close the modal.
## Steps to Recreate
1. Enter `https://staging.va.gov/profile` in browser
2. Start screenreading device of your choice
3. Navigate to the edit the Mailing address
4. Verify the content reading order is
a. "Close this modal"
b. Heading
## Possible Fixes (optional)
- If possible, add {title} to the aria-label like “Close this {title} modal”. (Likely the simplest/preferred solution)
- Another option would be to change the order of the elements, so the heading reads before the close button.
| non_test | close this modal reads before knowing the context of what the modal description when a modal is opened the first thing that is announced is “close this modal” then it reads the heading so a person would have to tab backwards to close the modal or close it without knowing what the content is it would be an improved experience if the context of the modal was communicated with the opportunity to close it this was an output from the accessibility audit for point of contact vfs point of contact jennifer acceptance criteria as a screen reader user i want to know what the content of the modal is before i might close the modal steps to recreate enter in browser start screenreading device of your choice navigate to the edit the mailing address verify the content reading order is a close this modal b heading possible fixes optional if possible add title to the aria label like “close this title modal” likely the simplest preferred solution another option would be to change the order of the elements so the heading reads before the close button | 0 |
5,515 | 3,930,325,810 | IssuesEvent | 2016-04-25 07:29:29 | Virtual-Labs/soil-mechanics-and-foundation-engineering-iiith | https://api.github.com/repos/Virtual-Labs/soil-mechanics-and-foundation-engineering-iiith | opened | QA_Water Content_Prerequisites_p1 | Category: Usability Developed By: VLEAD Release Number: Production Severity: S2 Status: Open | Defect Description :
In the "Water Content" experiment, the minimum requirement to run the experiment is not displayed in the page instead a page or Scrolling should appear providing information on minimum requirement to run this experiment, information like Bandwidth,Device Resolution,Hardware Configuration and Software Required.
Actual Result :
In the "Water Content" experiment, the minimum requirement to run the experiment is not displayed in the page.
Environment :
OS: Windows 7, Ubuntu-16.04,Centos-6
Browsers: Firefox-42.0,Chrome-47.0,chromium-45.0
Bandwidth : 100Mbps
Hardware Configuration:8GBRAM
Processor:i5
Test Step Link:
https://github.com/Virtual-Labs/soil-mechanics-and-foundation-engineering-iiith/blob/master/test-cases/integration_test-cases/Water%20Content/Water%20Content_22_Prerequisites_p1.org | True | QA_Water Content_Prerequisites_p1 - Defect Description :
In the "Water Content" experiment, the minimum requirement to run the experiment is not displayed in the page instead a page or Scrolling should appear providing information on minimum requirement to run this experiment, information like Bandwidth,Device Resolution,Hardware Configuration and Software Required.
Actual Result :
In the "Water Content" experiment, the minimum requirement to run the experiment is not displayed in the page.
Environment :
OS: Windows 7, Ubuntu-16.04,Centos-6
Browsers: Firefox-42.0,Chrome-47.0,chromium-45.0
Bandwidth : 100Mbps
Hardware Configuration:8GBRAM
Processor:i5
Test Step Link:
https://github.com/Virtual-Labs/soil-mechanics-and-foundation-engineering-iiith/blob/master/test-cases/integration_test-cases/Water%20Content/Water%20Content_22_Prerequisites_p1.org | non_test | qa water content prerequisites defect description in the water content experiment the minimum requirement to run the experiment is not displayed in the page instead a page or scrolling should appear providing information on minimum requirement to run this experiment information like bandwidth device resolution hardware configuration and software required actual result in the water content experiment the minimum requirement to run the experiment is not displayed in the page environment os windows ubuntu centos browsers firefox chrome chromium bandwidth hardware configuration processor test step link | 0 |
314,850 | 27,026,573,686 | IssuesEvent | 2023-02-11 17:28:11 | acikkaynak/deprem-yardim-frontend | https://api.github.com/repos/acikkaynak/deprem-yardim-frontend | opened | feat: multi select ile niyete göre visualization yapma. | enhancement approved emergency ios desktop p0 tested | Since the data teams have started categorizing the data by intent,
https://mui.com/material-ui/react-select/#multiple-select
we should use this to filter by intent. All options should come selected by default. | 1.0 | feat: multi select ile niyete göre visualization yapma. - Since the data teams have started categorizing the data by intent,
https://mui.com/material-ui/react-select/#multiple-select
we should use this to filter by intent. All options should come selected by default. | test | feat multi select ile niyete göre visualization yapma data ekipleri dataları niyete göre kategorilendirmeye başladığı için ile niyete göre filtreleme yapmalıyız default olarak hepsi seçili gelmeli | 1 |
87,440 | 17,268,059,351 | IssuesEvent | 2021-07-22 15:58:19 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | [4.0] .hidden class missing in media/system/css/fields/calendar.css | No Code Attached Yet | Use of custom field calendar in com_contact mailform
If the frontend template does not have class .hidden,
the calendar is always expanded.
Solution
media/system/css/fields/calendar.css
adding
`.js-calendar.hidden
{display:none}`
| 1.0 | [4.0] .hidden class missing in media/system/css/fields/calendar.css - Use of custom field calendar in com_contact mailform
If the frontend template does not have class .hidden,
the calendar is always expanded.
Solution
media/system/css/fields/calendar.css
adding
`.js-calendar.hidden
{display:none}`
| non_test | hidden class missing in media system css fields calendar css use of custom field calendar in com contact mailform if the frontend template does not have class hidden the calendar is always expanded solution media system css fields calendar css adding js calendar hidden display none | 0 |
147,912 | 11,812,108,532 | IssuesEvent | 2020-03-19 19:29:50 | inf112-v20/legless-crane | https://api.github.com/repos/inf112-v20/legless-crane | closed | Set backup point | Need tests | - [x] Set the player's spawn point as the backup point at the start of the game.
- [x] setBackupPoint() and getBackupPoint() in player object
- [x] getBackupPoint should be used by handleDamage() when the player dies. | 1.0 | Set backup point - - [x] Set the player's spawn point as the backup point at the start of the game.
- [x] setBackupPoint() and getBackupPoint() in player object
- [x] getBackupPoint should be used by handleDamage() when the player dies. | test | set backup point set the player s spawn point as the backup point at the start of the game setbackuppoint and getbackuppoint in player object getbackuppoint should be used by handledamage when the player dies | 1 |
29,769 | 4,535,885,459 | IssuesEvent | 2016-09-08 18:42:24 | appium/appium | https://api.github.com/repos/appium/appium | closed | Fix Safari alerts in xcuitest-driver | iOS NeedsTriage XCUITest | ## The problem
Alerts were handled funnily in `appium-ios-driver`. Figure out how to get that working when running through xcuitest as well. | 1.0 | Fix Safari alerts in xcuitest-driver - ## The problem
Alerts were handled funnily in `appium-ios-driver`. Figure out how to get that working when running through xcuitest as well. | test | fix safari alerts in xcuitest driver the problem alerts were handled funnily in appium ios driver figure out how to get that working when running through xcuitest as well | 1 |
130,458 | 12,429,422,319 | IssuesEvent | 2020-05-25 08:26:27 | Jeffail/benthos | https://api.github.com/repos/Jeffail/benthos | closed | Add examples for the `awk` and `jmespath` processors | documentation help wanted | These are some of the more powerful processors in Benthos, so they're good candidates for expanding the docs with fleshed out examples. | 1.0 | Add examples for the `awk` and `jmespath` processors - These are some of the more powerful processors in Benthos, so they're good candidates for expanding the docs with fleshed out examples. | non_test | add examples for the awk and jmespath processors these are some of the more powerful processors in benthos so they re good candidates for expanding the docs with fleshed out examples | 0 |
104,585 | 8,982,853,445 | IssuesEvent | 2019-01-31 04:09:37 | danleyb2/helpdesk | https://api.github.com/repos/danleyb2/helpdesk | opened | user registration and login | testing | testing and bug fixing of
- [ ] registration using email, username and password
- [ ] login via the chosen credential
- [ ] Email activation/verification after login (optional for now)
| 1.0 | user registration and login - testing and bug fixing of
- [ ] registration using email, username and password
- [ ] login via the chosen credential
- [ ] Email activation/verification after login (optional for now)
| test | user registration and login testing and bug fixing of registration using email username and password login via the chosen credential email activation verification after login optional for now | 1 |
49,441 | 13,453,462,347 | IssuesEvent | 2020-09-09 01:01:47 | nasifimtiazohi/openmrs-module-coreapps-1.28.0 | https://api.github.com/repos/nasifimtiazohi/openmrs-module-coreapps-1.28.0 | opened | WS-2017-0421 (High) detected in ws-1.1.2.tgz | security vulnerability | ## WS-2017-0421 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ws-1.1.2.tgz</b></p></summary>
<p>simple to use, blazing fast and thoroughly tested websocket client, server and console for node.js, up-to-date against RFC-6455</p>
<p>Library home page: <a href="https://registry.npmjs.org/ws/-/ws-1.1.2.tgz">https://registry.npmjs.org/ws/-/ws-1.1.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/openmrs-module-coreapps-1.28.0/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/openmrs-module-coreapps-1.28.0/node_modules/ws/package.json</p>
<p>
Dependency Hierarchy:
- karma-1.7.1.tgz (Root Library)
- socket.io-1.7.3.tgz
- engine.io-1.8.3.tgz
- :x: **ws-1.1.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/nasifimtiazohi/openmrs-module-coreapps-1.28.0/commit/a473f1097dd760370898008cacd6434f0620476f">a473f1097dd760370898008cacd6434f0620476f</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Affected version of ws (0.2.6 through 3.3.0 excluding 0.3.4-2, 0.3.5-2, 0.3.5-3, 0.3.5-4, 1.1.5, 2.0.0-beta.0, 2.0.0-beta.1 and 2.0.0-beta.2) are vulnerable to A specially crafted value of the Sec-WebSocket-Extensions header that used Object.prototype property names as extension or parameter names could be used to make a ws server crash.
<p>Publish Date: 2017-11-08
<p>URL: <a href=https://github.com/websockets/ws/commit/c4fe46608acd61fbf7397eadc47378903f95b78a>WS-2017-0421</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/websockets/ws/commit/c4fe46608acd61fbf7397eadc47378903f95b78a">https://github.com/websockets/ws/commit/c4fe46608acd61fbf7397eadc47378903f95b78a</a></p>
<p>Release Date: 2017-11-08</p>
<p>Fix Resolution: 3.3.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2017-0421 (High) detected in ws-1.1.2.tgz - ## WS-2017-0421 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ws-1.1.2.tgz</b></p></summary>
<p>simple to use, blazing fast and thoroughly tested websocket client, server and console for node.js, up-to-date against RFC-6455</p>
<p>Library home page: <a href="https://registry.npmjs.org/ws/-/ws-1.1.2.tgz">https://registry.npmjs.org/ws/-/ws-1.1.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/openmrs-module-coreapps-1.28.0/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/openmrs-module-coreapps-1.28.0/node_modules/ws/package.json</p>
<p>
Dependency Hierarchy:
- karma-1.7.1.tgz (Root Library)
- socket.io-1.7.3.tgz
- engine.io-1.8.3.tgz
- :x: **ws-1.1.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/nasifimtiazohi/openmrs-module-coreapps-1.28.0/commit/a473f1097dd760370898008cacd6434f0620476f">a473f1097dd760370898008cacd6434f0620476f</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Affected versions of ws (0.2.6 through 3.3.0, excluding 0.3.4-2, 0.3.5-2, 0.3.5-3, 0.3.5-4, 1.1.5, 2.0.0-beta.0, 2.0.0-beta.1 and 2.0.0-beta.2) are vulnerable: a specially crafted value of the Sec-WebSocket-Extensions header that used Object.prototype property names as extension or parameter names could be used to make a ws server crash.
<p>Publish Date: 2017-11-08
<p>URL: <a href=https://github.com/websockets/ws/commit/c4fe46608acd61fbf7397eadc47378903f95b78a>WS-2017-0421</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/websockets/ws/commit/c4fe46608acd61fbf7397eadc47378903f95b78a">https://github.com/websockets/ws/commit/c4fe46608acd61fbf7397eadc47378903f95b78a</a></p>
<p>Release Date: 2017-11-08</p>
<p>Fix Resolution: 3.3.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | ws high detected in ws tgz ws high severity vulnerability vulnerable library ws tgz simple to use blazing fast and thoroughly tested websocket client server and console for node js up to date against rfc library home page a href path to dependency file tmp ws scm openmrs module coreapps package json path to vulnerable library tmp ws scm openmrs module coreapps node modules ws package json dependency hierarchy karma tgz root library socket io tgz engine io tgz x ws tgz vulnerable library found in head commit a href vulnerability details affected version of ws through excluding beta beta and beta are vulnerable to a specially crafted value of the sec websocket extensions header that used object prototype property names as extension or parameter names could be used to make a ws server crash publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
217,219 | 24,324,803,807 | IssuesEvent | 2022-09-30 13:56:06 | H-459/exam_baragon_gal | https://api.github.com/repos/H-459/exam_baragon_gal | opened | CVE-2020-11619 (High) detected in jackson-databind-2.9.9.jar | security vulnerability | ## CVE-2020-11619 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /BaragonService/pom.xml</p>
<p>Path to vulnerable library: /tory/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/tory/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/tory/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/tory/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/tory/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.9.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/H-459/exam_baragon_gal/commit/3f0f8dad184e4887576158270b729a7bc404302c">3f0f8dad184e4887576158270b729a7bc404302c</a></p>
<p>Found in base branches: <b>feature, master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.springframework.aop.config.MethodLocatingFactoryBean (aka spring-aop).
<p>Publish Date: 2020-04-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11619>CVE-2020-11619</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11619">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11619</a></p>
<p>Release Date: 2020-04-07</p>
<p>Fix Resolution: 2.9.10.4</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| True | CVE-2020-11619 (High) detected in jackson-databind-2.9.9.jar - ## CVE-2020-11619 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /BaragonService/pom.xml</p>
<p>Path to vulnerable library: /tory/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/tory/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/tory/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/tory/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/tory/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.9.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/H-459/exam_baragon_gal/commit/3f0f8dad184e4887576158270b729a7bc404302c">3f0f8dad184e4887576158270b729a7bc404302c</a></p>
<p>Found in base branches: <b>feature, master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.springframework.aop.config.MethodLocatingFactoryBean (aka spring-aop).
<p>Publish Date: 2020-04-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11619>CVE-2020-11619</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11619">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11619</a></p>
<p>Release Date: 2020-04-07</p>
<p>Fix Resolution: 2.9.10.4</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| non_test | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file baragonservice pom xml path to vulnerable library tory com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar tory com fasterxml jackson core jackson databind jackson databind jar tory com fasterxml jackson core jackson databind jackson databind jar tory com fasterxml jackson core jackson databind jackson databind jar tory com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branches feature master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org springframework aop config methodlocatingfactorybean aka spring aop publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr | 0 |
234,074 | 19,093,477,348 | IssuesEvent | 2021-11-29 14:30:43 | GridTools/gt4py | https://api.github.com/repos/GridTools/gt4py | closed | Add test for varargin feature | module: tests | #258 recently removed a test for the varargin feature. In order to ensure this feature keeps working, we should add a test for it to replace the removed one.
cc: @stubbiali | 1.0 | Add test for varargin feature - #258 recently removed a test for the varargin feature. In order to ensure this feature keeps working, we should add a test for it to replace the removed one.
cc: @stubbiali | test | add test for varargin feature recently removed a test for the varargin feature in order to ensure this feature keeps working we should add a test for it to replace the removed one cc stubbiali | 1 |
256,938 | 22,113,000,260 | IssuesEvent | 2022-06-01 23:24:06 | partiql/partiql-lang-rust | https://api.github.com/repos/partiql/partiql-lang-rust | closed | [conformance-testing] Create GitHub Actions workflow to report conformance test suite results | conformance tests | Involves creating a workflow that runs the conformance tests and reports back the failing and passing tests. Should also compare the results with the target branch (similar to codecov comparing coverage). | 1.0 | [conformance-testing] Create GitHub Actions workflow to report conformance test suite results - Involves creating a workflow that runs the conformance tests and reports back the failing and passing tests. Should also compare the results with the target branch (similar to codecov comparing coverage). | test | create github actions workflow to report conformance test suite results involves creating a workflow that runs the conformance tests and reports back the failing and passing tests should also compare the results with the target branch similar to codecov comparing coverage | 1 |
187,683 | 14,429,064,164 | IssuesEvent | 2020-12-06 12:43:16 | kalexmills/github-vet-tests-dec2020 | https://api.github.com/repos/kalexmills/github-vet-tests-dec2020 | closed | Agzs/ethereum-bft: src/github.com/getamis/istanbul-tools/vendor/github.com/prometheus/prometheus/storage/local/storage_test.go; 6 LoC | fresh test tiny |
Found a possible issue in [Agzs/ethereum-bft](https://www.github.com/Agzs/ethereum-bft) at [src/github.com/getamis/istanbul-tools/vendor/github.com/prometheus/prometheus/storage/local/storage_test.go](https://github.com/Agzs/ethereum-bft/blob/50558139fb10b4f47122f6716e8e2e1371770ef4/src/github.com/getamis/istanbul-tools/vendor/github.com/prometheus/prometheus/storage/local/storage_test.go#L2026-L2031)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to gotSamplePair at line 2028 may start a goroutine
[Click here to see the code in its original context.](https://github.com/Agzs/ethereum-bft/blob/50558139fb10b4f47122f6716e8e2e1371770ef4/src/github.com/getamis/istanbul-tools/vendor/github.com/prometheus/prometheus/storage/local/storage_test.go#L2026-L2031)
<details>
<summary>Click here to show the 6 line(s) of Go which triggered the analyzer.</summary>
```go
for i, gotSamplePair := range got {
wantSamplePair := want[i]
if !wantSamplePair.Equal(&gotSamplePair) {
t.Fatalf("want %v, got %v", wantSamplePair, gotSamplePair)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 50558139fb10b4f47122f6716e8e2e1371770ef4
| 1.0 | Agzs/ethereum-bft: src/github.com/getamis/istanbul-tools/vendor/github.com/prometheus/prometheus/storage/local/storage_test.go; 6 LoC -
Found a possible issue in [Agzs/ethereum-bft](https://www.github.com/Agzs/ethereum-bft) at [src/github.com/getamis/istanbul-tools/vendor/github.com/prometheus/prometheus/storage/local/storage_test.go](https://github.com/Agzs/ethereum-bft/blob/50558139fb10b4f47122f6716e8e2e1371770ef4/src/github.com/getamis/istanbul-tools/vendor/github.com/prometheus/prometheus/storage/local/storage_test.go#L2026-L2031)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to gotSamplePair at line 2028 may start a goroutine
[Click here to see the code in its original context.](https://github.com/Agzs/ethereum-bft/blob/50558139fb10b4f47122f6716e8e2e1371770ef4/src/github.com/getamis/istanbul-tools/vendor/github.com/prometheus/prometheus/storage/local/storage_test.go#L2026-L2031)
<details>
<summary>Click here to show the 6 line(s) of Go which triggered the analyzer.</summary>
```go
for i, gotSamplePair := range got {
wantSamplePair := want[i]
if !wantSamplePair.Equal(&gotSamplePair) {
t.Fatalf("want %v, got %v", wantSamplePair, gotSamplePair)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 50558139fb10b4f47122f6716e8e2e1371770ef4
| test | agzs ethereum bft src github com getamis istanbul tools vendor github com prometheus prometheus storage local storage test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to gotsamplepair at line may start a goroutine click here to show the line s of go which triggered the analyzer go for i gotsamplepair range got wantsamplepair want if wantsamplepair equal gotsamplepair t fatalf want v got v wantsamplepair gotsamplepair leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 1 |
244,544 | 20,676,582,213 | IssuesEvent | 2022-03-10 09:51:55 | rancher/dashboard | https://api.github.com/repos/rancher/dashboard | reopened | Helm UI does not have any bread crumb to logs of the deployment | kind/bug [zube]: To Test internal priority/1 area/helm | Internal reference: SURE-4016
Reported in 2.6.3
Within the Helm UI, there's no way to connect the logs of a deployment during a session.
The new UI spawns a container and will show you the logs of said deployment, but there's no way to navigate to the app and see logs from past runs once you've navigated away. | 1.0 | Helm UI does not have any bread crumb to logs of the deployment - Internal reference: SURE-4016
Reported in 2.6.3
Within the Helm UI, there's no way to connect the logs of a deployment during a session.
The new UI spawns a container and will show you the logs of said deployment, but there's no way to navigate to the app and see logs from past runs once you've navigated away. | test | helm ui does not have any bread crumb to logs of the deployment internal reference sure reported in within the helm ui there s no way to connect the logs of a deployment during a session the new ui spawns a container to and will show you the logs of said deployment but theres no way to navigate to the app and see logs from the past runs once you ve navigated away | 1
19,542 | 3,219,489,908 | IssuesEvent | 2015-10-08 09:58:54 | GLab/ToMaTo | https://api.github.com/repos/GLab/ToMaTo | closed | moving elements defect | component: editor type: defect urgency: normal | how to reproduce
1) move element. element AND interface are moved in editor
2) backend doesn't accept the moving
what happens:
3) element gets set back (i.e. their old position) in editor, interface stays where it is.
what would be expected:
3) element AND interface get set back in editor | 1.0 | moving elements defect - how to reproduce
1) move element. element AND interface are moved in editor
2) backend doesn't accept the moving
what happens:
3) element gets set back (i.e. their old position) in editor, interface stays where it is.
what would be expected:
3) element AND interface get set back in editor | non_test | moving elements defect how to reproduce move element element and interface are moved in editor backend doesn t accept the moving what happens element gets set back i e their old position in editor interface stays where it is what would be expected element and interface get set back in editor | 0 |
219,669 | 24,513,379,791 | IssuesEvent | 2022-10-11 01:05:09 | ritvikbhawnani/zaproxy | https://api.github.com/repos/ritvikbhawnani/zaproxy | opened | CVE-2022-41853 (High) detected in hsqldb-2.5.2.jar | security vulnerability | ## CVE-2022-41853 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hsqldb-2.5.2.jar</b></p></summary>
<p>HSQLDB - Lightweight 100% Java SQL Database Engine</p>
<p>Library home page: <a href="http://hsqldb.org">http://hsqldb.org</a></p>
<p>Path to dependency file: /tmp/ws-scm/zaproxy</p>
<p>Path to vulnerable library: /canner/.gradle/caches/modules-2/files-2.1/org.hsqldb/hsqldb/2.5.2/d8ec10f8ed2d9ac8c400208f4f78a546b116afe/hsqldb-2.5.2.jar,/zap/build/distFiles/lib/hsqldb-2.5.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **hsqldb-2.5.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ritvikbhawnani/zaproxy/commit/0d317021e9b57a93ca0792341b61576cc7d4fe16">0d317021e9b57a93ca0792341b61576cc7d4fe16</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Those using java.sql.Statement or java.sql.PreparedStatement in hsqldb (HyperSQL DataBase) to process untrusted input may be vulnerable to a remote code execution attack. By default it is allowed to call any static method of any Java class in the classpath resulting in code execution. The issue can be prevented by updating to 2.7.1 or by setting the system property "hsqldb.method_class_names" to classes which are allowed to be called. For example, System.setProperty("hsqldb.method_class_names", "abc") or Java argument -Dhsqldb.method_class_names="abc" can be used. From version 2.7.1 all classes by default are not accessible except those in java.lang.Math and need to be manually enabled.
<p>Publish Date: 2022-10-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-41853>CVE-2022-41853</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-41853 (High) detected in hsqldb-2.5.2.jar - ## CVE-2022-41853 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hsqldb-2.5.2.jar</b></p></summary>
<p>HSQLDB - Lightweight 100% Java SQL Database Engine</p>
<p>Library home page: <a href="http://hsqldb.org">http://hsqldb.org</a></p>
<p>Path to dependency file: /tmp/ws-scm/zaproxy</p>
<p>Path to vulnerable library: /canner/.gradle/caches/modules-2/files-2.1/org.hsqldb/hsqldb/2.5.2/d8ec10f8ed2d9ac8c400208f4f78a546b116afe/hsqldb-2.5.2.jar,/zap/build/distFiles/lib/hsqldb-2.5.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **hsqldb-2.5.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ritvikbhawnani/zaproxy/commit/0d317021e9b57a93ca0792341b61576cc7d4fe16">0d317021e9b57a93ca0792341b61576cc7d4fe16</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Those using java.sql.Statement or java.sql.PreparedStatement in hsqldb (HyperSQL DataBase) to process untrusted input may be vulnerable to a remote code execution attack. By default it is allowed to call any static method of any Java class in the classpath resulting in code execution. The issue can be prevented by updating to 2.7.1 or by setting the system property "hsqldb.method_class_names" to classes which are allowed to be called. For example, System.setProperty("hsqldb.method_class_names", "abc") or Java argument -Dhsqldb.method_class_names="abc" can be used. From version 2.7.1 all classes by default are not accessible except those in java.lang.Math and need to be manually enabled.
<p>Publish Date: 2022-10-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-41853>CVE-2022-41853</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in hsqldb jar cve high severity vulnerability vulnerable library hsqldb jar hsqldb lightweight java sql database engine library home page a href path to dependency file tmp ws scm zaproxy path to vulnerable library canner gradle caches modules files org hsqldb hsqldb hsqldb jar zap build distfiles lib hsqldb jar dependency hierarchy x hsqldb jar vulnerable library found in head commit a href found in base branch main vulnerability details those using java sql statement or java sql preparedstatement in hsqldb hypersql database to process untrusted input may be vulnerable to a remote code execution attack by default it is allowed to call any static method of any java class in the classpath resulting in code execution the issue can be prevented by updating to or by setting the system property hsqldb method class names to classes which are allowed to be called for example system setproperty hsqldb method class names abc or java argument dhsqldb method class names abc can be used from version all classes by default are not accessible except those in java lang math and need to be manually enabled publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction required scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href step up your open source security game with mend | 0 |
162,318 | 12,643,234,610 | IssuesEvent | 2020-06-16 09:28:13 | FreeRDP/FreeRDP | https://api.github.com/repos/FreeRDP/FreeRDP | closed | freerdp-shadow-cli crashing randomly | fixed-waiting-test | freerdp-shadow-cli running on macOS 10.15.4 has been crashing randomly.
[crash.txt](https://github.com/FreeRDP/FreeRDP/files/4682521/crash.txt)
| 1.0 | freerdp-shadow-cli crashing randomly - freerdp-shadow-cli running on macOS 10.15.4 has been crashing randomly.
[crash.txt](https://github.com/FreeRDP/FreeRDP/files/4682521/crash.txt)
| test | freerdp shadow cli crashing randomly freerdp shadow cli running on macos has been crashing randomly | 1 |
34,718 | 7,459,354,250 | IssuesEvent | 2018-03-30 14:58:30 | kerdokullamae/test_koik_issued | https://api.github.com/repos/kerdokullamae/test_koik_issued | closed | Fix for the kernel memory-usage monitoring utility | C: AIS P: high R: fixed T: defect | **Reported by sven syld on 14 Oct 2015 08:39 UTC**
'''Description'''
When populating the index, a class called MemTracker is used, which runs the command ```free -m``` and parses out the memory limit and current usage.
The output differs slightly between distros.
In Tieto:
```
[src](sven@dev)$ free -m
total used free shared buffers cached
Mem: 1869 1447 422 0 0 8
-/+ buffers/cache: 1438 431
Swap: 3998 1161 2837
```
In the archive:
```
[www](root@ais201510)# free -m
total used free shared buff/cache available
Mem: 2001 279 1287 97 434 1461
Swap: 2047 0 2047
```
'''Todo'''
1) Currently the swap usage is read from the 4th line. Rework this so that both mem and swap are detected by the leading label ("Mem:" and "Swap:").
2) If getKernelMemState() cannot extract the memory usage, it should return null rather than throw an exception.
3) Also rework the call sites of getKernelMemState() (there are 6 of them) so that they show a question mark or similar rather than raising an error. | 1.0 | Fix for the kernel memory-usage monitoring utility - **Reported by sven syld on 14 Oct 2015 08:39 UTC**
'''Description'''
When populating the index, a class called MemTracker is used, which runs the command ```free -m``` and parses out the memory limit and current usage.
The output differs slightly between distros.
In Tieto:
```
[src](sven@dev)$ free -m
total used free shared buffers cached
Mem: 1869 1447 422 0 0 8
-/+ buffers/cache: 1438 431
Swap: 3998 1161 2837
```
In the archive:
```
[www](root@ais201510)# free -m
total used free shared buff/cache available
Mem: 2001 279 1287 97 434 1461
Swap: 2047 0 2047
```
'''Todo'''
1) Currently the swap usage is read from the 4th line. Rework this so that both mem and swap are detected by the leading label ("Mem:" and "Swap:").
2) If getKernelMemState() cannot extract the memory usage, it should return null rather than throw an exception.
3) Also rework the call sites of getKernelMemState() (there are 6 of them) so that they show a question mark or similar rather than raising an error. | non_test | kerneli mälukasutuse jälgimise utiliidi parandus reported by sven syld on oct utc kirjeldus indeksi täitmisel kasutatakse sellist klassi nagu memtracker mis käivitab käsu free m ja parsib sealt välja mälulimiidi ja praeguse kasutatavuse erinevates distrotes on väljund pisut erinev tietos sven dev free m total used free shared buffers cached mem buffers cache swap arhiivis root free m total used free shared buff cache available mem swap todo praegu loetakse swapi kasutus realt teha see ümber nii et nii mem kui ka swap tuvastataks eesoleva kirja järgi mem ja swap kui getkernelmemstate ei oska mälukasutust välja võtta siis ta võiks tagastada pigem null kui et exceptioni teha getkernelmemstate kasutused ka nii ümber et pigem näitaks küsimärki vms kui et annaks errori kasutuskohta on | 0
121,320 | 10,165,553,480 | IssuesEvent | 2019-08-07 14:07:40 | CityOfBoston/boston.gov-d8 | https://api.github.com/repos/CityOfBoston/boston.gov-d8 | closed | Summer in Boston Guide: Jamaica Plain Community Pool text format issues | duplicate max-testing | Under 'Community Center Pools' Jamaica Plain BCYF Center text is formatted differently than the rest.
D7: https://www.boston.gov/summer-boston

D8: http://bostond8dev.prod.acquia-sites.com/summer-boston

Tested on chrome 76 on OS X Mojave | 1.0 | test | 1
116,474 | 9,853,394,504 | IssuesEvent | 2019-06-19 14:42:12 | kcigeospatial/Fred_Co_Land-Management | https://api.github.com/repos/kcigeospatial/Fred_Co_Land-Management | closed | R4C-BOA-applicant must be property owner | Ready for PreProd Env. Retest Training Issue | entered applicant in R4C as Licensed contact. County requires the property owner to be the applicant. Doesnt appear to be a way to change the "role as an applicant" field or to add another contact after an application has been submitted. Is there a way to add a note next to the "this field indicates your responsibility for this request" for BOA only that says "applicant must be property owner"????

MW | 1.0 | test | 1
4,281 | 2,610,090,761 | IssuesEvent | 2015-02-26 18:27:27 | chrsmith/dsdsdaadf | https://api.github.com/repos/chrsmith/dsdsdaadf | opened | 深圳痘痘怎样治最好 | auto-migrated Priority-Medium Type-Defect | ```
What is the best way to treat acne in Shenzhen? [Shenzhen Hanfang Keyan national hotline 400-869-1818, 24-hour QQ 4008691818] Shenzhen Hanfang Keyan is a professional acne-treatment chain. Built around the Korean formula Hanfang Keyan — a state cosmetics-registered therapeutic brand and premier acne-removal product — the chain combines a secret Korean formula with professional "no-rebound" healthy acne-removal techniques and an advanced "deluxe color-light" device, pioneering contract-guaranteed treatment of pimples and acne in China and successfully clearing the acne on many customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:44 | 1.0 | non_test | 0
81,443 | 7,781,899,489 | IssuesEvent | 2018-06-06 03:09:24 | w3c/csswg-drafts | https://api.github.com/repos/w3c/csswg-drafts | closed | [css-ui-4] text-overflow and anonymous blocks | Needs Testcase (WPT) Tracked in DoC css-ui-4 | `text-overflow` is defined in [CSS UI](https://drafts.csswg.org/css-ui-3/#text-overflow) (should be moved into CSS Overflow per resolution) as:
- Applies to: block containers
- Inherited: no
Since the property controls how inline contents overflow line boxes, I would assume that the value that matters is the one of the block container that establishes the inline formatting context to which the line boxes belong.
The problem is that properties that apply to IFC roots should be inheritable, because lots of IFC roots are anonymous blocks. But this is a special case, because `text-overflow` has effect only if `overflow` is not `visible`, and `overflow` is definitely not inherited. Therefore it cannot just be said that anonymous blocks must act as if they were assigned `text-overflow: inherit` via a rule in the UA origin (because `overflow` should definitely not be inherited)
So consider this example: https://jsfiddle.net/fck9xqtx/
```html
<div id="test1">abcdefghijklmoqrstuvwxyz</div>
<div id="test2">
<div>-</div>
<div>abcdefghijklmoqrstuvwxyz</div>
</div>
```
```css
#test1, #test2 {
text-overflow: ellipsis;
overflow: hidden;
width: 50px;
}
#test1::before {
content: "-";
display: block;
}
```
In `#test1` there is a `::before` block, and then the other text is wrapped inside an anonymous block which is what establishes the IFC (and not `#test1`).
In `#test2` it's basically the same, but now the inner block is element-generated.
Both Firefox and Chrome show ellipsis in `#test1` and do not in `#test2`. Edge does not show ellipsis in either case.
Given the current definitions, I think Edge's behavior makes more sense, but it's not useful because anonymous blocks can't be selected to assign `text-overflow` and `overflow` styles. Additionally I think IFC should always be established by anonymous blocks, but with Edge's behavior this would mean that `text-overflow` would never work.
So I think the spec should say something like that anonymous blocks are ignored even if the IFC is established by them, and that the values of `text-overflow` and `overflow` that matter are the ones of the nearest non-anonymous block ancestor. | 1.0 | test | 1
143,543 | 11,568,869,468 | IssuesEvent | 2020-02-20 16:34:26 | department-of-veterans-affairs/caseflow | https://api.github.com/repos/department-of-veterans-affairs/caseflow | opened | [Flaky Test] Hearing Schedule Daily Docket Daily docket for RO view user User can only update notes | Flaky test tech-improvement | ## Description
**THIS TEST HAS MOVED TO https://github.com/department-of-veterans-affairs/caseflow/blob/1af7b34d42bec9b3645bea0255a58fa6763716bc/spec/feature/hearings/daily_docket/ro_viewhearsched_spec.rb#L8**
## Background/context/resources
```
Hearing Schedule Daily Docket Daily docket for RO view user User can only update notes - spec.feature.hearings.daily_docket_spec
spec/feature/hearings/daily_docket_spec.rb
Failure/Error: expect(page).to have_content("You have successfully updated")
expected to find text "You have successfully updated" in "CaseflowHearings\n | Switch product\nBVATWARNER (DSUSER)\nDaily Docket (Wed 7/10/2019)\n\n< Back to schedule \nVLJ:\nCoordinator:\nHearing type: Central\nRegional office:\nRoom number: 2 (1W200B)\nDownload & Print Page\nPrint all Hearing Worksheets\nAppellant/Veteran ID/RepresentativeTime/RO(s)Actions\n1Bob Smith\n500000390\nH\n190709-1\n\n\nLoading address...\nLoading rep...\n8:30 am EDT\nDisposition\nSelect...\nCopy Requested by Appellant/Rep\nTranscript Requested\nNotes\nThis is a note about the hearing!\nRegional Office\nCentral\nHearing Location\nHearing Day\nWed 7/10/2019\n\nTime\n9:00 am\n1:00 pm\nOther\n8:30 am\nCancel\nSave\nBuilt with ♡ by theDigital Service at VA\nTrack Caseflow Status|Send feedback"
./spec/feature/hearings/daily_docket_spec.rb:137:in `block (3 levels) in <top (required)>'
```
- Circle CI Error: [ <!--CircleCI Failure alert text --> ](<!-- link to circleCI flake -->)
- Has the test already been skipped in the code?
- [ ] Skipped
- [ ] Not Skipped
- Related Flakes
+ https://github.com/department-of-veterans-affairs/caseflow/issues/13467
## Approach
<!-- Has our agreed upon default approach for tackling flaky tests. -->
Time box this investigation and fix.
Remember that if a test has been skipped for a decent amount of time, it may no longer map to the exact code.
If you reach the end of your time box and don't feel like the solution is in sight:
- [ ] document the work you've done, including dead ends and research
- [ ] skip the test in the code
- [ ] file a follow on ticket
- [ ] close this issue
| 1.0 | test | 1
119,508 | 10,056,428,024 | IssuesEvent | 2019-07-22 09:08:04 | kyma-project/kyma | https://api.github.com/repos/kyma-project/kyma | closed | logging tests can be run in parallel with others | quality/testability | **Description**
It should be possible to run logging tests in parallel with other Kyma tests. We need to verify if it is possible now and change the test if necessary. Then its TestDefinition should be modified to enable concurrency.
**Reasons**
One way to make the Kyma test suite faster is to run tests in parallel.
**Acceptance Criteria**
- [ ] concurrency is enabled in logging test and it is stable on CI
See https://github.com/kyma-project/kyma/issues/4299 as example | 1.0 | test | 1
26,078 | 4,202,847,706 | IssuesEvent | 2016-06-28 01:05:23 | gulpjs/undertaker | https://api.github.com/repos/gulpjs/undertaker | closed | Integration tests | help wanted Tests | I want to create tests that have common gulp uses in order to make sure we don't break the use cases like I did in a few versions of orchestrator. | 1.0 | test | 1
330,173 | 28,355,752,782 | IssuesEvent | 2023-04-12 07:11:13 | rust-lang/rust | https://api.github.com/repos/rust-lang/rust | closed | Rename tests/ui/unique to `box` | E-easy C-cleanup A-testsuite T-compiler | https://github.com/rust-lang/rust/tree/fe0b0428b89802f02a050eab72373b75709d0bce/tests/ui/unique all seem to be testing std::boxed::Box. It hasn't been called `unique` in many years, and that's now ambiguous with `ptr::Unqiue`. We should rename the test suite for clarity.
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"reez12g"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | 1.0 | test | 1
371,194 | 10,962,767,115 | IssuesEvent | 2019-11-27 18:00:53 | kubernetes/minikube | https://api.github.com/repos/kubernetes/minikube | closed | Support buildah | co/runtime/crio help wanted kind/feature lifecycle/frozen priority/important-longterm | I added a previous ticket about `buildah` (PR #3225), but it timed out...
When we upgrade to Podman 1.2, we should include support for buildah (for building images).
Probably by including `buildah` next to `podman`, especially if "podman build" doesn't work...
It will share the runc and the cni plugins with podman and with cri-o.
No real plan yet on how to support building images from host, except for using `minikube ssh`.
But it will do as a start. Could even run buildah in a container, but then it's hard to share images. | 1.0 | non_test | 0