Dataset schema (15 columns; ranges are min–max as reported by the dataset viewer):

| Column | dtype | Range / distinct values |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 distinct value |
| created_at | string | length 19 |
| repo | string | length 7 – 112 |
| repo_url | string | length 36 – 141 |
| action | string | 3 distinct values |
| title | string | length 1 – 744 |
| labels | string | length 4 – 574 |
| body | string | length 9 – 211k |
| index | string | 10 distinct values |
| text_combine | string | length 96 – 211k |
| label | string | 2 distinct values |
| text | string | length 96 – 188k |
| binary_label | int64 | 0 – 1 |
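Each record below pairs a free-text `label` ("process" / "non_process") with a numeric `binary_label` (1 / 0). A minimal stdlib sketch of that mapping, using illustrative sample values abridged from the rows below (my reconstruction of the relationship, not the dataset authors' code):

```python
# Two illustrative records mirroring rows shown below (fields abridged).
records = [
    {"repo": "readthedocs/readthedocs.org", "label": "non_process", "binary_label": 0},
    {"repo": "nerfstudio-project/nerfstudio", "label": "process", "binary_label": 1},
]

def to_binary(label: str) -> int:
    # "process" -> 1, "non_process" -> 0, matching every row in this dump.
    return int(label == "process")

for rec in records:
    assert to_binary(rec["label"]) == rec["binary_label"]
```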
181,110
| 30,624,704,345
|
IssuesEvent
|
2023-07-24 10:37:48
|
readthedocs/readthedocs.org
|
https://api.github.com/repos/readthedocs/readthedocs.org
|
closed
|
Filter by branch on project dashboard
|
Feature Needed: design decision
|
## Details
It would be nice if there were a way to show only builds from certain branches. This is especially useful if a project has the PR builder activated, which sometimes fills the build log with PR builds, so it takes many clicks to find the last run of, say, `latest`.

## Expected Result
A new drop-down menu that lets me see only builds from the selected branch.
## Actual Result
I have to click through a few pages to find the last `latest` build. The existing drop-down only selects the branch to trigger a new build for, and does not affect what I see on the dashboard.
|
1.0
|
Filter by branch on project dashboard - ## Details
It would be nice if there were a way to show only builds from certain branches. This is especially useful if a project has the PR builder activated, which sometimes fills the build log with PR builds, so it takes many clicks to find the last run of, say, `latest`.

## Expected Result
A new drop-down menu that lets me see only builds from the selected branch.
## Actual Result
I have to click through a few pages to find the last `latest` build. The existing drop-down only selects the branch to trigger a new build for, and does not affect what I see on the dashboard.
|
non_process
|
filter by branch on project dashboard details would be nice if there is a way to only show builds from certain branches this is especially useful if a project has pr builder activated which sometimes results in the build log full of pr builds so it takes many clicks to find the last run of say latest expected result a new drop down menu that let me only see builds from the select branch actual result i have to click a few pages to find the last latest build the drop down is only to select branch to trigger new build for and does not affect what i see on the dashboard
| 0
|
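Comparing `text_combine` with `text` in the row above suggests the `text` column was produced by lowercasing and stripping everything except letters. A sketch that approximately reproduces that normalization (my reconstruction from the visible rows, not the dataset authors' exact code; edge cases like alphanumeric tokens differ slightly):

```python
import re

def normalize(text_combine: str) -> str:
    # Lowercase, then replace every run of non-letter characters
    # (punctuation, digits, backticks, newlines) with a single space.
    lowered = text_combine.lower()
    return re.sub(r"[^a-z]+", " ", lowered).strip()
```

For example, `normalize("Filter by branch on project dashboard - ## Details")` yields `"filter by branch on project dashboard details"`, matching the start of the `text` field above.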
20,252
| 26,869,585,227
|
IssuesEvent
|
2023-02-04 09:31:24
|
nerfstudio-project/nerfstudio
|
https://api.github.com/repos/nerfstudio-project/nerfstudio
|
opened
|
Allow masks for colmap feature extraction with ns-process-data
|
enhancement data processing
|
**Is your feature request related to a problem? Please describe.**
While masking is currently broken for training, there are use cases where dynamic objects in a scene may cause many images to not get poses. For example, scenes with flowing water may be mostly static, but pose collection can fail for them because of extraneous features extracted from the moving ripples in the water.
**Describe the solution you'd like**
An additional optional flag for inputting a directory full of binary masks to be used in colmap `--ImageReader.mask_path masks_dir`.
|
1.0
|
Allow masks for colmap feature extraction with ns-process-data - **Is your feature request related to a problem? Please describe.**
While masking is currently broken for training, there are use cases where dynamic objects in a scene may cause many images to not get poses. For example, scenes with flowing water may be mostly static, but pose collection can fail for them because of extraneous features extracted from the moving ripples in the water.
**Describe the solution you'd like**
An additional optional flag for inputting a directory full of binary masks to be used in colmap `--ImageReader.mask_path masks_dir`.
|
process
|
allow masks for colmap feature extraction with ns process data is your feature request related to a problem please describe while masking is currently broken for training there are use cases where dynamic objects in a may cause a lot of images to not get poses for example scenes with flowing water may be mostly static but could be failed to collect poses for because of extraneous features extracted from the moving ripples in the water describe the solution you d like an additional optional flag for inputting a directory full of binary masks to be used in colmap imagereader mask path masks dir
| 1
|
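The `--ImageReader.mask_path` flag requested above belongs to COLMAP's feature extractor CLI. A hedged sketch of assembling that invocation as a command list (paths are placeholders; this illustrates the proposed optional flag, not nerfstudio's actual implementation):

```python
def colmap_feature_cmd(database_path, image_path, mask_path=None):
    # Base invocation of COLMAP's feature extractor.
    cmd = ["colmap", "feature_extractor",
           "--database_path", str(database_path),
           "--image_path", str(image_path)]
    if mask_path is not None:
        # Optional directory of binary masks, as the issue proposes
        # ns-process-data should expose.
        cmd += ["--ImageReader.mask_path", str(mask_path)]
    return cmd
```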
12,490
| 14,958,633,724
|
IssuesEvent
|
2021-01-27 01:12:56
|
tokio-rs/tokio
|
https://api.github.com/repos/tokio-rs/tokio
|
closed
|
process: support ergonomic piping of stdio of one child process to another
|
A-tokio C-feature-request M-process
|
Currently it is impossible to pass in `ChildStd{in,out,err}` into `tokio::process::Command::std{in,out,err}` since they cannot be automatically converted to `std::process::Stdio` (namely since they do not support `IntoRaw{Fd,Handle}`).
Originally reported in #3447
|
1.0
|
process: support ergonomic piping of stdio of one child process to another - Currently it is impossible to pass in `ChildStd{in,out,err}` into `tokio::process::Command::std{in,out,err}` since they cannot be automatically converted to `std::process::Stdio` (namely since they do not support `IntoRaw{Fd,Handle}`).
Originally reported in #3447
|
process
|
process support ergonomic piping of stdio of one child process to another currently it is impossible to pass in childstd in out err into tokio process command std in out err since they cannot be automatically converted to std process stdio namely since they do not support intoraw fd handle originally reported in
| 1
|
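The ergonomics the tokio issue asks for already exist in blocking APIs. For comparison, a minimal Python `subprocess` sketch of handing one child's stdout to another child's stdin (assumes a POSIX environment where `echo` and `tr` are on the PATH):

```python
import subprocess

# First child produces text on stdout.
p1 = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE)
# Second child consumes the first child's stdout as its stdin.
p2 = subprocess.Popen(["tr", "a-z", "A-Z"], stdin=p1.stdout,
                      stdout=subprocess.PIPE)
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits early
out, _ = p2.communicate()
```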
243,393
| 26,278,078,681
|
IssuesEvent
|
2023-01-07 01:55:24
|
gavarasana/cra-test
|
https://api.github.com/repos/gavarasana/cra-test
|
opened
|
CVE-2021-23382 (High) detected in postcss-8.2.1.tgz
|
security vulnerability
|
## CVE-2021-23382 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>postcss-8.2.1.tgz</b></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-8.2.1.tgz">https://registry.npmjs.org/postcss/-/postcss-8.2.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/postcss-safe-parser/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- postcss-safe-parser-5.0.2.tgz (Root Library)
- :x: **postcss-8.2.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of the postcss package before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerability is caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution (postcss): 8.2.13</p>
<p>Direct dependency fix Resolution (postcss-safe-parser): 6.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23382 (High) detected in postcss-8.2.1.tgz - ## CVE-2021-23382 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>postcss-8.2.1.tgz</b></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-8.2.1.tgz">https://registry.npmjs.org/postcss/-/postcss-8.2.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/postcss-safe-parser/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- postcss-safe-parser-5.0.2.tgz (Root Library)
- :x: **postcss-8.2.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of the postcss package before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerability is caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution (postcss): 8.2.13</p>
<p>Direct dependency fix Resolution (postcss-safe-parser): 6.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in postcss tgz cve high severity vulnerability vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file package json path to vulnerable library node modules postcss safe parser node modules postcss package json dependency hierarchy postcss safe parser tgz root library x postcss tgz vulnerable library found in base branch main vulnerability details the package postcss before are vulnerable to regular expression denial of service redos via getannotationurl and loadannotation in lib previous map js the vulnerable regexes are caused mainly by the sub pattern s sourcemappingurl publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution postcss direct dependency fix resolution postcss safe parser step up your open source security game with mend
| 0
|
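The fix boundary in the record above (postcss vulnerable before 8.2.13, fixed at 8.2.13) can be checked with a numeric-tuple comparison. A sketch that handles plain `x.y.z` versions only (not full semver with pre-release tags; an illustrative helper, not Mend's tooling):

```python
def is_vulnerable(version: str) -> bool:
    # CVE-2021-23382 affects postcss releases before 8.2.13.
    return tuple(int(p) for p in version.split(".")) < (8, 2, 13)
```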
117,674
| 11,953,462,230
|
IssuesEvent
|
2020-04-03 20:57:46
|
JabRef/jabref
|
https://api.github.com/repos/JabRef/jabref
|
closed
|
How to install the 5.0 release aside the dev Snapshot?
|
documentation
|
I have the 5.1 dev snapshot installed and wanted to install the 5.0 release side by side. This does not seem possible. The 4.1 release installed fine alongside the 5.0 dev builds. Is there a way to have both versions, 5.1-dev and 5.0, installed?
|
1.0
|
How to install the 5.0 release aside the dev Snapshot? - I have the 5.1 dev snapshot installed and wanted to install the 5.0 release side by side. This does not seem possible. The 4.1 release installed fine alongside the 5.0 dev builds. Is there a way to have both versions, 5.1-dev and 5.0, installed?
|
non_process
|
how to install the release aside the dev snapshot i have the dev snapshot installed and wanted to install the release side by side this seems not possible the release had been well installed along side the devs is there a way to have both versions installed and
| 0
|
3,666
| 6,542,111,225
|
IssuesEvent
|
2017-09-02 00:44:37
|
Polymer/polymer
|
https://api.github.com/repos/Polymer/polymer
|
closed
|
<dom-if> interferes with justify-content: space-between
|
1.x-2.x compatibility css p1
|
<!--
If you are asking a question rather than filing a bug, try one of these instead:
- StackOverflow (http://stackoverflow.com/questions/tagged/polymer)
- Polymer Slack Channel (https://bit.ly/polymerslack)
- Mailing List (https://groups.google.com/forum/#!forum/polymer-dev)
-->
<!-- Instructions For Filing a Bug: https://github.com/Polymer/polymer/blob/master/CONTRIBUTING.md#filing-bugs -->
### Description
<!-- Example: Error thrown when calling `appendChild` on Polymer element -->
Using `<template is='dom-if'>` in source code causes `<dom-if>` element to be put in resulting HTML (regardless of whether the condition is true or false).
If the parent element has `justify-content: space-between; display: flex;` then `<dom-if>` participates in element count for justification sake and the result looks broken.
This can be fixed by styling `<dom-if>` with `display: none`. Since this is a regression from Polymer 1.9, maybe the library can apply this automatically? I can't think of a use case where one wouldn't want `display: none` on the `<dom-if>`.
#### Live Demo
<!-- Fork this JSBin, or provide your own URL -->
http://jsbin.com/megukipana/1/edit?html,output
#### Steps to Reproduce
Example:
```html
<template>
<style>
.parent {
border: 1px solid red;
justify-content: space-between;
width: 300px;
display: flex;
}
.child {
border: 1px solid green;
width: 100px;
}
</style>
<div class='parent'>
<div class='child'>Child1</div>
<template is='dom-if' if='1'>
<div class='child'>Child2</div>
</template>
</div>
</template>
```
#### Expected Results
Child1 is to the left, Child2 is to the right.
#### Actual Results
Space to the right of Child2, caused by the empty `<dom-if>`
### Browsers Affected
<!-- Check all that apply -->
- [x] Chrome
- [x] Firefox
### Versions
<!--
`Polymer.version` will show the version for Polymer
`bower ls` or `npm ls` will show the version of webcomponents.js or webcomponents-lite.js
-->
- Polymer: v2.0
- webcomponents: vX.X.X
|
True
|
<dom-if> interferes with justify-content: space-between - <!--
If you are asking a question rather than filing a bug, try one of these instead:
- StackOverflow (http://stackoverflow.com/questions/tagged/polymer)
- Polymer Slack Channel (https://bit.ly/polymerslack)
- Mailing List (https://groups.google.com/forum/#!forum/polymer-dev)
-->
<!-- Instructions For Filing a Bug: https://github.com/Polymer/polymer/blob/master/CONTRIBUTING.md#filing-bugs -->
### Description
<!-- Example: Error thrown when calling `appendChild` on Polymer element -->
Using `<template is='dom-if'>` in source code causes `<dom-if>` element to be put in resulting HTML (regardless of whether the condition is true or false).
If the parent element has `justify-content: space-between; display: flex;` then `<dom-if>` participates in element count for justification sake and the result looks broken.
This can be fixed by styling `<dom-if>` with `display: none`. Since this is a regression from Polymer 1.9, maybe the library can apply this automatically? I can't think of a use case where one wouldn't want `display: none` on the `<dom-if>`.
#### Live Demo
<!-- Fork this JSBin, or provide your own URL -->
http://jsbin.com/megukipana/1/edit?html,output
#### Steps to Reproduce
Example:
```html
<template>
<style>
.parent {
border: 1px solid red;
justify-content: space-between;
width: 300px;
display: flex;
}
.child {
border: 1px solid green;
width: 100px;
}
</style>
<div class='parent'>
<div class='child'>Child1</div>
<template is='dom-if' if='1'>
<div class='child'>Child2</div>
</template>
</div>
</template>
```
#### Expected Results
Child1 is to the left, Child2 is to the right.
#### Actual Results
Space to the right of Child2, caused by the empty `<dom-if>`
### Browsers Affected
<!-- Check all that apply -->
- [x] Chrome
- [x] Firefox
### Versions
<!--
`Polymer.version` will show the version for Polymer
`bower ls` or `npm ls` will show the version of webcomponents.js or webcomponents-lite.js
-->
- Polymer: v2.0
- webcomponents: vX.X.X
|
non_process
|
interferes with justify content space between if you are asking a question rather than filing a bug try one of these instead stackoverflow polymer slack channel mailing list description using in source code causes element to be put in resulting html regardless of whether the condition is true or false if the parent element has justify content space between display flex then participates in element count for justification sake and the result looks broken this can be fixed by styling with display none since this is a regression from polymer maybe the library can apply this automatically i can t think of a use case where one wouldn t want display none on the live demo steps to reproduce example html parent border solid red justify content space between width display flex child border solid green width expected results is to the left is to the right actual results space to the right if caused by empty browsers affected chrome firefox versions polymer version will show the version for polymer bower ls or npm ls will show the version of webcomponents js or webcomponents lite js polymer webcomponents vx x x
| 0
|
201,412
| 15,802,274,928
|
IssuesEvent
|
2021-04-03 08:56:25
|
zikunz/ped
|
https://api.github.com/repos/zikunz/ped
|
opened
|
[UG] Potential improvement
|
severity.VeryLow type.DocumentationBug
|
Could spell out NUS SoC.
Could use "schedules" rather than "schedule".
Could include the welcome page screenshot (which is the same one shown after typing the `help` command).
<!--session: 1617437737885-9be6f5d3-b32d-4bb6-af87-1ab07f2a7014-->
|
1.0
|
[UG] Potential improvement - Could spell out NUS SoC.
Could use "schedules" rather than "schedule".
Could include the welcome page screenshot (which is the same one shown after typing the `help` command).
<!--session: 1617437737885-9be6f5d3-b32d-4bb6-af87-1ab07f2a7014-->
|
non_process
|
potential improvement could spell out nus soc could use schedules rather than schedule could include the welcome page screenshot which is the same after typing help command
| 0
|
406,554
| 27,570,947,737
|
IssuesEvent
|
2023-03-08 09:14:43
|
lmw7414/fastcampus-project-board-admin
|
https://api.github.com/repos/lmw7414/fastcampus-project-board-admin
|
closed
|
[์ด๋๋ฏผ] Getting started with the Spring Boot project
|
documentation
|
Based on the content summarized in #1, set up the project skeleton with the elements needed to build the service, and configure the development environment.
## Reference
- https://start.spring.io/
|
1.0
|
[์ด๋๋ฏผ] Getting started with the Spring Boot project - Based on the content summarized in #1, set up the project skeleton with the elements needed to build the service, and configure the development environment.
## Reference
- https://start.spring.io/
|
non_process
|
getting started with the spring boot project based on the content summarized set up the project skeleton with the elements needed to build the service and configure the development environment reference
| 0
|
66,623
| 7,008,730,606
|
IssuesEvent
|
2017-12-19 16:36:00
|
geosolutions-it/MapStore2
|
https://api.github.com/repos/geosolutions-it/MapStore2
|
closed
|
Openlayers moves marker when click on vector data
|
bug In Test Priority: Blocker Project: C040 Tested
|
### Description
Clicking on map with Openlayers moves the marker to the first coordinate of the clicked feature. This is due to the changes for annotations.
### In case of Bug (otherwise remove this paragraph)
*Browser Affected*
any
*Steps to reproduce*
- Open a map with openlayers
- Add a shape file to the map (polygon or lines)
- Click at the center of the polygon/line
*Expected Result*
- FeatureInfo marker is on the clicked point
*Current Result*
- The marker is moved on the first coordinate of the vector.
### Other useful information (optional):
Tested with [this data](http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/physical/ne_110m_rivers_lake_centerlines.zip)
- Here is a sample screenshot:

|
2.0
|
Openlayers moves marker when click on vector data - ### Description
Clicking on map with Openlayers moves the marker to the first coordinate of the clicked feature. This is due to the changes for annotations.
### In case of Bug (otherwise remove this paragraph)
*Browser Affected*
any
*Steps to reproduce*
- Open a map with openlayers
- Add a shape file to the map (polygon or lines)
- Click at the center of the polygon/line
*Expected Result*
- FeatureInfo marker is on the clicked point
*Current Result*
- The marker is moved on the first coordinate of the vector.
### Other useful information (optional):
Tested with [this data](http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/physical/ne_110m_rivers_lake_centerlines.zip)
- Here is a sample screenshot:

|
non_process
|
openlayers moves marker when click on vector data description clicking on map with openlayers moves the marker to the first coordinate of the clicked feature this is due to the changes for annotations in case of bug otherwise remove this paragraph browser affected any steps to reproduce open a map with openlayers add a shape file to the map polygon or lines click at the center of the polygon line expected result featureinfo marker is on the clicked point current result the marker is moved on the first coordinate of the vector other useful information optional tested with here a sample screenshot
| 0
|
510,543
| 14,792,632,377
|
IssuesEvent
|
2021-01-12 14:58:40
|
containrrr/watchtower
|
https://api.github.com/repos/containrrr/watchtower
|
opened
|
In v1.1.6, Notifications Fail with Insecure (HTTP) Gotify URL
|
Priority: Medium Status: Available Type: Bug
|
**Describe the bug**
On v1.1.6, Gotify notifications fail when configured with HTTP URL.
**To Reproduce**
Steps to reproduce the behavior:
1. Configure Watchtower to send notifications to insecure (HTTP) Gotify URL.
2. Trigger notification
3. Watchtower attempts to send via HTTPS & it fails.
**Expected behavior**
Expectation would be that notifications would be sent to HTTP URL and not HTTPS URL.
**Screenshots**
v1.1.5:
time="2021-01-12T08:41:03-06:00" level=warning msg="Using an HTTP url for Gotify is insecure"
time="2021-01-12T08:41:04-06:00" level=info msg="Starting Watchtower and scheduling first run: 2021-01-13 06:15:00 -0600 CST"
v1.1.6:
time="2021-01-12T08:48:40-06:00" level=warning msg="Using an HTTP url for Gotify is insecure"
time="2021-01-12T08:48:41-06:00" level=info msg="Starting Watchtower and scheduling first run: 2021-01-13 06:15:00 -0600 CST"
Failed to send notification via shoutrrr (url=gotify://gotify/<removed>): failed to send notification to Gotify: Post "https://gotify/message?token=<removed>": dial tcp 172.21.0.17:443: connect: connection refused
**Environment**
Debian 10
docker-compose version 1.21.0-3
docker.io version 18.09.1+dfsg1-7.1+deb10u2
<details>
<summary><b> Logs from running watchtower with the <code>--debug</code> option </b></summary>
```
time="2021-01-12T08:56:41-06:00" level=debug
time="2021-01-12T08:56:41-06:00" level=warning msg="Using an HTTP url for Gotify is insecure"
time="2021-01-12T08:56:41-06:00" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized."
time="2021-01-12T08:56:42-06:00" level=debug msg="Retrieving running containers"
time="2021-01-12T08:56:42-06:00" level=debug msg="There are no additional watchtower containers"
time="2021-01-12T08:56:42-06:00" level=debug msg="Watchtower HTTP API skipped."
time="2021-01-12T08:56:42-06:00" level=info msg="Starting Watchtower and scheduling first run: 2021-01-13 06:15:00 -0600 CST"
Failed to send notification via shoutrrr (url=gotify://gotify/<removed>): failed to send notification to Gotify: Post "https://gotify/message?token=<removed>": dial tcp 172.21.0.17:443: connect: connection refused
```
</details>
**Additional context**
Downgrading back to v1.1.5 resolves this for now.
|
1.0
|
In v1.1.6, Notifications Fail with Insecure (HTTP) Gotify URL - **Describe the bug**
On v1.1.6, Gotify notifications fail when configured with HTTP URL.
**To Reproduce**
Steps to reproduce the behavior:
1. Configure Watchtower to send notifications to insecure (HTTP) Gotify URL.
2. Trigger notification
3. Watchtower attempts to send via HTTPS & it fails.
**Expected behavior**
Expectation would be that notifications would be sent to HTTP URL and not HTTPS URL.
**Screenshots**
v1.1.5:
time="2021-01-12T08:41:03-06:00" level=warning msg="Using an HTTP url for Gotify is insecure"
time="2021-01-12T08:41:04-06:00" level=info msg="Starting Watchtower and scheduling first run: 2021-01-13 06:15:00 -0600 CST"
v1.1.6:
time="2021-01-12T08:48:40-06:00" level=warning msg="Using an HTTP url for Gotify is insecure"
time="2021-01-12T08:48:41-06:00" level=info msg="Starting Watchtower and scheduling first run: 2021-01-13 06:15:00 -0600 CST"
Failed to send notification via shoutrrr (url=gotify://gotify/<removed>): failed to send notification to Gotify: Post "https://gotify/message?token=<removed>": dial tcp 172.21.0.17:443: connect: connection refused
**Environment**
Debian 10
docker-compose version 1.21.0-3
docker.io version 18.09.1+dfsg1-7.1+deb10u2
<details>
<summary><b> Logs from running watchtower with the <code>--debug</code> option </b></summary>
```
time="2021-01-12T08:56:41-06:00" level=debug
time="2021-01-12T08:56:41-06:00" level=warning msg="Using an HTTP url for Gotify is insecure"
time="2021-01-12T08:56:41-06:00" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized."
time="2021-01-12T08:56:42-06:00" level=debug msg="Retrieving running containers"
time="2021-01-12T08:56:42-06:00" level=debug msg="There are no additional watchtower containers"
time="2021-01-12T08:56:42-06:00" level=debug msg="Watchtower HTTP API skipped."
time="2021-01-12T08:56:42-06:00" level=info msg="Starting Watchtower and scheduling first run: 2021-01-13 06:15:00 -0600 CST"
Failed to send notification via shoutrrr (url=gotify://gotify/<removed>): failed to send notification to Gotify: Post "https://gotify/message?token=<removed>": dial tcp 172.21.0.17:443: connect: connection refused
```
</details>
**Additional context**
Downgrading back to v1.1.5 resolves this for now.
|
non_process
|
in notifications fail with insecure http gotify url describe the bug on gotify notifications fail when configured with http url to reproduce steps to reproduce the behavior configure watchtower to send notifications to insecure http gotify url trigger notification watchtower attempts to send via https it fails expected behavior expectation would be that notifications would be sent to http url and not https url screenshots time level warning msg using an http url for gotify is insecure time level info msg starting watchtower and scheduling first run cst time level warning msg using an http url for gotify is insecure time level info msg starting watchtower and scheduling first run cst failed to send notification via shoutrrr url gotify gotify failed to send notification to gotify post dial tcp connect connection refused environment debian docker compose version docker io version logs from running watchtower with the debug option time level debug time level warning msg using an http url for gotify is insecure time level debug msg sleeping for a second to ensure the docker api client has been properly initialized time level debug msg retrieving running containers time level debug msg there are no additional watchtower containers time level debug msg watchtower http api skipped time level info msg starting watchtower and scheduling first run cst failed to send notification via shoutrrr url gotify gotify failed to send notification to gotify post dial tcp connect connection refused additional context downgrading back to resolves this for now
| 0
|
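The watchtower record above describes a regression where a user-configured `http://` Gotify URL was effectively upgraded to `https://` when sending. A sketch of scheme-preserving endpoint construction with `urllib.parse` (illustrative only; this is not watchtower's or shoutrrr's actual code, and `/message?token=` mirrors the log lines in the report):

```python
from urllib.parse import urlsplit, urlunsplit, urlencode

def build_endpoint(base_url: str, token: str) -> str:
    # Preserve whatever scheme the user configured (http or https);
    # the v1.1.6 regression effectively forced "https" here.
    parts = urlsplit(base_url)
    query = urlencode({"token": token})
    return urlunsplit((parts.scheme, parts.netloc, "/message", query, ""))
```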
186,075
| 15,045,645,658
|
IssuesEvent
|
2021-02-03 05:53:33
|
apache/buildstream
|
https://api.github.com/repos/apache/buildstream
|
closed
|
message format in user configuration is undocumented
|
bug documentation
|
[See original issue on GitLab](https://gitlab.com/BuildStream/buildstream/-/issues/510)
In GitLab by [[Gitlab user @tristanvb]](https://gitlab.com/tristanvb) on Jul 25, 2018, 13:32
## Summary
[//]: # (Summarize the bug encountered concisely)
While inspecting #509, we noticed that the `message-format` configuration is not properly documented.
We need to explain how this works in the [user configuration docs](http://buildstream.gitlab.io/buildstream/using_config.html), and ensure that we've listed and explained all of the possible identifiers which are valid for the message format.
The identifiers can be gleaned from the source currently at: https://gitlab.com/BuildStream/buildstream/blob/master/buildstream/_frontend/widget.py#L328
|
1.0
|
message format in user configuration is undocumented - [See original issue on GitLab](https://gitlab.com/BuildStream/buildstream/-/issues/510)
In GitLab by [[Gitlab user @tristanvb]](https://gitlab.com/tristanvb) on Jul 25, 2018, 13:32
## Summary
[//]: # (Summarize the bug encountered concisely)
While inspecting #509, we noticed that the `message-format` configuration is not properly documented.
We need to explain how this works in the [user configuration docs](http://buildstream.gitlab.io/buildstream/using_config.html), and ensure that we've listed and explained all of the possible identifiers which are valid for the message format.
The identifiers can be gleaned from the source currently at: https://gitlab.com/BuildStream/buildstream/blob/master/buildstream/_frontend/widget.py#L328
|
non_process
|
message format in user configuration is undocumented in gitlab by on jul summary summarize the bug encountered concisely while inspecting we noticed that the message format configuration is not properly documented we need to explain how this works in the and ensure that we ve listed and explained all of the possible identifiers which are valid for the message format the identifiers can be gleaned from the source currently at
| 0
|
12,120
| 14,740,711,311
|
IssuesEvent
|
2021-01-07 09:30:53
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Keener - Error Email - Accept invitation/Edit
|
anc-process anp-1 ant-bug has attachment
|
In GitLab by @kdjstudios on Nov 29, 2018, 08:45
**Submitted by:** Kyle
**Helpdesk:** NA
**Server:** External
**Client/Site:** Keener
**Account:** NA
**Issue:**
We received this error email about 10 times over the course of two hours this morning.
[SA_Billing_Error_Report_invitations_edit__ActionControllerActionControllerError__Cannot_redirect_to_nil_.msg](/uploads/f7fa953ecd0a34a9c17deb77115af94a/SA_Billing_Error_Report_invitations_edit__ActionControllerActionControllerError__Cannot_redirect_to_nil_.msg)
May we please find the cause of this? So far we have not received an HD ticket from the client, so I am unsure what is happening in the user interface to generate this error. (I would presume that this error is created when an existing user clicks the 'accept' button in the invite email, rather than the login link. Is that correct?)
|
1.0
|
Keener - Error Email - Accept invitation/Edit - In GitLab by @kdjstudios on Nov 29, 2018, 08:45
**Submitted by:** Kyle
**Helpdesk:** NA
**Server:** External
**Client/Site:** Keener
**Account:** NA
**Issue:**
We received this error email about 10 times over the course of two hours this morning.
[SA_Billing_Error_Report_invitations_edit__ActionControllerActionControllerError__Cannot_redirect_to_nil_.msg](/uploads/f7fa953ecd0a34a9c17deb77115af94a/SA_Billing_Error_Report_invitations_edit__ActionControllerActionControllerError__Cannot_redirect_to_nil_.msg)
May we please find the cause to this? So far we have not received a HD ticket from the client, so I am unsure on what is happening in the user interface to generate this error. (I would presume, that this error is created when a existing user clicks the 'accept' button the the invite email, rather then the login link. Is that correct?)
|
process
|
keener error email accept invitation edit in gitlab by kdjstudios on nov submitted by kyle helpdesk na server external client site keener account na issue we received this error email about times over the course of two hours this morning uploads sa billing error report invitations edit actioncontrolleractioncontrollererror cannot redirect to nil msg may we please find the cause to this so far we have not received a hd ticket from the client so i am unsure on what is happening in the user interface to generate this error i would presume that this error is created when a existing user clicks the accept button the the invite email rather then the login link is that correct
| 1
|
43,873
| 5,575,578,056
|
IssuesEvent
|
2017-03-28 02:37:54
|
infiniteautomation/ma-core-public
|
https://api.github.com/repos/infiniteautomation/ma-core-public
|
closed
|
Event Detector - Missing Definition Handling
|
Enhancement Ready for Testing
|
Ensure that the existing event detector types match the legacy types for JSON import and also reading from the DB.
Here is some code to perform checks at point init startup:
See commit: edc42f0e3eb128b7feff29c1373db066082f87ce
|
1.0
|
Event Detector - Missing Definition Handling - Ensure that the existing event detector types match the legacy types for JSON import and also reading from the DB.
Here is some code to perform checks at point init startup:
See commit: edc42f0e3eb128b7feff29c1373db066082f87ce
|
non_process
|
event detector missing definition handling ensure that the existing event detector types match the legacy types for json import and also reading from the db here is some code to perform checks at point init startup see commit
| 0
|
10,947
| 13,756,384,392
|
IssuesEvent
|
2020-10-06 19:51:14
|
paul-buerkner/brms
|
https://api.github.com/repos/paul-buerkner/brms
|
closed
|
Use extract_draws to get predictions of specified smooth terms
|
feature post-processing
|
Hi Paul,
This is in reference to this issue. https://discourse.mc-stan.org/t/calculate-the-first-derivative-and-its-posterior-distribution-of-an-estimated-spline-trend/10577
I thought I had resolved it; however, the original author of the paper whose study I was trying to replicate with Bayesian inference pointed out that I need to work at the level of smooths.
I'm trying to calculate the first derivative of the estimated trend, by making predictions at the x predictor separated by a small epsilon.
Is there currently a way of getting the linear sum of ONLY the spline term predictions? (I don't want the fixed effect and intercept in my predictions). But I do want the sampling variance within the spline term predictions.
Thank you!
|
1.0
|
Use extract_draws to get predictions of specified smooth terms - Hi Paul,
This is in reference to this issue. https://discourse.mc-stan.org/t/calculate-the-first-derivative-and-its-posterior-distribution-of-an-estimated-spline-trend/10577
I thought I had resolved it; however, the original author of the paper whose study I was trying to replicate with Bayesian inference pointed out that I need to work at the level of smooths.
I'm trying to calculate the first derivative of the estimated trend, by making predictions at the x predictor separated by a small epsilon.
Is there currently a way of getting the linear sum of ONLY the spline term predictions? (I don't want the fixed effect and intercept in my predictions). But I do want the sampling variance within the spline term predictions.
Thank you!
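The epsilon-based derivative described above amounts to a central finite difference. A minimal sketch of the idea (generic Python, not brms/R code; `predict` is a hypothetical stand-in for the smooth-term predictions discussed above):

```python
import math

def first_derivative(predict, x, eps=1e-4):
    # Central finite difference: predictions at x +/- eps, divided by 2*eps.
    return (predict(x + eps) - predict(x - eps)) / (2.0 * eps)

# Sanity check against a known trend: d/dx sin(x) = cos(x)
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
errors = [abs(first_derivative(math.sin, x) - math.cos(x)) for x in xs]
print(max(errors) < 1e-6)  # True
```

In the brms setting the same recipe would be applied per posterior draw, giving a posterior distribution of the derivative rather than a single curve.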
|
process
|
use extract draws to get predictions of specified smooth terms hi paul this is in reference to this issue i thought i had resolved it however it was pointed out by the orignal author of the paper whose study i was trying to replicate in a bayesian inference that i need to work at the level of smooths i m trying to calculate the first derivative of the estimated trend by making predictions at the x predictor separated by a small epsilon is there currently a way of getting the linear sum of only the spline term predictions i don t want the fixed effect and intercept in my predictions but i do want the sampling variance within the spline term predictions thank you
| 1
|
3,878
| 6,726,596,822
|
IssuesEvent
|
2017-10-17 10:26:39
|
yarnpkg/yarn
|
https://api.github.com/repos/yarnpkg/yarn
|
closed
|
Support --registry flag from CLI commands
|
bug-configuration cat-compatibility cat-enhancement good first issue help wanted triaged
|
<!-- *Before creating an issue please make sure you are using the latest version of yarn.* -->
**Do you want to request a _feature_ or report a _bug_?**
yes, feature
**What is the current behavior?**
Only global config is available for setting registry
**What is the expected behavior?**
yarn add/install --registry https://custom.registry works
**Please mention your node.js, yarn and operating system version.**
Node. 6.4.0
macOS. 10.11
yarn. 0.15.1
|
True
|
Support --registry flag from CLI commands - <!-- *Before creating an issue please make sure you are using the latest version of yarn.* -->
**Do you want to request a _feature_ or report a _bug_?**
yes, feature
**What is the current behavior?**
Only global config is available for setting registry
**What is the expected behavior?**
yarn add/install --registry https://custom.registry works
**Please mention your node.js, yarn and operating system version.**
Node. 6.4.0
macOS. 10.11
yarn. 0.15.1
|
non_process
|
support registry flag from cli commands do you want to request a feature or report a bug yes feature what is the current behavior only global config is available for setting registry what is the expected behavior yarg add install registry works please mention your node js yarn and operating system version node macos yarn
| 0
|
43,568
| 5,544,127,598
|
IssuesEvent
|
2017-03-22 18:24:53
|
chamilo/chamilo-lms
|
https://api.github.com/repos/chamilo/chamilo-lms
|
closed
|
Sesión - Lista por categoría
|
Bug Requires testing
|
### Expected behavior / Resultado esperado / Résultat attendu
Cuando doy click en el enlace de cantidad de sesiones en la página de lista de categorías de sesiones, debería listarme las sesiones que están dentro de una categoría.
### Actual behavior / Resultado real / Résultat réel
Me parece la lista vacía.
### Steps to reproduce / Pasos para reproducir / Étapes pour reproduire
- Crear una categoría de sesión
- Crear una sesión dentro de esta categoría
- Ir a la lista de sesiones /main/session/session_category_list.php
- Entrar a una categoría por el enlace que indica la cantidad de sesiones /main/session/session_list.php?id_category=3
- No lista las sesiones de esta categoría
Estoy usando Chamilo 1.11.x del 14 Enero 2017
|
1.0
|
Sesión - Lista por categoría - ### Expected behavior / Resultado esperado / Résultat attendu
Cuando doy click en el enlace de cantidad de sesiones en la página de lista de categorías de sesiones, debería listarme las sesiones que están dentro de una categoría.
### Actual behavior / Resultado real / Résultat réel
Me parece la lista vacía.
### Steps to reproduce / Pasos para reproducir / Étapes pour reproduire
- Crear una categoría de sesión
- Crear una sesión dentro de esta categoría
- Ir a la lista de sesiones /main/session/session_category_list.php
- Entrar a una categoría por el enlace que indica la cantidad de sesiones /main/session/session_list.php?id_category=3
- No lista las sesiones de esta categoría
Estoy usando Chamilo 1.11.x del 14 Enero 2017
|
non_process
|
sesión lista por categoría expected behavior resultado esperado résultat attendu cuando doy click en el enlace de cantidad de sesiones en la página de lista de categorías de sesiones debería listarme las sesiones que estan dentro de una categoría actual behavior resultado real résultat réel me parece la lista vacia steps to reproduce pasos para reproducir étapes pour reproduire crear una categoría de sesión crear una sesión dentro de esta caategoría ir a la lista de sesiones main session session category list php entrar a una categoría por el enlace que indica la cantidad de sesiones main session session list php id category no lista las sesiones de esta categoría estoy usando chamilo x del enero
| 0
|
221,005
| 24,590,398,680
|
IssuesEvent
|
2022-10-14 01:13:40
|
faizulho/vue-profile-website
|
https://api.github.com/repos/faizulho/vue-profile-website
|
opened
|
CVE-2022-37601 (High) detected in loader-utils-0.2.17.tgz, loader-utils-1.4.0.tgz
|
security vulnerability
|
## CVE-2022-37601 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>loader-utils-0.2.17.tgz</b>, <b>loader-utils-1.4.0.tgz</b></p></summary>
<p>
<details><summary><b>loader-utils-0.2.17.tgz</b></p></summary>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-0.2.17.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-0.2.17.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/html-webpack-plugin/node_modules/loader-utils/package.json</p>
<p>
Dependency Hierarchy:
- cli-service-4.5.13.tgz (Root Library)
- html-webpack-plugin-3.2.0.tgz
- :x: **loader-utils-0.2.17.tgz** (Vulnerable Library)
</details>
<details><summary><b>loader-utils-1.4.0.tgz</b></p></summary>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-1.4.0.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-1.4.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/loader-utils/package.json</p>
<p>
Dependency Hierarchy:
- cli-plugin-babel-4.5.8.tgz (Root Library)
- babel-loader-8.1.0.tgz
- :x: **loader-utils-1.4.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in function parseQuery in parseQuery.js in webpack loader-utils 2.0.0 via the name variable in parseQuery.js.
<p>Publish Date: 2022-10-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37601>CVE-2022-37601</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-12</p>
<p>Fix Resolution (loader-utils): 2.0.0</p>
<p>Direct dependency fix Resolution (@vue/cli-plugin-babel): 5.0.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-37601 (High) detected in loader-utils-0.2.17.tgz, loader-utils-1.4.0.tgz - ## CVE-2022-37601 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>loader-utils-0.2.17.tgz</b>, <b>loader-utils-1.4.0.tgz</b></p></summary>
<p>
<details><summary><b>loader-utils-0.2.17.tgz</b></p></summary>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-0.2.17.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-0.2.17.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/html-webpack-plugin/node_modules/loader-utils/package.json</p>
<p>
Dependency Hierarchy:
- cli-service-4.5.13.tgz (Root Library)
- html-webpack-plugin-3.2.0.tgz
- :x: **loader-utils-0.2.17.tgz** (Vulnerable Library)
</details>
<details><summary><b>loader-utils-1.4.0.tgz</b></p></summary>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-1.4.0.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-1.4.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/loader-utils/package.json</p>
<p>
Dependency Hierarchy:
- cli-plugin-babel-4.5.8.tgz (Root Library)
- babel-loader-8.1.0.tgz
- :x: **loader-utils-1.4.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in function parseQuery in parseQuery.js in webpack loader-utils 2.0.0 via the name variable in parseQuery.js.
<p>Publish Date: 2022-10-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-37601>CVE-2022-37601</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-12</p>
<p>Fix Resolution (loader-utils): 2.0.0</p>
<p>Direct dependency fix Resolution (@vue/cli-plugin-babel): 5.0.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in loader utils tgz loader utils tgz cve high severity vulnerability vulnerable libraries loader utils tgz loader utils tgz loader utils tgz utils for webpack loaders library home page a href path to dependency file package json path to vulnerable library node modules html webpack plugin node modules loader utils package json dependency hierarchy cli service tgz root library html webpack plugin tgz x loader utils tgz vulnerable library loader utils tgz utils for webpack loaders library home page a href path to dependency file package json path to vulnerable library node modules loader utils package json dependency hierarchy cli plugin babel tgz root library babel loader tgz x loader utils tgz vulnerable library found in base branch main vulnerability details prototype pollution vulnerability in function parsequery in parsequery js in webpack loader utils via the name variable in parsequery js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution loader utils direct dependency fix resolution vue cli plugin babel step up your open source security game with mend
| 0
|
10,579
| 13,389,371,511
|
IssuesEvent
|
2020-09-02 18:46:40
|
jgraley/inferno-cpp2v
|
https://api.github.com/repos/jgraley/inferno-cpp2v
|
closed
|
Make variable ordering match
|
Constraint Processing
|
In the `SimpleSolver`, I think the way variables are determined is giving a breadth-first ordering relative to the program tree. But everything else, and in particular the `DecidedCompare()` recursive walk, is depth-first. I think with a recursive function we can get those variables to appear in the same order. This will make comparison of the conjecture vs the simple solver easier.
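The mismatch described above — breadth-first variable discovery vs a depth-first recursive walk — is easy to see on a toy tree. This is a generic sketch, not Inferno's actual `SimpleSolver` code:

```python
from collections import deque

tree = {"root": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": []}

def breadth_first(tree, start="root"):
    order, queue = [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(tree[node])
    return order

def depth_first(tree, node="root"):
    # Recursive walk, analogous in shape to a DecidedCompare()-style traversal.
    order = [node]
    for child in tree[node]:
        order.extend(depth_first(tree, child))
    return order

print(breadth_first(tree))  # ['root', 'a', 'b', 'c', 'd']
print(depth_first(tree))    # ['root', 'a', 'c', 'd', 'b']
```

Switching the variable-discovery pass to the same recursive shape makes the two orders coincide.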
|
1.0
|
Make variable ordering match - In the `SimpleSolver`, I think the way variables are determined is giving a breadth-first ordering relative to the program tree. But everything else, and in particular the `DecidedCompare()` recursive walk, is depth-first. I think with a recursive function we can get those variables to appear in the same order. This will make comparison of the conjecture vs the simple solver easier.
|
process
|
make variable ordering match in the simplesolver i think the way variables are determined is giving a breadth first ordering relative to the program tree but everything else and in particular the decidedcompare recursive walk is in depth first i think with a recursive function we can get those variables to appear in the same order this will make comparison of conjecture vs simple solver easier
| 1
|
5,166
| 7,940,822,784
|
IssuesEvent
|
2018-07-10 00:40:49
|
brucemiller/LaTeXML
|
https://api.github.com/repos/brucemiller/LaTeXML
|
closed
|
Incorrect references to tables, figures and equations
|
bug postprocessing
|
Affects XML and HTML document collections (i.e. split into multiple documents) generated with LaTeXML 0.8.2 using latexml LaTeX -> XML followed by latexmlpost XML -> collection of XML/HTML documents with splitnaming set to labelrelative and urlstyle set to server.
href attributes of hyperlinks (HTML)/refs (XML) to tables/figures/equations in index.html/index.xml pages do not get prefixed with index.html/index.xml. This gets resolved on the server if referenced from another branch of the tree, but does not work in the descendants of the page, which end up with references to local same-page anchors (i.e. prefixed with "#"), which do not exist.
Using my test files and the commands listed below, in the HTML collection, links to the `table/figure/equation in minimalRefs-html5-html/Introduction/chap_testChap1/ssec_test_section1/index.html` from a subsection of the same section, e.g. `/minimalRefs-html5-html/Introduction/chap_testChap1/ssec_test_section1/ssec_testsub.html`, don't work, as hrefs contain strings like "#Ch1.T1" and get interpreted as `minimalRefs-html5-html/Introduction/chap_testChap1/ssec_test_section1/ssec_testsub.html#Ch1.T1.`
A reference from another chapter, e.g. `href="../../../Introduction/chap_testChap1/ssec_test_section1/#Ch1.T1"` works on a server.
References to objects occurring in pages, which are not named "index" get correct hrefs and work fine, e.g. href in a hyperlink from `minimalRefs-html5-html/Introduction/chap_testChap1/ssec_test_section1/ssec_testsub2.html` linking to a table in `minimalRefs-html5-html/Introduction/chap_testChap1/ssec_test_section1/ssec_testsub.html#Ch1.T2` is
`href="ssec_testsub.html#Ch1.T2.`
I attach the files used for the test.
Commands used:
```
latexml --dest=minimalRefsTest.xml minimalRefsTest.tex
latexmlpost --dest=minimalRefs-html5-html/index.html --urlstyle=server --navigationtoc=context --splitat=subsection --splitnaming=labelrelative --css=LaTeXML-navbar-left.css --format=html5 minimalRefsTest.xml
```
or
```
latexmlpost --dest=minimalRefs-xml-xml/index.xml --urlstyle=server --navigationtoc=context --splitat=subsection --splitnaming=labelrelative --css=LaTeXML-navbar-left.css --format=xml minimalRefsTest.xml
```
[minimalRefsTest.zip](https://github.com/brucemiller/LaTeXML/files/2123059/minimalRefsTest.zip)
|
1.0
|
Incorrect references to tables, figures and equations - Affects XML and HTML document collections (i.e. split into multiple documents) generated with LaTeXML 0.8.2 using latexml LaTeX -> XML followed by latexmlpost XML -> collection of XML/HTML documents with splitnaming set to labelrelative and urlstyle set to server.
href attributes of hyperlinks (HTML)/refs (XML) to tables/figures/equations in index.html/index.xml pages do not get prefixed with index.html/index.xml. This gets resolved on the server if referenced from another branch of the tree, but does not work in the descendants of the page, which end up with references to local same-page anchors (i.e. prefixed with "#"), which do not exist.
Using my test files and the commands listed below, in the HTML collection, links to the `table/figure/equation in minimalRefs-html5-html/Introduction/chap_testChap1/ssec_test_section1/index.html` from a subsection of the same section, e.g. `/minimalRefs-html5-html/Introduction/chap_testChap1/ssec_test_section1/ssec_testsub.html`, don't work, as hrefs contain strings like "#Ch1.T1" and get interpreted as `minimalRefs-html5-html/Introduction/chap_testChap1/ssec_test_section1/ssec_testsub.html#Ch1.T1.`
A reference from another chapter, e.g. `href="../../../Introduction/chap_testChap1/ssec_test_section1/#Ch1.T1"` works on a server.
References to objects occurring in pages, which are not named "index" get correct hrefs and work fine, e.g. href in a hyperlink from `minimalRefs-html5-html/Introduction/chap_testChap1/ssec_test_section1/ssec_testsub2.html` linking to a table in `minimalRefs-html5-html/Introduction/chap_testChap1/ssec_test_section1/ssec_testsub.html#Ch1.T2` is
`href="ssec_testsub.html#Ch1.T2.`
I attach the files used for the test.
Commands used:
```
latexml --dest=minimalRefsTest.xml minimalRefsTest.tex
latexmlpost --dest=minimalRefs-html5-html/index.html --urlstyle=server --navigationtoc=context --splitat=subsection --splitnaming=labelrelative --css=LaTeXML-navbar-left.css --format=html5 minimalRefsTest.xml
```
or
```
latexmlpost --dest=minimalRefs-xml-xml/index.xml --urlstyle=server --navigationtoc=context --splitat=subsection --splitnaming=labelrelative --css=LaTeXML-navbar-left.css --format=xml minimalRefsTest.xml
```
[minimalRefsTest.zip](https://github.com/brucemiller/LaTeXML/files/2123059/minimalRefsTest.zip)
|
process
|
incorrect references to tables figures and equations affects xml and html document collections i e split into multiple documents generated with latexml using latexml latex xml followed by latexmlpost xml collection of xml html documents with splitnaming set to labelrelative and urlstyle set to server href attributes of hyperlinks html refs xml to tables figures equations in index html index xml pages do not get prefixed with index html index xml this gets resolved on the server if referenced from another branch of the tree but does work not in the descendants of the page which end up with the references to local same page anchors i e prefixed with which do not exist using my test files and the commands listed below in html collection links to the the table figure equation in minimalrefs html introduction chap ssec test index html from subsection of the same section e g minimalrefs html introduction chap ssec test ssec testsub html don t work as hrefs contain strings like and get interpreted as minimalrefs html introduction chap ssec test ssec testsub html a reference from another chapter e g href introduction chap ssec test works on a server references to objects occurring in pages which are not named index get correct hrefs and work fine e g href in a hyperlink from minimalrefs html introduction chap ssec test ssec html linking to a table in minimalrefs html introduction chap ssec test ssec testsub html is href ssec testsub html i attach the files used for the test commands used latexml dest minimalrefstest xml minimalrefstest tex latexmlpost dest minimalrefs html index html urlstyle server navigationtoc context splitat subsection splitnaming labelrelative css latexml navbar left css format minimalrefstest xml or latexmlpost dest minimalrefs xml xml index xml urlstyle server navigationtoc context splitat subsection splitnaming labelrelative css latexml navbar left css format xml minimalrefstest xml
| 1
|
34,593
| 9,417,936,590
|
IssuesEvent
|
2019-04-10 17:57:46
|
eclipse/openj9
|
https://api.github.com/repos/eclipse/openj9
|
closed
|
Regenerate test jobs to Add BUILD_TYPE
|
comp:build
|
I have created views with regexes to help sort the new separated jobs.
Please regenerate all the Test jobs to include the build type suffix
1. Nightly
1. OMR
1. Personal
1. Release
ex. Test-sanity.functional-JDK8-linux_x86-64_cmprssptrs_Nightly
Related #5182
cc @llxia
|
1.0
|
Regenerate test jobs to Add BUILD_TYPE - I have created views with regexes to help sort the new separated jobs.
Please regenerate all the Test jobs to include the build type suffix
1. Nightly
1. OMR
1. Personal
1. Release
ex. Test-sanity.functional-JDK8-linux_x86-64_cmprssptrs_Nightly
Related #5182
cc @llxia
|
non_process
|
regenerate test jobs to add build type i have created views with regexes to help sort the new separated jobs please regenerate all the test jobs to include the build type suffix nightly omr personal release ex test sanity functional linux cmprssptrs nightly related cc llxia
| 0
|
10,883
| 13,653,763,297
|
IssuesEvent
|
2020-09-27 14:18:35
|
raxod502/straight.el
|
https://api.github.com/repos/raxod502/straight.el
|
closed
|
"Process failed" error is very poor
|
error handling external command messaging process buffer ux
|
Currently when a subprocess invocation fails unexpectedly, you get
```
(error "Failed to run \"git\"; see buffer *straight-process*")
```
which is pretty bad. Firstly we should show part of the output from the subprocess in the minibuffer. Secondly we should just pop to the `*straight-process*` buffer automatically, instead of telling the user to do it themselves.
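A hedged sketch of the behaviour suggested above — surface the tail of the subprocess output in the error itself rather than only naming a log buffer. Illustrated in Python rather than Emacs Lisp; `run_or_explain` is a hypothetical helper, not straight.el's actual code:

```python
import subprocess
import sys

def run_or_explain(cmd):
    # Run a command; on failure, raise an error carrying the last few
    # lines of the command's combined output.
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        tail = (proc.stdout + proc.stderr).strip().splitlines()[-3:]
        raise RuntimeError("Failed to run %r: %s" % (cmd[0], " | ".join(tail)))
    return proc.stdout

# Simulate a failing subprocess; its output ends up in the exception text.
try:
    run_or_explain([sys.executable, "-c", "print('boom'); raise SystemExit(1)"])
except RuntimeError as err:
    print("boom" in str(err))  # True
```

The Elisp version would additionally `pop-to-buffer` the `*straight-process*` buffer, as the issue suggests.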
|
1.0
|
"Process failed" error is very poor - Currently when a subprocess invocation fails unexpectedly, you get
```
(error "Failed to run \"git\"; see buffer *straight-process*")
```
which is pretty bad. Firstly we should show part of the output from the subprocess in the minibuffer. Secondly we should just pop to the `*straight-process*` buffer automatically, instead of telling the user to do it themselves.
|
process
|
process failed error is very poor currently when a subprocess invocation fails unexpectedly you get error failed to run git see buffer straight process which is pretty bad firstly we should show part of the output from the subprocess in the minibuffer secondly we should just pop to the straight process buffer automatically instead of telling the user to do it themselves
| 1
|
277,813
| 24,104,922,956
|
IssuesEvent
|
2022-09-20 06:32:06
|
jajm/koha-staff-interface-redesign
|
https://api.github.com/repos/jajm/koha-staff-interface-redesign
|
closed
|
Menu link in the top bar needs to be white on small screens
|
type: bug status: needs testing
|
When looking at the new GUI on a small screen, the menu icon now has green link text, which has poor contrast. It should be white, like the other entries there on bigger screens.
(search bar issue will be filed separately)

|
1.0
|
Menu link in the top bar needs to be white on small screens - When looking at the new GUI on a small screen, the menu icon now has green link text, which has poor contrast. It should be white, like the other entries there on bigger screens.
(search bar issue will be filed separately)

|
non_process
|
menu link in the top bar needs to be white on small screens when looking at the new gui on a small screen the menu icon has now green link text which has no good contrast it should be white as the other entries there on bigger screens search bar issue will be filed separately
| 0
|
426,932
| 12,390,441,462
|
IssuesEvent
|
2020-05-20 10:40:55
|
wulkano/getkap.co
|
https://api.github.com/repos/wulkano/getkap.co
|
closed
|
Use hazel update server for download link
|
Priority: High Type: Enhancement
|
Right now the download link is hard-coded to the latest version, I think we have a url somewhere that redirects to the latest release, we should use that instead.
|
1.0
|
Use hazel update server for download link - Right now the download link is hard-coded to the latest version, I think we have a url somewhere that redirects to the latest release, we should use that instead.
|
non_process
|
use hazel update server for download link right now the download link is hard coded to the latest version i think we have a url somewhere that redirects to the latest release we should use that instead
| 0
|
34,494
| 2,781,575,136
|
IssuesEvent
|
2015-05-06 14:05:53
|
SoylentNews/rehash
|
https://api.github.com/repos/SoylentNews/rehash
|
closed
|
Server Error when preview comment having char < 0x1f, or char==0x7f
|
Bug: Critical Priority: High
|
_From @marty-b on March 29, 2015 21:28_
Tried to submit a comment to: https://dev.soylentnews.org/article.pl?sid=15/03/29/1622202
1.) Viewed story
2.) Clicked on Reply button
3.) Entered comment title: "0x7f - POT"
4.) Selected "Plain Old Text" from the Drop-Down list
5.) Clicked on 'Preview' button.
6.) Got server error:
OK
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator, slash@dev.soylentnews.org and inform them of the time the error occurred, and anything you might have done that may have caused the error.
More information about this error may be available in the server error log.
7.) paulej72 provided this snippet from logs:
[Sun Mar 29 20:49:50 2015] [error] /comments.pl:ModPerl::ROOT::ModPerl::Registry::srv_soylentnews_2eorg_rehash_site_soylent_2dmainpage_htdocs_comments_2epl:/srv/soylentnews.org/rehash/site/soylent-mainpage/htdocs/comments.pl:506:cannot getSkin for empty skid='0' ;; Which was called by:ModPerl::ROOT::ModPerl::Registry::srv_soylentnews_2eorg_rehash_site_soylent_2dmainpage_htdocs_comments_2epl:/srv/soylentnews.org/rehash/site/soylent-mainpage/htdocs/comments.pl:329
[Sun Mar 29 20:51:00 2015] [error] Cannot decode string with wide characters at /srv/soylentnews.org/perl/lib/perl5/site_perl/5.20.1/x86_64-linux-thread-multi/Encode.pm line 241.\n
_Copied from original issue: SoylentNews/slashcode#435_
|
1.0
|
Server Error when preview comment having char < 0x1f, or char==0x7f - _From @marty-b on March 29, 2015 21:28_
Tried to submit a comment to: https://dev.soylentnews.org/article.pl?sid=15/03/29/1622202
1.) Viewed story
2.) Clicked on Reply button
3.) Entered comment title: "0x7f - POT"
4.) Selected "Plain Old Text" from the Drop-Down list
5.) Clicked on 'Preview' button.
6.) Got server error:
OK
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator, slash@dev.soylentnews.org and inform them of the time the error occurred, and anything you might have done that may have caused the error.
More information about this error may be available in the server error log.
7.) paulej72 provided this snippet from logs:
[Sun Mar 29 20:49:50 2015] [error] /comments.pl:ModPerl::ROOT::ModPerl::Registry::srv_soylentnews_2eorg_rehash_site_soylent_2dmainpage_htdocs_comments_2epl:/srv/soylentnews.org/rehash/site/soylent-mainpage/htdocs/comments.pl:506:cannot getSkin for empty skid='0' ;; Which was called by:ModPerl::ROOT::ModPerl::Registry::srv_soylentnews_2eorg_rehash_site_soylent_2dmainpage_htdocs_comments_2epl:/srv/soylentnews.org/rehash/site/soylent-mainpage/htdocs/comments.pl:329
[Sun Mar 29 20:51:00 2015] [error] Cannot decode string with wide characters at /srv/soylentnews.org/perl/lib/perl5/site_perl/5.20.1/x86_64-linux-thread-multi/Encode.pm line 241.\n
_Copied from original issue: SoylentNews/slashcode#435_
|
non_process
|
server error when preview comment having char or char from marty b on march tried to submit a comment to viewed story clicked on reply button entered comment title pot selected plain old text from the drop down list clicked on preview button got server error ok the server encountered an internal error or misconfiguration and was unable to complete your request please contact the server administrator slash dev soylentnews org and inform them of the time the error occurred and anything you might have done that may have caused the error more information about this error may be available in the server error log provided this snippet from logs comments pl modperl root modperl registry srv soylentnews rehash site soylent htdocs comments srv soylentnews org rehash site soylent mainpage htdocs comments pl cannot getskin for empty skid which was called by modperl root modperl registry srv soylentnews rehash site soylent htdocs comments srv soylentnews org rehash site soylent mainpage htdocs comments pl cannot decode string with wide characters at srv soylentnews org perl lib site perl linux thread multi encode pm line n copied from original issue soylentnews slashcode
| 0
|
1,814
| 4,561,746,316
|
IssuesEvent
|
2016-09-14 12:52:38
|
openvstorage/framework
|
https://api.github.com/repos/openvstorage/framework
|
closed
|
Rescan not possible if a volumedriver disk is broken
|
process_duplicate type_bug
|
When losing a write role and trying to replace it we get this problem:
```
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/helpers/decorators.py", line 301, in new_function
output = function(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/disk.py", line 212, in sync_with_reality
partition.delete()
File "/opt/OpenvStorage/ovs/dal/dataobject.py", line 700, in delete
raise LinkedObjectException('There {0} left in self.{1}'.format(multi, key))
LinkedObjectException: There are 4 items left in self.storagedrivers
```
|
1.0
|
Rescan not possible if a volumedriver disk is broken - When losing a write role and trying to replace it we get this problem:
```
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/helpers/decorators.py", line 301, in new_function
output = function(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/disk.py", line 212, in sync_with_reality
partition.delete()
File "/opt/OpenvStorage/ovs/dal/dataobject.py", line 700, in delete
raise LinkedObjectException('There {0} left in self.{1}'.format(multi, key))
LinkedObjectException: There are 4 items left in self.storagedrivers
```
|
process
|
rescan not possible if a volumedriver disk is broken when losing a write role and trying to replace it we get this problem traceback most recent call last file usr lib dist packages celery app trace py line in trace task r retval fun args kwargs file usr lib dist packages celery app trace py line in protected call return self run args kwargs file opt openvstorage ovs lib helpers decorators py line in new function output function args kwargs file opt openvstorage ovs lib disk py line in sync with reality partition delete file opt openvstorage ovs dal dataobject py line in delete raise linkedobjectexception there left in self format multi key linkedobjectexception there are items left in self storagedrivers
| 1
|
7,662
| 10,755,810,461
|
IssuesEvent
|
2019-10-31 09:55:17
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
opened
|
tasks from meetings inheritance
|
2.0.8 Process bug
|
go to meetings
open meeting add watcher and give him permissions
go to tasks tab and open task from meetings
result : the mission did not receive inheritance from the discussion
|
1.0
|
tasks from meetings inheritance - go to meetings
open meeting add watcher and give him permissions
go to tasks tab and open task from meetings
result : the mission did not receive inheritance from the discussion
|
process
|
tasks from meetings inheritance go to meetings open meeting add watcher and give him permissions go to tasks tab and open task from meetings result the mission did not receive inheritance from the discussion
| 1
|
11,348
| 14,170,012,457
|
IssuesEvent
|
2020-11-12 14:00:30
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Process tests do not return for 5 minutes
|
area-System.Diagnostics.Process untriaged
|
If you run the System.Diagnostics.Process tests locally on Windows, after the tests themselves complete there will be a 5 minute delay before `dotnet build /t:test` returns. this happens after this point:
```
=== TEST EXECUTION SUMMARY ===
System.Diagnostics.Process.Tests Total: 302, Errors: 0, Failed: 0, Skipped: 1, Time: 52.678s
----- end Wed 11/11/2020 10:12:30.44 ----- exit code 0 ----------------------------------------------------------
```
This is because MSBuild is waiting on the spawned cmd process that runs xunit, and that is waiting on two test processes that the tests created but did not terminate. Both of those were clearly created with `CreateProcessLong()` because they are doing a 5 minute sleep. Such processes are intended to be terminated by tests. It needs to be figured out which test isn't terminating them. I looked through the code and didn't see the issue; bisection may help.
After fixing that, there is another delay, this one in the `WaitAsyncForProcess()` test: it creates a process that sleeps for 30 seconds, and waits for it to exit. Probably instead the child process should wait to exit on some event set by the parent after the parent has called `WaitForExitAsync()`. Note this test is outerloop.
|
1.0
|
Process tests do not return for 5 minutes - If you run the System.Diagnostics.Process tests locally on Windows, after the tests themselves complete there will be a 5 minute delay before `dotnet build /t:test` returns. this happens after this point:
```
=== TEST EXECUTION SUMMARY ===
System.Diagnostics.Process.Tests Total: 302, Errors: 0, Failed: 0, Skipped: 1, Time: 52.678s
----- end Wed 11/11/2020 10:12:30.44 ----- exit code 0 ----------------------------------------------------------
```
This is because MSBuild is waiting on the spawned cmd process that runs xunit, and that is waiting on two test processes that the tests created but did not terminate. Both of those were clearly created with `CreateProcessLong()` because they are doing a 5 minute sleep. Such processes are intended to be terminated by tests. It needs to be figured out which test isn't terminating them. I looked through the code and didn't see the issue; bisection may help.
After fixing that, there is another delay, this one in the `WaitAsyncForProcess()` test: it creates a process that sleeps for 30 seconds, and waits for it to exit. Probably instead the child process should wait to exit on some event set by the parent after the parent has called `WaitForExitAsync()`. Note this test is outerloop.
|
process
|
process tests do not return for minutes if you run the system diagnostics process tests locally on windows after the tests themselves complete there will be a minute delay before dotnet build t test returns this happens after this point test execution summary system diagnostics process tests total errors failed skipped time end wed exit code this is because msbuild is waiting on the spawned cmd process that runs xunit and that is waiting on two test processes that the tests created but did not terminate both of those were clearly created with createprocesslong because they are doing a minute sleep such processes are intended to be terminated by tests it needs to be figured out which test isn t terminating them i looked through the code and didn t see the issue bisection may help after fixing that there is another delay this one in the waitasyncforprocess test it creates a process that sleeps for seconds and waits for it to exit probably instead the child process should wait to exit on some event set by the parent after the parent has called waitforexitasync note this test is outerloop
| 1
|
11,068
| 13,903,782,551
|
IssuesEvent
|
2020-10-20 07:44:23
|
googleapis/nodejs-error-reporting
|
https://api.github.com/repos/googleapis/nodejs-error-reporting
|
opened
|
chore: add api-logging team to codeowners
|
api: clouderrorreporting lang: nodejs priority: p2 type: process
|
- removed "stackdriver" brand
- added api-logging team to codeowners
|
1.0
|
chore: add api-logging team to codeowners - - removed "stackdriver" brand
- added api-logging team to codeowners
|
process
|
chore add api logging team to codeowners removed stackdriver brand added api logging team to codeowners
| 1
|
18,700
| 24,595,906,558
|
IssuesEvent
|
2022-10-14 08:18:51
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Big query] Form steps > Response records are not getting created in the views
|
Bug Blocker P0 Response datastore Process: Fixed Process: Tested dev
|
AR: Form steps > Response records are not getting created in the views
ER: Form steps > Response records should get created in the views and response tables
[Note: Issue should be fixed for both instruction step and form step questions]
|
2.0
|
[Big query] Form steps > Response records are not getting created in the views - AR: Form steps > Response records are not getting created in the views
ER: Form steps > Response records should get created in the views and response tables
[Note: Issue should be fixed for both instruction step and form step questions]
|
process
|
form steps response records are not getting created in the views ar form steps response records are not getting created in the views er form steps response records should get created in the views and response tables
| 1
|
19,038
| 25,042,549,729
|
IssuesEvent
|
2022-11-04 22:56:45
|
USGS-WiM/StreamStats
|
https://api.github.com/repos/USGS-WiM/StreamStats
|
opened
|
BP: Add user instructions
|
Batch Processor
|
Part of #1455
We should provide some user instructions on the form.
We may also want to provide content to the StreamStats team about the BP on the Help > User Manual and Help > FAQ sections.
|
1.0
|
BP: Add user instructions - Part of #1455
We should provide some user instructions on the form.
We may also want to provide content to the StreamStats team about the BP on the Help > User Manual and Help > FAQ sections.
|
process
|
bp add user instructions part of we should provide some user instructions on the form we may also want to provide content to the streamstats team about the bp on the help user manual and help faq sections
| 1
|
6,393
| 9,476,263,116
|
IssuesEvent
|
2019-04-19 14:31:05
|
cityofaustin/techstack
|
https://api.github.com/repos/cityofaustin/techstack
|
closed
|
Form to lead user to right content CG (process page?)
|
Content Type: Process Page Size: L Team: Content
|
Deliverable in Community Garden project scope doc
https://docs.google.com/document/d/1NVNrS0FIfef1G5dxM8-u7Bvs7F_j3S5gQJjkcavpmWI/edit#
-development process tbd
- [x] Copy drafted
|
1.0
|
Form to lead user to right content CG (process page?) - Deliverable in Community Garden project scope doc
https://docs.google.com/document/d/1NVNrS0FIfef1G5dxM8-u7Bvs7F_j3S5gQJjkcavpmWI/edit#
-development process tbd
- [x] Copy drafted
|
process
|
form to lead user to right content cg process page deliverable in community garden project scope doc development process tbd copy drafted
| 1
|
18,701
| 24,596,180,673
|
IssuesEvent
|
2022-10-14 08:32:05
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Error: Error in migration engine. Reason: [migration-engine\connectors\sql-migration-connector\src\sql_renderer\mysql_renderer.rs:522:97] internal error: entered unreachable code
|
bug/1-unconfirmed kind/bug process/candidate topic: error reporting team/schema
|
<!-- If required, please update the title to be clear and descriptive -->
Command: `prisma db push`
Version: `4.4.0`
Binary Version: `f352a33b70356f46311da8b00d83386dd9f145d6`
Report: https://prisma-errors.netlify.app/report/14362
OS: `x64 win32 10.0.22000`
|
1.0
|
Error: Error in migration engine. Reason: [migration-engine\connectors\sql-migration-connector\src\sql_renderer\mysql_renderer.rs:522:97] internal error: entered unreachable code - <!-- If required, please update the title to be clear and descriptive -->
Command: `prisma db push`
Version: `4.4.0`
Binary Version: `f352a33b70356f46311da8b00d83386dd9f145d6`
Report: https://prisma-errors.netlify.app/report/14362
OS: `x64 win32 10.0.22000`
|
process
|
error error in migration engine reason internal error entered unreachable code command prisma db push version binary version report os
| 1
|
392,967
| 11,598,348,805
|
IssuesEvent
|
2020-02-24 22:53:22
|
dmwm/WMCore
|
https://api.github.com/repos/dmwm/WMCore
|
closed
|
Locking data from request manager in the view of dynamo
|
High Priority New Feature Unified Porting WMStats
|
Opening this GH issue for starting the thread that could lead to a more stable system.
Dynamo (@yiiyama) needs to know the data that should not be touch, to the eye of Dataops. unified is taking care of this for the moment and in the way it is implemented right now, I think this would be best as a view in reqmgr2.
All inputs of workflows in status assignment-approved, acquired, assigned, running-open, running-closed, force-complete, completed, should appear in the list.
All outputs of workfows in acquired, assigned, running-open, running-closed, force-complete, completed should appear in the list.
All secondary of workflows in status assignment-approved, acquired, assigned, running-open, running-closed, force-complete, completed less than 2 months ago should appear in the list.
This should cover most use cases.
|
1.0
|
Locking data from request manager in the view of dynamo - Opening this GH issue for starting the thread that could lead to a more stable system.
Dynamo (@yiiyama) needs to know the data that should not be touch, to the eye of Dataops. unified is taking care of this for the moment and in the way it is implemented right now, I think this would be best as a view in reqmgr2.
All inputs of workflows in status assignment-approved, acquired, assigned, running-open, running-closed, force-complete, completed, should appear in the list.
All outputs of workfows in acquired, assigned, running-open, running-closed, force-complete, completed should appear in the list.
All secondary of workflows in status assignment-approved, acquired, assigned, running-open, running-closed, force-complete, completed less than 2 months ago should appear in the list.
This should cover most use cases.
|
non_process
|
locking data from request manager in the view of dynamo opening this gh issue for starting the thread that could lead to a more stable system dynamo yiiyama needs to know the data that should not be touch to the eye of dataops unified is taking care of this for the moment and in the way it is implemented right now i think this would be best as a view in all inputs of workflows in status assignment approved acquired assigned running open running closed force complete completed should appear in the list all outputs of workfows in acquired assigned running open running closed force complete completed should appear in the list all secondary of workflows in status assignment approved acquired assigned running open running closed force complete completed less than months ago should appear in the list this should cover most use cases
| 0
|
173
| 2,736,253,850
|
IssuesEvent
|
2015-04-19 08:06:08
|
tjhancocks/Nova
|
https://api.github.com/repos/tjhancocks/Nova
|
closed
|
x86 Interrupt Request Handlers
|
core architecture cpu enhancement kernel
|
Implement the 16 Interrupt Requests that allow for hardware originating interrupts
|
1.0
|
x86 Interrupt Request Handlers - Implement the 16 Interrupt Requests that allow for hardware originating interrupts
|
non_process
|
interrupt request handlers implement the interrupt requests that allow for hardware originating interrupts
| 0
|
27,130
| 2,690,529,558
|
IssuesEvent
|
2015-03-31 16:33:29
|
IQSS/dataverse
|
https://api.github.com/repos/IQSS/dataverse
|
closed
|
File Download: Download term of access popup now has a third button, validation.
|
Component: File Upload & Handling Priority: Critical Status: QA Type: Bug
|
The third button always allows downloading, regardless of accepting terms. This seems like a developer's debug code was checked in by mistake.
|
1.0
|
File Download: Download term of access popup now has a third button, validation. -
The third button always allows downloading, regardless of accepting terms. This seems like a developer's debug code was checked in by mistake.
|
non_process
|
file download download term of access popup now has a third button validation the third button always allows downloading regardless of accepting terms this seems like a developer s debug code was checked in by mistake
| 0
|
3,030
| 6,034,398,525
|
IssuesEvent
|
2017-06-09 10:59:34
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
investigate flaky test-benchmark-child-process on Windows
|
benchmark child_process test windows
|
* **Version**: 8.0.0-pre
* **Platform**: win2008r2
* **Subsystem**: test
<!-- Enter your issue details below this comment. -->
https://ci.nodejs.org/job/node-test-binary-windows/8189/RUN_SUBSET=3,VS_VERSION=vs2015-x86,label=win2008r2/console
```console
not ok 356 sequential/test-benchmark-child-process
---
duration_ms: 60.211
severity: fail
stack: |-
timeout
```
<ins>refack adding</ins>
Ref: https://github.com/nodejs/node/issues/12560
|
1.0
|
investigate flaky test-benchmark-child-process on Windows - * **Version**: 8.0.0-pre
* **Platform**: win2008r2
* **Subsystem**: test
<!-- Enter your issue details below this comment. -->
https://ci.nodejs.org/job/node-test-binary-windows/8189/RUN_SUBSET=3,VS_VERSION=vs2015-x86,label=win2008r2/console
```console
not ok 356 sequential/test-benchmark-child-process
---
duration_ms: 60.211
severity: fail
stack: |-
timeout
```
<ins>refack adding</ins>
Ref: https://github.com/nodejs/node/issues/12560
|
process
|
investigate flaky test benchmark child process on windows version pre platform subsystem test console not ok sequential test benchmark child process duration ms severity fail stack timeout refack adding ref
| 1
|
131,986
| 10,726,990,340
|
IssuesEvent
|
2019-10-28 10:36:55
|
neuromation/cookiecutter-neuro-project
|
https://api.github.com/repos/neuromation/cookiecutter-neuro-project
|
opened
|
Add functionality to skip project generation
|
tests
|
add pytest parameter `--project-path=...` to skip long project generation and necessity to do `make setup`.
Need to facilitate debugging other tests than `test_make_setup`.
|
1.0
|
Add functionality to skip project generation - add pytest parameter `--project-path=...` to skip long project generation and necessity to do `make setup`.
Need to facilitate debugging other tests than `test_make_setup`.
|
non_process
|
add functionality to skip project generation add pytest parameter project path to skip long project generation and necessity to do make setup need to facilitate debugging other tests than test make setup
| 0
|
327,879
| 9,982,750,873
|
IssuesEvent
|
2019-07-10 10:37:07
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.dealnews.com - desktop site instead of mobile site
|
browser-fenix engine-gecko priority-normal
|
<!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.dealnews.com/
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Desktop site instead of mobile site
**Description**: looks like the desktop site. unsure but scrolling is terrible
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.dealnews.com - desktop site instead of mobile site - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.dealnews.com/
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Desktop site instead of mobile site
**Description**: looks like the desktop site. unsure but scrolling is terrible
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
desktop site instead of mobile site url browser version firefox mobile operating system android tested another browser no problem type desktop site instead of mobile site description looks like the desktop site unsure but scrolling is terrible steps to reproduce browser configuration none from with ❤️
| 0
|
335,515
| 10,154,585,849
|
IssuesEvent
|
2019-08-06 08:23:52
|
Code-Poets/sheetstorm
|
https://api.github.com/repos/Code-Poets/sheetstorm
|
closed
|
Separate the days with bold horizontal lines in xlsx
|
feature priority medium
|
Should be done:
------------
- separate the days with bold lines in xlsx
-
-
-
|
1.0
|
Separate the days with bold horizontal lines in xlsx - Should be done:
------------
- separate the days with bold lines in xlsx
-
-
-
|
non_process
|
separate the days with bold horizontal lines in xlsx should be done separate the days with bold lines in xlsx
| 0
|
13,030
| 15,382,163,747
|
IssuesEvent
|
2021-03-03 00:04:01
|
retaildevcrews/ngsa
|
https://api.github.com/repos/retaildevcrews/ngsa
|
closed
|
NGSA - Survey - M2 - Sprint 2
|
EngPrac Process
|
### How well was the backlog maintained
- [ ] We did not use a backlog.
- [ ] We created a backlog, but did not maintain it.
- [ ] Our backlog was loosely defined for the project.
- [ ] Our backlog was organized into well-defined work items.
- [x] Our backlog was organized into well-defined work items and was actively maintained.
- [ ] Our backlog was thorough, maintained, and every pull request was associated with a work item.
### How effective was sprint planning
- [ ] We did not do any planning.
- [ ] We planned some of the work.
- [x] We planned but did not estimate the work.
- [ ] We underestimated and didn’t close out the sprint.
- [ ] All work was planned and well estimated.
- [ ] All work was planned, well estimated, and had well-defined acceptance criteria.
### How useful were stand ups
- [ ] We didn't have stand ups.
- [ ] We didn’t meet with any regular cadence.
- [ ] Participation was not consistent.
- [ ] They were too long, with too much detail.
- [ ] People shared updates, but I usually didn’t get unblocked.
- [x] Very efficient. People shared openly and received the help they needed.
### How informative was the retrospective
- [ ] We didn’t have a retrospective.
- [ ] We had a retrospective because they are part of our process, but it wasn't useful.
- [ ] Retrospectives helped us understand and improve some aspects of the project and team interactions.
- [x] Retrospectives were key to our team’s success. We surfaced areas of improvement and acted on them.
### How thorough were design reviews
- [ ] We didn’t do any design reviews.
- [x] We did a high-level system/architecture review.
- [ ] We produced and reviewed architecture and component/sequence/data flow diagrams.
- [ ] We produced and reviewed all design artifacts and solicited feedback from domain experts.
- [x] We produced and reviewed all design artifacts and solicited feedback from domain experts. As the project progressed, we actively validated and updated our designs, based on our learnings.
### How effective were code reviews
- [ ] We didn’t review code changes
- [ ] We used automated tooling to enforce basic convention/standards.
- [x] We used automated tooling to enforce basic convention/standards. Code changes required approval from one individual on the team.
- [ ] We used automated tooling to enforce basic convention/standards. Code changes required approval from two or more individuals on the team.
- [ ] We used automated tooling to enforce basic convention/standards. Code changes required approval from two or more individuals on the team. Domain experts were added to reviews, when applicable.
### How were changes introduced to the codebase
- [ ] No governance; anyone could introduce changes to any part/branch of the codebase.
- [ ] Branches were used to isolate new changes and folded into an upstream branch via Pull Request.
- [ ] Branches were used to isolate new changes and folded into an upstream branch via Pull Request. Pull Requests were scoped to smaller, more granular changes.
- [ ] Branches were used to isolate new changes. Pull Requests were used to fold changes into a primary working branch. Multiple upstream branches were used to manage changes. Main is always shippable. Branch policies and/or commit hooks were in place.
- [x] Branches were used to isolate new changes. Pull Requests were used to fold changes into a primary working branch. Branch names and commit message(s) follow a convention and always reference back to a work item. Multiple upstream branches were used to manage/validate/promote changes. Main represents `last known good` and is always shippable. Branch policies and/or commit hooks were in place.
### How rigorous was the code validation
- [ ] We did not do any testing.
- [ ] Our work was primarily validated through manual testing.
- [ ] We consciously did not allocate time for automated testing.
- [ ] Automated tests existed in the project, but were challenging to run.
- [x] New tests or test modifications accompanied every significant code change.
- [ ] Our project contained automated tests, every check-in must have a test, and they ran as part of CI.
### How smooth was continuous integration
- [ ] We didn’t have any continuous integration configured.
- [ ] Builds were always done on a central build server.
- [ ] Builds are always done on a central build server. Automated tests prevented check-ins that would result in a broken build, for some of the code bases.
- [ ] Builds are always done on a central build server. Automated tests prevented check-ins that would result in a broken build, for all the code bases.
- [x] Builds are always done on a central build server. Automated tests prevented check-ins that would result in a broken build, for all the code bases. Built artifacts were always shared from a central artifact/package server.
### How reliable was continuous delivery
- [ ] We didn’t have any continuous delivery configured.
- [ ] We had scripts for some deployments.
- [ ] We had scripts for both creating and deploying some services to an environment.
- [ ] We had scripts for both creating and deploying all services to an environment.
- [x] There were multiple environments and deployments into them were automated and well understood.
### How was observability achieved
- [ ] We didn’t add any logging, metrics, tracing, or monitoring.
- [ ] We added some logging, metrics, tracing, and/or monitoring but it was not done consistently across all system components.
- [ ] We added logging, metrics, tracing, and/or monitoring across most components. However, the implementation was not complete; ex) we did not use correlation ids or business context was missing or alerts were not defined for monitored components, etc.
- [ ] We added extensive logging, metrics, tracing, and monitoring alerts to facilitate debugging, viewing of historical trends, understanding control flow, and the current state of the system.
- [x] We designed and implemented instrumentation to help run the solution with the goal of adding value to the customer.
### How was security evaluated in this engagement
- [ ] We did not evaluate security as a part of this engagement.
- [ ] Security was evaluated only at the end of the engagement; little to no time was available to remediate issues.
- [ ] Security was evaluated only at the end of the engagement; there was time remaining prior to hand-off to fix issues (if needed).
- [x] Secure design was considered during the design and implementation phases but with no ongoing support.
- [ ] Secure design was considered during the design and implementation phases, and ongoing automated testing was introduced to the DevSecOps process prior to hand off.
### How was impactful Product Group engineering feedback provided
- [ ] Microsoft products/services worked flawlessly without any issues, therefore, there was no engineering feedback to share.
- [ ] We encountered some friction with Microsoft products/services but didn’t submit any engineering feedback for the Product Group.
- [ ] We shared our feedback directly with the Product Group but only in an ad-hoc manner (i.e. via email, teams, etc).
- [ ] Mostly at the end of the engagement, we submitted some engineering feedback via CSE Feedback tool.
- [x] On an ongoing basis, we submitted all of the relevant high-quality feedback via CSE Feedback tool, including priority, scenario-based description, repro steps with screenshots, and attached relevant email threads with the Product Group.
|
1.0
|
NGSA - Survey - M2 - Sprint 2 - ### How well was the backlog maintained
- [ ] We did not use a backlog.
- [ ] We created a backlog, but did not maintain it.
- [ ] Our backlog was loosely defined for the project.
- [ ] Our backlog was organized into well-defined work items.
- [x] Our backlog was organized into well-defined work items and was actively maintained.
- [ ] Our backlog was thorough, maintained, and every pull request was associated with a work item.
### How effective was sprint planning
- [ ] We did not do any planning.
- [ ] We planned some of the work.
- [x] We planned but did not estimate the work.
- [ ] We underestimated and didn’t close out the sprint.
- [ ] All work was planned and well estimated.
- [ ] All work was planned, well estimated, and had well-defined acceptance criteria.
### How useful were stand ups
- [ ] We didn't have stand ups.
- [ ] We didn’t meet with any regular cadence.
- [ ] Participation was not consistent.
- [ ] They were too long, with too much detail.
- [ ] People shared updates, but I usually didn’t get unblocked.
- [x] Very efficient. People shared openly and received the help they needed.
### How informative was the retrospective
- [ ] We didn't have a retrospective.
- [ ] We had a retrospective because they are part of our process, but it wasn't useful.
- [ ] Retrospectives helped us understand and improve some aspects of the project and team interactions.
- [x] Retrospectives were key to our team's success. We surfaced areas of improvement and acted on them.
### How thorough were design reviews
- [ ] We didn't do any design reviews.
- [x] We did a high-level system/architecture review.
- [ ] We produced and reviewed architecture and component/sequence/data flow diagrams.
- [ ] We produced and reviewed all design artifacts and solicited feedback from domain experts.
- [x] We produced and reviewed all design artifacts and solicited feedback from domain experts. As the project progressed, we actively validated and updated our designs, based on our learnings.
### How effective were code reviews
- [ ] We didn't review code changes.
- [ ] We used automated tooling to enforce basic convention/standards.
- [x] We used automated tooling to enforce basic convention/standards. Code changes required approval from one individual on the team.
- [ ] We used automated tooling to enforce basic convention/standards. Code changes required approval from two or more individuals on the team.
- [ ] We used automated tooling to enforce basic convention/standards. Code changes required approval from two or more individuals on the team. Domain experts were added to reviews, when applicable.
### How were changes introduced to the codebase
- [ ] No governance; anyone could introduce changes to any part/branch of the codebase.
- [ ] Branches were used to isolate new changes and folded into an upstream branch via Pull Request.
- [ ] Branches were used to isolate new changes and folded into an upstream branch via Pull Request. Pull Requests were scoped to smaller, more granular changes.
- [ ] Branches were used to isolate new changes. Pull Requests were used to fold changes into a primary working branch. Multiple upstream branches were used to manage changes. Main is always shippable. Branch policies and/or commit hooks were in place.
- [x] Branches were used to isolate new changes. Pull Requests were used to fold changes into a primary working branch. Branch names and commit message(s) follow a convention and always reference back to a work item. Multiple upstream branches were used to manage/validate/promote changes. Main represents `last known good` and is always shippable. Branch policies and/or commit hooks were in place.
### How rigorous was the code validation
- [ ] We did not do any testing.
- [ ] Our work was primarily validated through manual testing.
- [ ] We consciously did not allocate time for automated testing.
- [ ] Automated tests existed in the project, but were challenging to run.
- [x] New tests or test modifications accompanied every significant code change.
- [ ] Our project contained automated tests, every check-in must have a test, and they ran as part of CI.
### How smooth was continuous integration
- [ ] We didn't have any continuous integration configured.
- [ ] Builds were always done on a central build server.
- [ ] Builds are always done on a central build server. Automated tests prevented check-ins that would result in a broken build, for some of the code bases.
- [ ] Builds are always done on a central build server. Automated tests prevented check-ins that would result in a broken build, for all the code bases.
- [x] Builds are always done on a central build server. Automated tests prevented check-ins that would result in a broken build, for all the code bases. Built artifacts were always shared from a central artifact/package server.
### How reliable was continuous delivery
- [ ] We didn't have any continuous delivery configured.
- [ ] We had scripts for some deployments.
- [ ] We had scripts for both creating and deploying some services to an environment.
- [ ] We had scripts for both creating and deploying all services to an environment.
- [x] There were multiple environments and deployments into them were automated and well understood.
### How was observability achieved
- [ ] We didn't add any logging, metrics, tracing, or monitoring.
- [ ] We added some logging, metrics, tracing, and/or monitoring but it was not done consistently across all system components.
- [ ] We added logging, metrics, tracing, and/or monitoring across most components. However, the implementation was not complete; ex) we did not use correlation ids or business context was missing or alerts were not defined for monitored components, etc.
- [ ] We added extensive logging, metrics, tracing, and monitoring alerts to facilitate debugging, viewing of historical trends, understanding control flow, and the current state of the system.
- [x] We designed and implemented instrumentation to help run the solution with the goal of adding value to the customer.
### How was security evaluated in this engagement
- [ ] We did not evaluate security as a part of this engagement.
- [ ] Security was evaluated only at the end of the engagement; little to no time was available to remediate issues.
- [ ] Security was evaluated only at the end of the engagement; there was time remaining prior to hand-off to fix issues (if needed).
- [x] Secure design was considered during the design and implementation phases but with no ongoing support.
- [ ] Secure design was considered during the design and implementation phases, and ongoing automated testing was introduced to the DevSecOps process prior to hand off.
### How was impactful Product Group engineering feedback provided
- [ ] Microsoft products/services worked flawlessly without any issues, therefore, there was no engineering feedback to share.
- [ ] We encountered some friction with Microsoft products/services but didn't submit any engineering feedback for the Product Group.
- [ ] We shared our feedback directly with the Product Group but only in an ad-hoc manner (i.e. via email, teams, etc).
- [ ] Mostly at the end of the engagement, we submitted some engineering feedback via CSE Feedback tool.
- [x] On an ongoing basis, we submitted all of the relevant high-quality feedback via CSE Feedback tool, including priority, scenario-based description, repro steps with screenshots, and attached relevant email threads with the Product Group.
|
process
|
ngsa survey sprint how well was the backlog maintained we did not use a backlog we created a backlog but did not maintain it our backlog was loosely defined for the project our backlog was organized into well defined work items our backlog was organized into well defined work items and was actively maintained our backlog was thorough maintained and every pull request was associated with a work item how effective was sprint planning we did not do any planning we planned some of the work we planned but did not estimate the work we underestimated and didn t close out the sprint all work was planned and well estimated all work was planned well estimated and had well defined acceptance criteria how useful were stand ups we didn t have stand ups we didn t meet with any regular cadence participation was not consistent they were too long with too much detail people shared updates but i usually didn t get unblocked very efficient people shared openly and received the help they needed how informative was the retrospective we didn t have a retrospective we had a retrospective because they are part of our process but it wasn t useful retrospectives helped us understand and improve some aspects of the project and team interactions retrospectives were key to our team s success we surfaced areas of improvement and acted on them how thorough were design reviews we didn t do any design reviews we did a high level system architecture review we produced and reviewed architecture and component sequence data flow diagrams we produced and reviewed all design artifacts and solicited feedback from domain experts we produced and reviewed all design artifacts and solicited feedback from domain experts as the project progressed we actively validated and updated our designs based on our learnings how effective were code reviews we didn t review code changes we used automated tooling to enforce basic convention standards we used automated tooling to enforce basic convention standards code 
changes required approval from one individual on the team we used automated tooling to enforce basic convention standards code changes required approval from two or more individuals on the team we used automated tooling to enforce basic convention standards code changes required approval from two or more individuals on the team domain experts were added to reviews when applicable how were changes introduced to the codebase no governance anyone could introduce changes to any part branch of the codebase branches were used to isolate new changes and folded into an upstream branch via pull request branches were used to isolate new changes and folded into an upstream branch via pull request pull requests were scoped to smaller more granular changes branches were used to isolate new changes pull requests were used to fold changes into a primary working branch multiple upstream branches were used to manage changes main is always shippable branch policies and or commit hooks were in place branches were used to isolate new changes pull requests were used to fold changes into a primary working branch branch names and commit message s follow a convention and always reference back to a work item multiple upstream branches were used to manage validate promote changes main represents last known good and is always shippable branch policies and or commit hooks were in place how rigorous was the code validation we did not do any testing our work was primarily validated through manual testing we consciously did not allocate time for automated testing automated tests existed in the project but were challenging to run new tests or test modifications accompanied every significant code change our project contained automated tests every check in must have a test and they ran as part of ci how smooth was continuous integration we didn t have any continuous integration configured builds were always done on a central build server builds are always done on a central build server automated 
tests prevented check ins that would result in a broken build for some of the code bases builds are always done on a central build server automated tests prevented check ins that would result in a broken build for all the code bases builds are always done on a central build server automated tests prevented check ins that would result in a broken build for all the code bases built artifacts were always shared from a central artifact package server how reliable was continuous delivery we didn t have any continuous delivery configured we had scripts for some deployments we had scripts for both creating and deploying some services to an environment we had scripts for both creating and deploying all services to an environment there were multiple environments and deployments into them were automated and well understood how was observability achieved we didn t add any logging metrics tracing or monitoring we added some logging metrics tracing and or monitoring but it was not done consistently across all system components we added logging metrics tracing and or monitoring across most components however the implementation was not complete ex we did not use correlation ids or business context was missing or alerts were not defined for monitored components etc we added extensive logging metrics tracing and monitoring alerts to facilitate debugging viewing of historical trends understanding control flow and the current state of the system we designed and implemented instrumentation to help run the solution with the goal of adding value to the customer how was security evaluated in this engagement we did not evaluate security as a part of this engagement security was evaluated only at the end of the engagement little to no time was available to remediate issues security was evaluated only at the end of the engagement there was time remaining prior to hand off to fix issues if needed secure design was considered during the design and implementation phases but with no ongoing 
support secure design was considered during the design and implementation phases and ongoing automated testing was introduced to the devsecops process prior to hand off how was impactful product group engineering feedback provided microsoft products services worked flawlessly without any issues therefore there was no engineering feedback to share we encountered some friction with microsoft products services but didn t submit any engineering feedback for the product group we shared our feedback directly with the product group but only in an ad hoc manner i e via email teams etc mostly at the end of the engagement we submitted some engineering feedback via cse feedback tool on an ongoing basis we submitted all of the relevant high quality feedback via cse feedback tool including priority scenario based description repro steps with screenshots and attached relevant email threads with the product group
| 1
|
650
| 3,114,988,491
|
IssuesEvent
|
2015-09-03 12:18:10
|
alex/django-filter
|
https://api.github.com/repos/alex/django-filter
|
opened
|
Handle `RemovedInDjango110Warning` Notices
|
Testing/Process
|
Running the test suite for `py34-django-latest` we get a number of `RemovedInDjango110Warning` notices...
```
py34-django-latest runtests: commands[0] | ./runtests.py
Creating test database for alias 'default'...
......s...../Users/carlton/Documents/Django-Stack/django-filter/django_filters/filterset.py:104: RemovedInDjango110Warning: 'get_field_by_name is an unofficial API that has been deprecated. You may be able to replace it with 'get_field()'
rel, model, direct, m2m = opts.get_field_by_name(parts[-1])
.......ssss../Users/carlton/Documents/Django-Stack/django-filter/django_filters/filterset.py:90: RemovedInDjango110Warning: 'get_field_by_name is an unofficial API that has been deprecated. You may be able to replace it with 'get_field()'
rel = opts.get_field_by_name(name)[0]
.s..x.ss..x..x.....................................u..................xx..................................u.......x........................x................s.../Users/carlton/Documents/Django-Stack/django-filter/tests/test_filterset.py:195: RemovedInDjango110Warning: 'get_field_by_name is an unofficial API that has been deprecated. You may be able to replace it with 'get_field()'
f = User._meta.get_field_by_name('comments')[0]
./Users/carlton/Documents/Django-Stack/django-filter/tests/test_filterset.py:222: RemovedInDjango110Warning: 'get_field_by_name is an unofficial API that has been deprecated. You may be able to replace it with 'get_field()'
f = Worker._meta.get_field_by_name('employers')[0]
./Users/carlton/Documents/Django-Stack/django-filter/tests/test_filterset.py:204: RemovedInDjango110Warning: 'get_field_by_name is an unofficial API that has been deprecated. You may be able to replace it with 'get_field()'
f = Book._meta.get_field_by_name('lovers')[0]
./Users/carlton/Documents/Django-Stack/django-filter/tests/test_filterset.py:213: RemovedInDjango110Warning: 'get_field_by_name is an unofficial API that has been deprecated. You may be able to replace it with 'get_field()'
f = DirectedNode._meta.get_field_by_name('inbound_nodes')[0]
./Users/carlton/Documents/Django-Stack/django-filter/tests/test_filterset.py:186: RemovedInDjango110Warning: 'get_field_by_name is an unofficial API that has been deprecated. You may be able to replace it with 'get_field()'
f = Account._meta.get_field_by_name('profile')[0]
.................sss......................./Users/carlton/Documents/Django-Stack/django-filter/.tox/py34-django-latest/lib/python3.4/site-packages/django/test/testcases.py:229: RemovedInDjango110Warning: SimpleTestCase.urls is deprecated and will be removed in Django 1.10. Use @override_settings(ROOT_URLCONF=...) in GenericClassBasedViewTests instead.
self._urlconf_setup()
/Users/carlton/Documents/Django-Stack/django-filter/tests/urls.py:13: RemovedInDjango110Warning: django.conf.urls.patterns() is deprecated and will be removed in Django 1.10. Update your urlpatterns to be a list of django.conf.urls.url() instances instead.
(r'^books/$', FilterView.as_view(model=Book)),
/Users/carlton/Documents/Django-Stack/django-filter/.tox/py34-django-latest/lib/python3.4/site-packages/django/conf/urls/__init__.py:89: RemovedInDjango110Warning: Support for string view arguments to url() is deprecated and will be removed in Django 1.10 (got django_filters.views.object_filter). Pass the callable instead.
t = url(prefix=prefix, *t)
/Users/carlton/Documents/Django-Stack/django-filter/.tox/py34-django-latest/lib/python3.4/site-packages/django/template/utils.py:37: RemovedInDjango110Warning: You haven't defined a TEMPLATES setting. You must do so before upgrading to Django 1.10. Otherwise Django will be unable to load templates.
"unable to load templates.", RemovedInDjango110Warning)
...../Users/carlton/Documents/Django-Stack/django-filter/.tox/py34-django-latest/lib/python3.4/site-packages/django/test/testcases.py:229: RemovedInDjango110Warning: SimpleTestCase.urls is deprecated and will be removed in Django 1.10. Use @override_settings(ROOT_URLCONF=...) in GenericFunctionalViewTests instead.
self._urlconf_setup()
...........
----------------------------------------------------------------------
Ran 248 tests in 0.556s
FAILED (skipped=12, expected failures=7, unexpected successes=2)
```
|
1.0
|
Handle `RemovedInDjango110Warning` Notices - Running the test suite for `py34-django-latest` we get a number of `RemovedInDjango110Warning` notices...
```
py34-django-latest runtests: commands[0] | ./runtests.py
Creating test database for alias 'default'...
......s...../Users/carlton/Documents/Django-Stack/django-filter/django_filters/filterset.py:104: RemovedInDjango110Warning: 'get_field_by_name is an unofficial API that has been deprecated. You may be able to replace it with 'get_field()'
rel, model, direct, m2m = opts.get_field_by_name(parts[-1])
.......ssss../Users/carlton/Documents/Django-Stack/django-filter/django_filters/filterset.py:90: RemovedInDjango110Warning: 'get_field_by_name is an unofficial API that has been deprecated. You may be able to replace it with 'get_field()'
rel = opts.get_field_by_name(name)[0]
.s..x.ss..x..x.....................................u..................xx..................................u.......x........................x................s.../Users/carlton/Documents/Django-Stack/django-filter/tests/test_filterset.py:195: RemovedInDjango110Warning: 'get_field_by_name is an unofficial API that has been deprecated. You may be able to replace it with 'get_field()'
f = User._meta.get_field_by_name('comments')[0]
./Users/carlton/Documents/Django-Stack/django-filter/tests/test_filterset.py:222: RemovedInDjango110Warning: 'get_field_by_name is an unofficial API that has been deprecated. You may be able to replace it with 'get_field()'
f = Worker._meta.get_field_by_name('employers')[0]
./Users/carlton/Documents/Django-Stack/django-filter/tests/test_filterset.py:204: RemovedInDjango110Warning: 'get_field_by_name is an unofficial API that has been deprecated. You may be able to replace it with 'get_field()'
f = Book._meta.get_field_by_name('lovers')[0]
./Users/carlton/Documents/Django-Stack/django-filter/tests/test_filterset.py:213: RemovedInDjango110Warning: 'get_field_by_name is an unofficial API that has been deprecated. You may be able to replace it with 'get_field()'
f = DirectedNode._meta.get_field_by_name('inbound_nodes')[0]
./Users/carlton/Documents/Django-Stack/django-filter/tests/test_filterset.py:186: RemovedInDjango110Warning: 'get_field_by_name is an unofficial API that has been deprecated. You may be able to replace it with 'get_field()'
f = Account._meta.get_field_by_name('profile')[0]
.................sss......................./Users/carlton/Documents/Django-Stack/django-filter/.tox/py34-django-latest/lib/python3.4/site-packages/django/test/testcases.py:229: RemovedInDjango110Warning: SimpleTestCase.urls is deprecated and will be removed in Django 1.10. Use @override_settings(ROOT_URLCONF=...) in GenericClassBasedViewTests instead.
self._urlconf_setup()
/Users/carlton/Documents/Django-Stack/django-filter/tests/urls.py:13: RemovedInDjango110Warning: django.conf.urls.patterns() is deprecated and will be removed in Django 1.10. Update your urlpatterns to be a list of django.conf.urls.url() instances instead.
(r'^books/$', FilterView.as_view(model=Book)),
/Users/carlton/Documents/Django-Stack/django-filter/.tox/py34-django-latest/lib/python3.4/site-packages/django/conf/urls/__init__.py:89: RemovedInDjango110Warning: Support for string view arguments to url() is deprecated and will be removed in Django 1.10 (got django_filters.views.object_filter). Pass the callable instead.
t = url(prefix=prefix, *t)
/Users/carlton/Documents/Django-Stack/django-filter/.tox/py34-django-latest/lib/python3.4/site-packages/django/template/utils.py:37: RemovedInDjango110Warning: You haven't defined a TEMPLATES setting. You must do so before upgrading to Django 1.10. Otherwise Django will be unable to load templates.
"unable to load templates.", RemovedInDjango110Warning)
...../Users/carlton/Documents/Django-Stack/django-filter/.tox/py34-django-latest/lib/python3.4/site-packages/django/test/testcases.py:229: RemovedInDjango110Warning: SimpleTestCase.urls is deprecated and will be removed in Django 1.10. Use @override_settings(ROOT_URLCONF=...) in GenericFunctionalViewTests instead.
self._urlconf_setup()
...........
----------------------------------------------------------------------
Ran 248 tests in 0.556s
FAILED (skipped=12, expected failures=7, unexpected successes=2)
```
|
process
|
handle notices running the test suite for django latest we get a number of notices django latest runtests commands runtests py creating test database for alias default s users carlton documents django stack django filter django filters filterset py get field by name is an unofficial api that has been deprecated you may be able to replace it with get field rel model direct opts get field by name parts ssss users carlton documents django stack django filter django filters filterset py get field by name is an unofficial api that has been deprecated you may be able to replace it with get field rel opts get field by name name s x ss x x u xx u x x s users carlton documents django stack django filter tests test filterset py get field by name is an unofficial api that has been deprecated you may be able to replace it with get field f user meta get field by name comments users carlton documents django stack django filter tests test filterset py get field by name is an unofficial api that has been deprecated you may be able to replace it with get field f worker meta get field by name employers users carlton documents django stack django filter tests test filterset py get field by name is an unofficial api that has been deprecated you may be able to replace it with get field f book meta get field by name lovers users carlton documents django stack django filter tests test filterset py get field by name is an unofficial api that has been deprecated you may be able to replace it with get field f directednode meta get field by name inbound nodes users carlton documents django stack django filter tests test filterset py get field by name is an unofficial api that has been deprecated you may be able to replace it with get field f account meta get field by name profile sss users carlton documents django stack django filter tox django latest lib site packages django test testcases py simpletestcase urls is deprecated and will be removed in django use override settings root urlconf 
in genericclassbasedviewtests instead self urlconf setup users carlton documents django stack django filter tests urls py django conf urls patterns is deprecated and will be removed in django update your urlpatterns to be a list of django conf urls url instances instead r books filterview as view model book users carlton documents django stack django filter tox django latest lib site packages django conf urls init py support for string view arguments to url is deprecated and will be removed in django got django filters views object filter pass the callable instead t url prefix prefix t users carlton documents django stack django filter tox django latest lib site packages django template utils py you haven t defined a templates setting you must do so before upgrading to django otherwise django will be unable to load templates unable to load templates users carlton documents django stack django filter tox django latest lib site packages django test testcases py simpletestcase urls is deprecated and will be removed in django use override settings root urlconf in genericfunctionalviewtests instead self urlconf setup ran tests in failed skipped expected failures unexpected successes
| 1
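The deprecation warnings in the record above all follow one pattern: Django 1.8's `Options.get_field()` replaces the unofficial `get_field_by_name()`, whose 4-tuple return value callers had to index into. A minimal sketch of that migration pattern; `FakeField`/`FakeOptions` are hypothetical stand-ins for Django's `_meta` Options API so the example runs without Django installed:

```python
class FakeField:
    """Stand-in for a Django model field."""
    def __init__(self, name):
        self.name = name

class FakeOptions:
    """Mimics the small subset of Django's Options API used here."""
    def __init__(self, fields):
        self._fields = {f.name: f for f in fields}

    def get_field(self, name):
        # Modern API (Django >= 1.8): returns the field directly.
        return self._fields[name]

    def get_field_by_name(self, name):
        # Deprecated API: returns (field, model, direct, m2m).
        return (self._fields[name], None, True, False)

def resolve_field(opts, name):
    """Prefer get_field(); fall back to the deprecated call on old Django."""
    getter = getattr(opts, "get_field", None)
    if getter is not None:
        return getter(name)
    return opts.get_field_by_name(name)[0]

opts = FakeOptions([FakeField("comments"), FakeField("employers")])
print(resolve_field(opts, "comments").name)  # prints: comments
```

Each warning site in the log (e.g. `f = User._meta.get_field_by_name('comments')[0]`) becomes a plain `get_field('comments')` call under this pattern, dropping the `[0]` indexing.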
|
14,486
| 17,602,253,021
|
IssuesEvent
|
2021-08-17 13:15:02
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] Admins > Add new admin screen > UI issue
|
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
Admins > Add new admin screen > Buttons are not aligned properly

|
3.0
|
[PM] Admins > Add new admin screen > UI issue - Admins > Add new admin screen > Buttons are not aligned properly

|
process
|
admins add new admin screen ui issue admins add new admin screen buttons are not aligned properly
| 1
|
3,066
| 6,051,274,672
|
IssuesEvent
|
2017-06-12 23:24:04
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
test: investigate flaky test-child-process-stdio-big-write-end
|
arm child_process test
|
<!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: `master`
* **Platform**: arm
* **Subsystem**: test
<!-- Enter your issue details below this comment. -->
```
1528 parallel/test-child-process-stdio-big-write-end
duration_ms 120.161
severity fail
stack timeout
```
https://ci.nodejs.org/job/node-test-commit-arm/10256/nodes=ubuntu1604-arm64/
|
1.0
|
test: investigate flaky test-child-process-stdio-big-write-end - <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: `master`
* **Platform**: arm
* **Subsystem**: test
<!-- Enter your issue details below this comment. -->
```
1528 parallel/test-child-process-stdio-big-write-end
duration_ms 120.161
severity fail
stack timeout
```
https://ci.nodejs.org/job/node-test-commit-arm/10256/nodes=ubuntu1604-arm64/
|
process
|
test investigate flaky test child process stdio big write end thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version master platform arm subsystem test parallel test child process stdio big write end duration ms severity fail stack timeout
| 1
|
751,852
| 26,261,435,809
|
IssuesEvent
|
2023-01-06 08:16:19
|
WavesHQ/bridge
|
https://api.github.com/repos/WavesHQ/bridge
|
closed
|
`Contract` - shouldn't be able to send ether along with ERC20 token
|
needs/area needs/triage kind/bug needs/priority
|
<!--
Please use this template while reporting a bug and provide as much info as possible.
If the matter is security related, please disclose it privately via security@defichain.com
-->
#### What happened:
If users send ETH along with ERC20 token, those ETH will be unaccounted for. Admin will have to manually return ETH to the user.
#### What you expected to happen:
Txn including ERC20 and ETH should revert.
#### How to reproduce it (as minimally and precisely as possible):
#### Anything else we need to know?:
|
1.0
|
`Contract` - shouldn't be able to send ether along with ERC20 token - <!--
Please use this template while reporting a bug and provide as much info as possible.
If the matter is security related, please disclose it privately via security@defichain.com
-->
#### What happened:
If users send ETH along with ERC20 token, those ETH will be unaccounted for. Admin will have to manually return ETH to the user.
#### What you expected to happen:
Txn including ERC20 and ETH should revert.
#### How to reproduce it (as minimally and precisely as possible):
#### Anything else we need to know?:
|
non_process
|
contract shouldn t be able to send ether along with token please use this template while reporting a bug and provide as much info as possible if the matter is security related please disclose it privately via security defichain com what happened if users send eth along with token those eth will be unaccounted for admin will have to manually return eth to the user what you expected to happen txn including and eth should revert how to reproduce it as minimally and precisely as possible anything else we need to know
| 0
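The fix the bridge issue above asks for is a guard at the start of the deposit path: an ERC20 deposit that also carries ether must revert rather than strand the ether. A hedged Python model of that check; `bridge_deposit`, `Revert`, and the placeholder address are hypothetical illustrations, not the project's actual Solidity contract:

```python
# Zero address conventionally marks a native-ETH deposit in this sketch.
ETH_PLACEHOLDER = "0x0000000000000000000000000000000000000000"

class Revert(Exception):
    """Stands in for an EVM revert."""

def bridge_deposit(token_address, amount, msg_value):
    # ERC20 path: any attached ether would be unaccounted for -> revert,
    # instead of forcing an admin to refund it manually.
    if token_address != ETH_PLACEHOLDER and msg_value > 0:
        raise Revert("ERC20 deposit must not include ether")
    # ETH path: the deposited amount is the attached value.
    if token_address == ETH_PLACEHOLDER and msg_value != amount:
        raise Revert("ETH deposit amount mismatch")
    return {"token": token_address, "amount": amount}

bridge_deposit("0xTOKEN", 100, 0)    # accepted: pure ERC20 deposit
# bridge_deposit("0xTOKEN", 100, 1)  # would raise Revert
```

In Solidity the same idea is typically a `require(msg.value == 0, ...)` on the ERC20 branch, so the whole transaction reverts and the ether is returned automatically.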
|
14,467
| 17,571,238,762
|
IssuesEvent
|
2021-08-14 18:43:02
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
closed
|
Add glob or recursive option for processor `drop_fields`
|
enhancement libbeat :Processors Team:Integrations Stalled
|
**Describe the enhancement:**
Add glob or recursive option for processor `drop_fields` which would be able to check the entire document for the field to remove.
**Describe a specific use case for the enhancement or feature:**
The processor works only with statically defined list of the fields and also only with fields at root level. If user needs to check the entire document for a field and does not know exact position, the only option is a script.
Thank you
|
1.0
|
Add glob or recursive option for processor `drop_fields` - **Describe the enhancement:**
Add glob or recursive option for processor `drop_fields` which would be able to check the entire document for the field to remove.
**Describe a specific use case for the enhancement or feature:**
The processor works only with statically defined list of the fields and also only with fields at root level. If user needs to check the entire document for a field and does not know exact position, the only option is a script.
Thank you
|
process
|
add glob or recursive option for processor drop fields describe the enhancement add glob or recursive option for processor drop fields which would be able to check the entire document for the field to remove describe a specific use case for the enhancement or feature the processor works only with statically defined list of the fields and also only with fields at root level if user needs to check the entire document for a field and does not know exact position the only option is a script thank you
| 1
|
336,766
| 24,512,367,572
|
IssuesEvent
|
2022-10-10 23:24:07
|
based-kwl/SemesterPlanner-client
|
https://api.github.com/repos/based-kwl/SemesterPlanner-client
|
closed
|
User story backlog
|
documentation
|
Create User Stories Backlog (You should create user stories for all requirements. Before the beginning
of each Sprint you will plan the user stories to be completed and estimate their user story points
using planning poker):
- [x] Backlog
- [x] plan user stories for Sprint 3
|
1.0
|
User story backlog - Create User Stories Backlog (You should create user stories for all requirements. Before the beginning
of each Sprint you will plan the user stories to be completed and estimate their user story points
using planning poker):
- [x] Backlog
- [x] plan user stories for Sprint 3
|
non_process
|
user story backlog create user stories backlog you should create user stories for all requirements before the beginning of each sprint you will plan the user stories to be completed and estimate their user story points using planning poker backlog plan user stories for sprint
| 0
|
16,564
| 21,577,493,411
|
IssuesEvent
|
2022-05-02 15:08:19
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
opened
|
IllegalStateException when writing decision evaluation event
|
kind/bug area/reliability team/process-automation
|
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
When trying to write the decision evaluation event an `IllegalArgumentException` is thrown. This is because when searching for decision by decision requirements key multiple results with the same decision id are returned:
```java
final var decisionKeysByDecisionId =
decisionState
.findDecisionsByDecisionRequirementsKey(decision.getDecisionRequirementsKey())
.stream()
.collect(
Collectors.toMap(
persistedDecision -> bufferAsString(persistedDecision.getDecisionId()),
DecisionInfo::new));
```
These duplicate decision id cause the `toMap` function to fail, as no merge function is provided.
The found decisions do all have a different version.
**To Reproduce**
<!--
Steps to reproduce the behavior
If possible add a minimal reproducer code sample
- when using the Java client: https://github.com/zeebe-io/zeebe-test-template-java
-->
It was a challenge to reproduce this issue but I found a way to do this. It requires 2 DRD's that both contain a decision with the same id and a process which contains a business rule task referencing this decision id.
[Repro files.zip](https://github.com/camunda/zeebe/files/8603769/Repro.files.zip)
Next follow these steps:
1. Deploy `translateDay.dmn`
2. Deploy `translateMonth.dmn`
3. Without making any changes redeploy `translateDay.dmn`
4. Deploy `translateProcess.dmn`
5. Start a PI: `zbctl create instance translateProcess --insecure --variables '{"day":"monday","month":"april"}'`
At this point an exception should be thrown.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
No exception should occur.
**Log/Stacktrace**
<!-- If possible add the full stacktrace or Zeebe log which contains the issue. -->
<details><summary>Full Stacktrace</summary>
<p>
```
java.lang.IllegalStateException: Duplicate key checkRisk_en (attempted merging values DecisionInfo[key=2251799813685254, version=1] and DecisionInfo[key=2251799813685628, version=3])
at java.util.stream.Collectors.duplicateKeyException(Unknown Source) ~[?:?]
at java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Unknown Source) ~[?:?]
at java.util.stream.ReduceOps$3ReducingSink.accept(Unknown Source) ~[?:?]
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source) ~[?:?]
at java.util.stream.AbstractPipeline.copyInto(Unknown Source) ~[?:?]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source) ~[?:?]
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source) ~[?:?]
at java.util.stream.AbstractPipeline.evaluate(Unknown Source) ~[?:?]
at java.util.stream.ReferencePipeline.collect(Unknown Source) ~[?:?]
at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnDecisionBehavior.writeDecisionEvaluationEvent(BpmnDecisionBehavior.java:233) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnDecisionBehavior.lambda$evaluateDecision$3(BpmnDecisionBehavior.java:114) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.Either$Right.flatMap(Either.java:366) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnDecisionBehavior.evaluateDecision(BpmnDecisionBehavior.java:109) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.task.BusinessRuleTaskProcessor$CalledDecisionBehavior.lambda$onActivate$0(BusinessRuleTaskProcessor.java:89) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.Either$Right.flatMap(Either.java:366) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.task.BusinessRuleTaskProcessor$CalledDecisionBehavior.onActivate(BusinessRuleTaskProcessor.java:89) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.task.BusinessRuleTaskProcessor.onActivate(BusinessRuleTaskProcessor.java:40) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.task.BusinessRuleTaskProcessor.onActivate(BusinessRuleTaskProcessor.java:21) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.lambda$processEvent$2(BpmnStreamProcessor.java:128) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.Either$Right.ifRightOrLeft(Either.java:381) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.processEvent(BpmnStreamProcessor.java:127) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.lambda$processRecord$0(BpmnStreamProcessor.java:110) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.Either$Right.ifRightOrLeft(Either.java:381) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.processRecord(BpmnStreamProcessor.java:107) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.TypedRecordProcessor.processRecord(TypedRecordProcessor.java:54) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.lambda$processInTransaction$3(ProcessingStateMachine.java:300) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.db.impl.rocksdb.transaction.ZeebeTransaction.run(ZeebeTransaction.java:84) ~[zeebe-db-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.processInTransaction(ProcessingStateMachine.java:290) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.processCommand(ProcessingStateMachine.java:253) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.tryToReadNextRecord(ProcessingStateMachine.java:213) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.readNextRecord(ProcessingStateMachine.java:189) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorJob.invoke(ActorJob.java:79) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorJob.execute(ActorJob.java:44) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorTask.execute(ActorTask.java:122) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorThread.executeCurrentTask(ActorThread.java:97) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorThread.doWork(ActorThread.java:80) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorThread.run(ActorThread.java:189) ~[zeebe-util-8.0.0.jar:8.0.0]
```
</p>
</details>
**Environment:**
- OS: <!-- [e.g. Linux] -->
- Zeebe Version: <!-- [e.g. 0.20.0] -->
- Configuration: <!-- [e.g. exporters etc.] -->
|
1.0
|
IllegalStateException when writing decision evaluation event - **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
When trying to write the decision evaluation event an `IllegalArgumentException` is thrown. This is because when searching for decision by decision requirements key multiple results with the same decision id are returned:
```java
final var decisionKeysByDecisionId =
decisionState
.findDecisionsByDecisionRequirementsKey(decision.getDecisionRequirementsKey())
.stream()
.collect(
Collectors.toMap(
persistedDecision -> bufferAsString(persistedDecision.getDecisionId()),
DecisionInfo::new));
```
These duplicate decision id cause the `toMap` function to fail, as no merge function is provided.
The found decisions do all have a different version.
**To Reproduce**
<!--
Steps to reproduce the behavior
If possible add a minimal reproducer code sample
- when using the Java client: https://github.com/zeebe-io/zeebe-test-template-java
-->
It was a challenge to reproduce this issue but I found a way to do this. It requires 2 DRD's that both contain a decision with the same id and a process which contains a business rule task referencing this decision id.
[Repro files.zip](https://github.com/camunda/zeebe/files/8603769/Repro.files.zip)
Next follow these steps:
1. Deploy `translateDay.dmn`
2. Deploy `translateMonth.dmn`
3. Without making any changes redeploy `translateDay.dmn`
4. Deploy `translateProcess.dmn`
5. Start a PI: `zbctl create instance translateProcess --insecure --variables '{"day":"monday","month":"april"}'`
At this point an exception should be thrown.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
No exception should occur.
**Log/Stacktrace**
<!-- If possible add the full stacktrace or Zeebe log which contains the issue. -->
<details><summary>Full Stacktrace</summary>
<p>
```
java.lang.IllegalStateException: Duplicate key checkRisk_en (attempted merging values DecisionInfo[key=2251799813685254, version=1] and DecisionInfo[key=2251799813685628, version=3])
at java.util.stream.Collectors.duplicateKeyException(Unknown Source) ~[?:?]
at java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Unknown Source) ~[?:?]
at java.util.stream.ReduceOps$3ReducingSink.accept(Unknown Source) ~[?:?]
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source) ~[?:?]
at java.util.stream.AbstractPipeline.copyInto(Unknown Source) ~[?:?]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source) ~[?:?]
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source) ~[?:?]
at java.util.stream.AbstractPipeline.evaluate(Unknown Source) ~[?:?]
at java.util.stream.ReferencePipeline.collect(Unknown Source) ~[?:?]
at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnDecisionBehavior.writeDecisionEvaluationEvent(BpmnDecisionBehavior.java:233) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnDecisionBehavior.lambda$evaluateDecision$3(BpmnDecisionBehavior.java:114) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.Either$Right.flatMap(Either.java:366) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.behavior.BpmnDecisionBehavior.evaluateDecision(BpmnDecisionBehavior.java:109) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.task.BusinessRuleTaskProcessor$CalledDecisionBehavior.lambda$onActivate$0(BusinessRuleTaskProcessor.java:89) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.Either$Right.flatMap(Either.java:366) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.task.BusinessRuleTaskProcessor$CalledDecisionBehavior.onActivate(BusinessRuleTaskProcessor.java:89) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.task.BusinessRuleTaskProcessor.onActivate(BusinessRuleTaskProcessor.java:40) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.task.BusinessRuleTaskProcessor.onActivate(BusinessRuleTaskProcessor.java:21) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.lambda$processEvent$2(BpmnStreamProcessor.java:128) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.Either$Right.ifRightOrLeft(Either.java:381) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.processEvent(BpmnStreamProcessor.java:127) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.lambda$processRecord$0(BpmnStreamProcessor.java:110) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.Either$Right.ifRightOrLeft(Either.java:381) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.bpmn.BpmnStreamProcessor.processRecord(BpmnStreamProcessor.java:107) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.TypedRecordProcessor.processRecord(TypedRecordProcessor.java:54) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.lambda$processInTransaction$3(ProcessingStateMachine.java:300) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.db.impl.rocksdb.transaction.ZeebeTransaction.run(ZeebeTransaction.java:84) ~[zeebe-db-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.processInTransaction(ProcessingStateMachine.java:290) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.processCommand(ProcessingStateMachine.java:253) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.tryToReadNextRecord(ProcessingStateMachine.java:213) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.ProcessingStateMachine.readNextRecord(ProcessingStateMachine.java:189) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorJob.invoke(ActorJob.java:79) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorJob.execute(ActorJob.java:44) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorTask.execute(ActorTask.java:122) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorThread.executeCurrentTask(ActorThread.java:97) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorThread.doWork(ActorThread.java:80) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorThread.run(ActorThread.java:189) ~[zeebe-util-8.0.0.jar:8.0.0]
```
</p>
</details>
**Environment:**
- OS: <!-- [e.g. Linux] -->
- Zeebe Version: <!-- [e.g. 0.20.0] -->
- Configuration: <!-- [e.g. exporters etc.] -->
|
process
|
illegalstateexception when writing decision evaluation event describe the bug when trying to write the decision evaluation event an illegalargumentexception is thrown this is because when searching for decision by decision requirements key multiple results with the same decision id are returned java final var decisionkeysbydecisionid decisionstate finddecisionsbydecisionrequirementskey decision getdecisionrequirementskey stream collect collectors tomap persisteddecision bufferasstring persisteddecision getdecisionid decisioninfo new these duplicate decision id cause the tomap function to fail as no merge function is provided the found decisions do all have a different version to reproduce steps to reproduce the behavior if possible add a minimal reproducer code sample when using the java client it was a challenge to reproduce this issue but i found a way to do this it requires drd s that both contain a decision with the same id and a process which contains a business rule task referencing this decision id next follow these steps deploy translateday dmn deploy translatemonth dmn without making any changes redeploy translateday dmn deploy translateprocess dmn start a pi zbctl create instance translateprocess insecure variables day monday month april at this point an exception should be thrown expected behavior no exception should occur log stacktrace full stacktrace java lang illegalstateexception duplicate key checkrisk en attempted merging values decisioninfo and decisioninfo at java util stream collectors duplicatekeyexception unknown source at java util stream collectors lambda uniqkeysmapaccumulator unknown source at java util stream reduceops accept unknown source at java util arraylist arraylistspliterator foreachremaining unknown source at java util stream abstractpipeline copyinto unknown source at java util stream abstractpipeline wrapandcopyinto unknown source at java util stream reduceops reduceop evaluatesequential unknown source at java util stream 
abstractpipeline evaluate unknown source at java util stream referencepipeline collect unknown source at io camunda zeebe engine processing bpmn behavior bpmndecisionbehavior writedecisionevaluationevent bpmndecisionbehavior java at io camunda zeebe engine processing bpmn behavior bpmndecisionbehavior lambda evaluatedecision bpmndecisionbehavior java at io camunda zeebe util either right flatmap either java at io camunda zeebe engine processing bpmn behavior bpmndecisionbehavior evaluatedecision bpmndecisionbehavior java at io camunda zeebe engine processing bpmn task businessruletaskprocessor calleddecisionbehavior lambda onactivate businessruletaskprocessor java at io camunda zeebe util either right flatmap either java at io camunda zeebe engine processing bpmn task businessruletaskprocessor calleddecisionbehavior onactivate businessruletaskprocessor java at io camunda zeebe engine processing bpmn task businessruletaskprocessor onactivate businessruletaskprocessor java at io camunda zeebe engine processing bpmn task businessruletaskprocessor onactivate businessruletaskprocessor java at io camunda zeebe engine processing bpmn bpmnstreamprocessor lambda processevent bpmnstreamprocessor java at io camunda zeebe util either right ifrightorleft either java at io camunda zeebe engine processing bpmn bpmnstreamprocessor processevent bpmnstreamprocessor java at io camunda zeebe engine processing bpmn bpmnstreamprocessor lambda processrecord bpmnstreamprocessor java at io camunda zeebe util either right ifrightorleft either java at io camunda zeebe engine processing bpmn bpmnstreamprocessor processrecord bpmnstreamprocessor java at io camunda zeebe engine processing streamprocessor typedrecordprocessor processrecord typedrecordprocessor java at io camunda zeebe engine processing streamprocessor processingstatemachine lambda processintransaction processingstatemachine java at io camunda zeebe db impl rocksdb transaction zeebetransaction run zeebetransaction java at io 
camunda zeebe engine processing streamprocessor processingstatemachine processintransaction processingstatemachine java at io camunda zeebe engine processing streamprocessor processingstatemachine processcommand processingstatemachine java at io camunda zeebe engine processing streamprocessor processingstatemachine trytoreadnextrecord processingstatemachine java at io camunda zeebe engine processing streamprocessor processingstatemachine readnextrecord processingstatemachine java at io camunda zeebe util sched actorjob invoke actorjob java at io camunda zeebe util sched actorjob execute actorjob java at io camunda zeebe util sched actortask execute actortask java at io camunda zeebe util sched actorthread executecurrenttask actorthread java at io camunda zeebe util sched actorthread dowork actorthread java at io camunda zeebe util sched actorthread run actorthread java environment os zeebe version configuration
| 1
|
88,842
| 8,179,119,982
|
IssuesEvent
|
2018-08-28 15:35:28
|
celery/celery
|
https://api.github.com/repos/celery/celery
|
closed
|
Unable to save pickled objects with couchbase as result backend
|
Component: Canvas Component: Couchbase Results Backend Issue Type: Bug Status: Has Testcase ✔
|
Hi it seems like when I attempt to process groups of chords, the couchbase result backend is consistently failing to unlock the chord when reading from the db:
`celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] retry: Retry in 1s: ValueFormatError()`
This behavior does not occur with the redis result backend, i can switch between them and see that the error unlocking only occurs on couchbase.
## Steps to reproduce
Attempt to process a chord with couchbase backend using pickle serialization.
## Expected behavior
Chords process correctly, and resulting data is fed to the next task
## Actual behavior
Celery is unable to unlock the chord from the result backend
## Celery project info:
```
celery -A ipaassteprunner report
software -> celery:4.1.0 (latentcall) kombu:4.1.0 py:2.7.10
billiard:3.5.0.3 py-amqp:2.2.2
platform -> system:Darwin arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:pyamqp results:couchbase://isadmin:**@localhost:8091/tasks
task_serializer: 'pickle'
result_serializer: 'pickle'
dbconfig: <ipaascommon.ipaas_config.DatabaseConfig object at 0x10fbbfe10>
db_pass: u'********'
IpaasConfig: <class 'ipaascommon.ipaas_config.IpaasConfig'>
imports:
('ipaassteprunner.tasks',)
worker_redirect_stdouts: False
DatabaseConfig: u'********'
db_port: '8091'
ipaas_constants: <module 'ipaascommon.ipaas_constants' from '/Library/Python/2.7/site-packages/ipaascommon/ipaas_constants.pyc'>
enable_utc: True
db_user: 'isadmin'
db_host: 'localhost'
result_backend: u'couchbase://isadmin:********@localhost:8091/tasks'
result_expires: 3600
iconfig: <ipaascommon.ipaas_config.IpaasConfig object at 0x10fbbfd90>
broker_url: u'amqp://guest:********@localhost:5672//'
task_bucket: 'tasks'
accept_content: ['pickle']
```
### Additional Debug output
```
[2017-12-13 15:39:57,860: INFO/MainProcess] Received task: celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] ETA:[2017-12-13 20:39:58.853535+00:00]
[2017-12-13 15:39:57,861: DEBUG/MainProcess] basic.qos: prefetch_count->27
[2017-12-13 15:39:58,859: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x10b410b90> (args:('celery.chord_unlock', 'e3139ae5-a67d-4f0c-8c54-73b1e19433d2', {'origin': 'gen53678@silo2460', 'lang': 'py', 'task': 'celery.chord_unlock', 'group': None, 'root_id': '0acd3e0d-7532-445c-8916-b5fc8a6395ab', u'delivery_info': {u'priority': None, u'redelivered': False, u'routing_key': u'celery', u'exchange': u''}, 'expires': None, u'correlation_id': 'e3139ae5-a67d-4f0c-8c54-73b1e19433d2', 'retries': 311, 'timelimit': [None, None], 'argsrepr': "('90c64bef-21ba-42f9-be75-fdd724375a7a', {'chord_size': 2, 'task': 'ipaassteprunner.tasks.transfer_data', 'subtask_type': None, 'kwargs': {}, 'args': (), 'options': {'chord_size': None, 'chain': [...], 'task_id': '9c6b5e1c-2089-4db7-9590-117aeaf782c7', 'root_id': '0acd3e0d-7532-445c-8916-b5fc8a6395ab', 'parent_id': 'c27c9565-19a6-4683-8180-60f0c25007e9', 'reply_to': '0a58093c-6fdd-3458-9a34-7d5e094ac6a8'}, 'immutable': False})", 'eta': '2017-12-13T20:39:58.853535+00:00', 'parent_id': 'c27c9565-19a6-4683-8180-60f0c25007e9', u'reply_to':... kwargs:{})
[2017-12-13 15:40:00,061: DEBUG/MainProcess] basic.qos: prefetch_count->26
[2017-12-13 15:40:00,065: DEBUG/MainProcess] Task accepted: celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] pid:53679
[2017-12-13 15:40:00,076: INFO/ForkPoolWorker-6] Task celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] retry: Retry in 1s: ValueFormatError()
```
### Stack trace from chord unlocking failure
```python
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/celery/app/trace.py", line 374, in trace_task
R = retval = fun(*args, **kwargs)
File "/Library/Python/2.7/site-packages/celery/app/trace.py", line 629, in __protected_call__
return self.run(*args, **kwargs)
File "/Library/Python/2.7/site-packages/celery/app/builtins.py", line 75, in unlock_chord
raise self.retry(countdown=interval, max_retries=max_retries)
File "/Library/Python/2.7/site-packages/celery/app/task.py", line 689, in retry
raise ret
Retry: Retry in 1s
```
|
1.0
|
Unable to save pickled objects with couchbase as result backend - Hi it seems like when I attempt to process groups of chords, the couchbase result backend is consistently failing to unlock the chord when reading from the db:
`celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] retry: Retry in 1s: ValueFormatError()`
This behavior does not occur with the redis result backend, i can switch between them and see that the error unlocking only occurs on couchbase.
## Steps to reproduce
Attempt to process a chord with couchbase backend using pickle serialization.
## Expected behavior
Chords process correctly, and resulting data is fed to the next task
## Actual behavior
Celery is unable to unlock the chord from the result backend
## Celery project info:
```
celery -A ipaassteprunner report
software -> celery:4.1.0 (latentcall) kombu:4.1.0 py:2.7.10
billiard:3.5.0.3 py-amqp:2.2.2
platform -> system:Darwin arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:pyamqp results:couchbase://isadmin:**@localhost:8091/tasks
task_serializer: 'pickle'
result_serializer: 'pickle'
dbconfig: <ipaascommon.ipaas_config.DatabaseConfig object at 0x10fbbfe10>
db_pass: u'********'
IpaasConfig: <class 'ipaascommon.ipaas_config.IpaasConfig'>
imports:
('ipaassteprunner.tasks',)
worker_redirect_stdouts: False
DatabaseConfig: u'********'
db_port: '8091'
ipaas_constants: <module 'ipaascommon.ipaas_constants' from '/Library/Python/2.7/site-packages/ipaascommon/ipaas_constants.pyc'>
enable_utc: True
db_user: 'isadmin'
db_host: 'localhost'
result_backend: u'couchbase://isadmin:********@localhost:8091/tasks'
result_expires: 3600
iconfig: <ipaascommon.ipaas_config.IpaasConfig object at 0x10fbbfd90>
broker_url: u'amqp://guest:********@localhost:5672//'
task_bucket: 'tasks'
accept_content: ['pickle']
```
### Additional Debug output
```
[2017-12-13 15:39:57,860: INFO/MainProcess] Received task: celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] ETA:[2017-12-13 20:39:58.853535+00:00]
[2017-12-13 15:39:57,861: DEBUG/MainProcess] basic.qos: prefetch_count->27
[2017-12-13 15:39:58,859: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x10b410b90> (args:('celery.chord_unlock', 'e3139ae5-a67d-4f0c-8c54-73b1e19433d2', {'origin': 'gen53678@silo2460', 'lang': 'py', 'task': 'celery.chord_unlock', 'group': None, 'root_id': '0acd3e0d-7532-445c-8916-b5fc8a6395ab', u'delivery_info': {u'priority': None, u'redelivered': False, u'routing_key': u'celery', u'exchange': u''}, 'expires': None, u'correlation_id': 'e3139ae5-a67d-4f0c-8c54-73b1e19433d2', 'retries': 311, 'timelimit': [None, None], 'argsrepr': "('90c64bef-21ba-42f9-be75-fdd724375a7a', {'chord_size': 2, 'task': 'ipaassteprunner.tasks.transfer_data', 'subtask_type': None, 'kwargs': {}, 'args': (), 'options': {'chord_size': None, 'chain': [...], 'task_id': '9c6b5e1c-2089-4db7-9590-117aeaf782c7', 'root_id': '0acd3e0d-7532-445c-8916-b5fc8a6395ab', 'parent_id': 'c27c9565-19a6-4683-8180-60f0c25007e9', 'reply_to': '0a58093c-6fdd-3458-9a34-7d5e094ac6a8'}, 'immutable': False})", 'eta': '2017-12-13T20:39:58.853535+00:00', 'parent_id': 'c27c9565-19a6-4683-8180-60f0c25007e9', u'reply_to':... kwargs:{})
[2017-12-13 15:40:00,061: DEBUG/MainProcess] basic.qos: prefetch_count->26
[2017-12-13 15:40:00,065: DEBUG/MainProcess] Task accepted: celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] pid:53679
[2017-12-13 15:40:00,076: INFO/ForkPoolWorker-6] Task celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] retry: Retry in 1s: ValueFormatError()
```
### Stack trace from chord unlocking failure
```python
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/celery/app/trace.py", line 374, in trace_task
R = retval = fun(*args, **kwargs)
File "/Library/Python/2.7/site-packages/celery/app/trace.py", line 629, in __protected_call__
return self.run(*args, **kwargs)
File "/Library/Python/2.7/site-packages/celery/app/builtins.py", line 75, in unlock_chord
raise self.retry(countdown=interval, max_retries=max_retries)
File "/Library/Python/2.7/site-packages/celery/app/task.py", line 689, in retry
raise ret
Retry: Retry in 1s
```
|
non_process
|
unable to save pickled objects with couchbase as result backend hi it seems like when i attempt to process groups of chords the couchbase result backend is consistently failing to unlock the chord when reading from the db celery chord unlock retry retry in valueformaterror this behavior does not occur with the redis result backend i can switch between them and see that the error unlocking only occurs on couchbase steps to reproduce attempt to process a chord with couchbase backend using pickle serialization expected behavior chords process correctly and resulting data is fed to the next task actual behavior celery is unable to unlock the chord from the result backend celery project info celery a ipaassteprunner report software celery latentcall kombu py billiard py amqp platform system darwin arch imp cpython loader celery loaders app apploader settings transport pyamqp results couchbase isadmin localhost tasks task serializer pickle result serializer pickle dbconfig db pass u ipaasconfig imports ipaassteprunner tasks worker redirect stdouts false databaseconfig u db port ipaas constants enable utc true db user isadmin db host localhost result backend u couchbase isadmin localhost tasks result expires iconfig broker url u amqp guest localhost task bucket tasks accept content additional debug output received task celery chord unlock eta basic qos prefetch count taskpool apply args celery chord unlock origin lang py task celery chord unlock group none root id u delivery info u priority none u redelivered false u routing key u celery u exchange u expires none u correlation id retries timelimit argsrepr chord size task ipaassteprunner tasks transfer data subtask type none kwargs args options chord size none chain task id root id parent id reply to immutable false eta parent id u reply to kwargs basic qos prefetch count task accepted celery chord unlock pid task celery chord unlock retry retry in valueformaterror stack trace from chord unlocking failure python traceback 
most recent call last file library python site packages celery app trace py line in trace task r retval fun args kwargs file library python site packages celery app trace py line in protected call return self run args kwargs file library python site packages celery app builtins py line in unlock chord raise self retry countdown interval max retries max retries file library python site packages celery app task py line in retry raise ret retry retry in
| 0
|
3,433
| 6,533,097,499
|
IssuesEvent
|
2017-08-31 03:48:32
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
closed
|
MSForms control members trigger 'Member does not exist on interface' inspection
|
bug false-positive parse-tree-processing
|
I am getting the following errors:
> Member 'Clear' is not declared on the interface for type 'ListBox'.
> Member 'SetFocus' is not declared on the interface for type 'TextBox'.
These are clearly valid members because I can see them in the VBA code completion, they compile, and they function. You can clear a listbox and you can set the current focus to a textbox. There might be a greater issue with evaluating members of form objects but I couldn't find an open issue for it.
My code exists locally in the form and I am calling the object with no qualifications:
```
serialNumber.SetFocus
partNumber.Clear
```
|
1.0
|
MSForms control members trigger 'Member does not exist on interface' inspection - I am getting the following errors:
> Member 'Clear' is not declared on the interface for type 'ListBox'.
> Member 'SetFocus' is not declared on the interface for type 'TextBox'.
These are clearly valid members because I can see them in the VBA code completion, they compile, and they function. You can clear a listbox and you can set the current focus to a textbox. There might be a greater issue with evaluating members of form objects but I couldn't find an open issue for it.
My code exists locally in the form and I am calling the object with no qualifications:
```
serialNumber.SetFocus
partNumber.Clear
```
|
process
|
msforms control members trigger member does not exist on interface inspection i am getting the following errors member clear is not declared on the interface for type listbox member setfocus is not declared on the interface for type textbox these are clearly valid members because i can see them in the vba code completion they compile and they function you can clear a listbox and you can set the current focus to a textbox there might be a greater issue with evaluating members of form objects but i couldn t find an open issue for it my code exists locally in the form and i am calling the object with no qualifications serialnumber setfocus partnumber clear
| 1
|
13,931
| 16,686,711,269
|
IssuesEvent
|
2021-06-08 08:47:19
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Canceling Union with same layer kills QGIS
|
Bug Crash/Data Corruption Processing
|
<!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
If the Union processing algorithm is executed with the same layer as both input and overlay layer, QGIS closes when the cancel button is pressed. No stacktrace, only the following log output:
```
│2021-06-04T12:27:22.995 Warning: Qt Concurrent has caught an exception thrown from a worker thread. │
│2021-06-04T12:27:22.995 This is not supported, exceptions thrown in worker threads must be │
│2021-06-04T12:27:22.995 caught before control returns to Qt Concurrent. │
│2021-06-04T12:27:23.002 Stacktrace (piped through c++filt): │
│2021-06-04T12:27:23.022 ./output/bin/qgis(+0xe15a)[0x55eb509f215a] │
│2021-06-04T12:27:23.022 ./output/bin/qgis(+0xe880)[0x55eb509f2880] │
│2021-06-04T12:27:23.022 /lib/x86_64-linux-gnu/libQt5Core.so.5(+0xc47c8)[0x7f7b258d97c8] │
│2021-06-04T12:27:23.023 /lib/x86_64-linux-gnu/libQt5Core.so.5(+0xc48e9)[0x7f7b258d98e9] │
│2021-06-04T12:27:23.023 /lib/x86_64-linux-gnu/libQt5Core.so.5(QMessageLogger::warning(char const*, ...) const+0xb6)[0x7f7b258a6476] │
│2021-06-04T12:27:23.023 /lib/x86_64-linux-gnu/libQt5Core.so.5(+0x92dd6)[0x7f7b258a7dd6] │
│2021-06-04T12:27:23.023 /lib/x86_64-linux-gnu/libQt5Core.so.5(+0xccb81)[0x7f7b258e1b81] │
│2021-06-04T12:27:23.023 /lib/x86_64-linux-gnu/libpthread.so.0(+0x8ea7)[0x7f7b257f9ea7] │
│2021-06-04T12:27:23.023 /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f7b22ea7def] │
│2021-06-04T12:27:23.024 ../src/core/qgsmessagelog.cpp:29 : (logMessage) [248312ms] [thread:0x55eb52701c50] 2021-06-04T12:27:23 Qt[1] Qt Concurrent│
│2021-06-04T12:27:23.024 This is not supported, exceptions thrown in worker threads must be │
│2021-06-04T12:27:23.024 caught before control returns to Qt Concurrent. │
│2021-06-04T12:27:23.024 terminate called after throwing an instance of 'Tools::IllegalArgumentException'
```
**How to Reproduce**
1. Load a vector layer
2. Run Union algorithm
3. Select the same layer as both input and overlay
4. Click Run
5. Click Cancel while the algorithm is running
**QGIS and OS versions**
02266ef8e6
Debian bullseye
<!-- In the QGIS Help menu -> About, click in the table, Ctrl+A and then Ctrl+C. Finally paste here -->
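The crash comes from `Tools::IllegalArgumentException` escaping a Qt Concurrent worker thread. A minimal pure-Python sketch (hypothetical names, not the actual QGIS/PyQGIS API) of the kind of up-front validation that would keep the exception from ever being raised inside the worker:

```python
# Hypothetical sketch, not the QGIS API: the reproduce steps boil down to an
# overlay operation receiving the same layer twice. Validating the arguments
# before the job is handed to a worker thread keeps the exception on the
# calling side, where it can be reported as a normal algorithm error.

def run_union(input_layer_id: str, overlay_layer_id: str) -> str:
    """Pretend to run a Union; reject identical input/overlay layers up front."""
    if input_layer_id == overlay_layer_id:
        raise ValueError(
            "input and overlay layer must differ; got the same layer twice"
        )
    return f"union({input_layer_id}, {overlay_layer_id})"
```

The same check could equally be surfaced in the algorithm dialog, so Run is disabled until two distinct layers are selected.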
|
1.0
|
Canceling Union with same layer kills QGIS - <!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
If the Union processing algorithm is executed with the same layer as both input and overlay layer, QGIS closes when the cancel button is pressed. No stacktrace, only the following log output:
```
│2021-06-04T12:27:22.995 Warning: Qt Concurrent has caught an exception thrown from a worker thread. │
│2021-06-04T12:27:22.995 This is not supported, exceptions thrown in worker threads must be │
│2021-06-04T12:27:22.995 caught before control returns to Qt Concurrent. │
│2021-06-04T12:27:23.002 Stacktrace (piped through c++filt): │
│2021-06-04T12:27:23.022 ./output/bin/qgis(+0xe15a)[0x55eb509f215a] │
│2021-06-04T12:27:23.022 ./output/bin/qgis(+0xe880)[0x55eb509f2880] │
│2021-06-04T12:27:23.022 /lib/x86_64-linux-gnu/libQt5Core.so.5(+0xc47c8)[0x7f7b258d97c8] │
│2021-06-04T12:27:23.023 /lib/x86_64-linux-gnu/libQt5Core.so.5(+0xc48e9)[0x7f7b258d98e9] │
│2021-06-04T12:27:23.023 /lib/x86_64-linux-gnu/libQt5Core.so.5(QMessageLogger::warning(char const*, ...) const+0xb6)[0x7f7b258a6476] │
│2021-06-04T12:27:23.023 /lib/x86_64-linux-gnu/libQt5Core.so.5(+0x92dd6)[0x7f7b258a7dd6] │
│2021-06-04T12:27:23.023 /lib/x86_64-linux-gnu/libQt5Core.so.5(+0xccb81)[0x7f7b258e1b81] │
│2021-06-04T12:27:23.023 /lib/x86_64-linux-gnu/libpthread.so.0(+0x8ea7)[0x7f7b257f9ea7] │
│2021-06-04T12:27:23.023 /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f7b22ea7def] │
│2021-06-04T12:27:23.024 ../src/core/qgsmessagelog.cpp:29 : (logMessage) [248312ms] [thread:0x55eb52701c50] 2021-06-04T12:27:23 Qt[1] Qt Concurrent│
│2021-06-04T12:27:23.024 This is not supported, exceptions thrown in worker threads must be │
│2021-06-04T12:27:23.024 caught before control returns to Qt Concurrent. │
│2021-06-04T12:27:23.024 terminate called after throwing an instance of 'Tools::IllegalArgumentException'
```
**How to Reproduce**
1. Load a vector layer
2. Run Union algorithm
3. Select the same layer as both input and overlay
4. Click Run
5. Click Cancel while the algorithm is running
**QGIS and OS versions**
02266ef8e6
Debian bullseye
<!-- In the QGIS Help menu -> About, click in the table, Ctrl+A and then Ctrl+C. Finally paste here -->
|
process
|
canceling union with same layer kills qgis bug fixing and feature development is a community responsibility and not the responsibility of the qgis project alone if this bug report or feature request is high priority for you we suggest engaging a qgis developer or support organisation and financially sponsoring a fix checklist before submitting search through existing issue reports and gis stackexchange com to check whether the issue already exists test with a create a light and self contained sample dataset and project file which demonstrates the issue describe the bug if the union processing algorithm is executed with the same layer as both input and overlay layer qgis closes when the cancel button is pressed no stacktrace only the following log output โ warning qt concurrent has caught an exception thrown from a worker thread โ โ this is not supported exceptions thrown in worker threads must be โ โ caught before control returns to qt concurrent โ โ stacktrace piped through c filt โ โ output bin qgis โ โ output bin qgis โ โ lib linux gnu so โ โ lib linux gnu so โ โ lib linux gnu so qmessagelogger warning char const const โ โ lib linux gnu so โ โ lib linux gnu so โ โ lib linux gnu libpthread so โ โ lib linux gnu libc so clone โ โ src core qgsmessagelog cpp logmessage qt qt concurrentโ โ this is not supported exceptions thrown in worker threads must be โ โ caught before control returns to qt concurrent โ โ terminate called after throwing an instance of tools illegalargumentexception how to reproduce load a vector layer run union algorithm select the same layer as both input and overlay click run click cancel while the algorithm is running qgis and os versions debian bullseye about click in the table ctrl a and then ctrl c finally paste here
| 1
|
9,720
| 12,717,197,692
|
IssuesEvent
|
2020-06-24 04:22:36
|
kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines
|
closed
|
Define Kubeflow Pipelines' Stable Requirements using Kubeflow Community's Process (as defined in Application Requirements Template)
|
kind/process lifecycle/stale status/triaged
|
### What steps did you take:
[A clear and concise description of what the bug is.]
### What happened:
### What did you expect to happen:
### Environment:
<!-- Please fill in those that seem relevant. -->
How did you deploy Kubeflow Pipelines (KFP)?
<!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). -->
KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. -->
KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp -->
### Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
/kind feature
<!-- Please include labels by uncommenting them to help us better triage issues, choose from the following -->
<!--
// /area frontend
// /area backend
// /area sdk
// /area testing
// /area engprod
-->
|
1.0
|
Define Kubeflow Pipelines' Stable Requirements using Kubeflow Community's Process (as defined in Application Requirements Template) - ### What steps did you take:
[A clear and concise description of what the bug is.]
### What happened:
### What did you expect to happen:
### Environment:
<!-- Please fill in those that seem relevant. -->
How did you deploy Kubeflow Pipelines (KFP)?
<!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). -->
KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. -->
KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp -->
### Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
/kind feature
<!-- Please include labels by uncommenting them to help us better triage issues, choose from the following -->
<!--
// /area frontend
// /area backend
// /area sdk
// /area testing
// /area engprod
-->
|
process
|
define kubeflow pipelines stable requirements using kubeflow community s process as defined in application requirements template what steps did you take what happened what did you expect to happen environment how did you deploy kubeflow pipelines kfp kfp version kfp sdk version anything else you would like to add kind feature area frontend area backend area sdk area testing area engprod
| 1
|
11,534
| 30,833,227,849
|
IssuesEvent
|
2023-08-02 04:43:15
|
Koniverse/SubWallet-Extension
|
https://api.github.com/repos/Koniverse/SubWallet-Extension
|
closed
|
Not showing staking record on account using different stash and controller account
|
enhancement extension architecture
|
The current staking feature does not show staking data when the controller account is different from the stash account. More updates later. Expected to be resolved after the architecture update
|
1.0
|
Not showing staking record on account using different stash and controller account - The current staking feature does not show staking data when the controller account is different from the stash account. More updates later. Expected to be resolved after the architecture update
|
non_process
|
not showing staking record on account using different stash and controller account current staking feature does not show staking data in case the controller account is different from the stash account more update later expected to be resolved after architecture update
| 0
|
64,507
| 14,666,140,062
|
IssuesEvent
|
2020-12-29 15:41:13
|
jgeraigery/experian-java
|
https://api.github.com/repos/jgeraigery/experian-java
|
opened
|
CVE-2020-9547 (High) detected in jackson-databind-2.9.2.jar
|
security vulnerability
|
## CVE-2020-9547 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: experian-java/MavenWorkspace/bis-services-lib/bis-services-base/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.2/jackson-databind-2.9.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/experian-java/commit/d89dcb23dbf81afc230b102b366ac005def1fe39">d89dcb23dbf81afc230b102b366ac005def1fe39</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to com.ibatis.sqlmap.engine.transaction.jta.JtaTransactionConfig (aka ibatis-sqlmap).
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9547>CVE-2020-9547</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9547">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9547</a></p>
<p>Release Date: 2020-03-02</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.3</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.2","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.10.3"}],"vulnerabilityIdentifier":"CVE-2020-9547","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to com.ibatis.sqlmap.engine.transaction.jta.JtaTransactionConfig (aka ibatis-sqlmap).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9547","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-9547 (High) detected in jackson-databind-2.9.2.jar - ## CVE-2020-9547 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: experian-java/MavenWorkspace/bis-services-lib/bis-services-base/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.2/jackson-databind-2.9.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/experian-java/commit/d89dcb23dbf81afc230b102b366ac005def1fe39">d89dcb23dbf81afc230b102b366ac005def1fe39</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to com.ibatis.sqlmap.engine.transaction.jta.JtaTransactionConfig (aka ibatis-sqlmap).
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9547>CVE-2020-9547</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9547">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9547</a></p>
<p>Release Date: 2020-03-02</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.3</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.2","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.10.3"}],"vulnerabilityIdentifier":"CVE-2020-9547","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to com.ibatis.sqlmap.engine.transaction.jta.JtaTransactionConfig (aka ibatis-sqlmap).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9547","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file experian java mavenworkspace bis services lib bis services base pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com ibatis sqlmap engine transaction jta jtatransactionconfig aka ibatis sqlmap publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com ibatis sqlmap engine transaction jta jtatransactionconfig aka ibatis sqlmap vulnerabilityurl
| 0
|
379
| 2,823,565,061
|
IssuesEvent
|
2015-05-21 09:36:33
|
austundag/testing
|
https://api.github.com/repos/austundag/testing
|
closed
|
Only the allergies from the logon VistA instance show up
|
enhancement in process
|
It appears that currently only allergies from one VistA instance (PANAROMA and KODAK) shows up in eHMP. From eHMP presentation allergies from both instances should show up although they are identical. Investigation is needed to understand the reason and change settings if there is any.
|
1.0
|
Only the allergies from the logon VistA instance show up - It appears that currently only allergies from one VistA instance (PANAROMA and KODAK) shows up in eHMP. From eHMP presentation allergies from both instances should show up although they are identical. Investigation is needed to understand the reason and change settings if there is any.
|
process
|
only the allergies from the logon vista instance show up it appears that currently only allergies from one vista instance panaroma and kodak shows up in ehmp from ehmp presentation allergies from both instances should show up although they are identical investigation is needed to understand the reason and change settings if there is any
| 1
|
5,465
| 8,328,747,091
|
IssuesEvent
|
2018-09-27 02:32:33
|
uccser/verto
|
https://api.github.com/repos/uccser/verto
|
closed
|
Add blockquote tag
|
processor implementation
|
The standard Markdown blockquote tag is limited in formatting (can be improved by CSS), but it would be nice to have a Verto tag to allow finer editing, especially for automatic use with [Bootstrap 4](https://getbootstrap.com/docs/4.1/content/typography/#blockquotes). Possibly could look like:
```markdown
{blockquote}
First and foremost, we believe that speed is more than a feature.
- [Fred Wilson](https://en.wikipedia.org/wiki/Fred_Wilson_(financier))
{blockquote end}
```
This could be used with a template like the following:
```html
<blockquote class="blockquote">
{{ content }}
<footer class="blockquote-footer">
{{ source_content}}
</footer>
</blockquote>
```
to produce something like the following
```html
<blockquote class="blockquote">
<p>First and foremost, we believe that speed is more than a feature.</p>
<footer class="blockquote-footer">
[Fred Wilson](https://en.wikipedia.org/wiki/Fred_Wilson_(financier))
</footer>
</blockquote>
```
Would need to figure out how to detect footer information (could be an argument value like image `alt` text, though that wouldn't work for in-context translation; this may be a worthwhile compromise for a simpler tag).
## Optional arguments
- **align** - Text value to be passed through to template.
- **source** - URL text value to be passed through to the template.
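As a sketch of the footer-detection idea, a minimal pure-Python transform of the proposed syntax (illustrative only, not the actual Verto processor: it assumes the `{blockquote}` syntax above and treats a trailing `- ...` line as the footer/source content):

```python
import re

# Hypothetical sketch of the proposed {blockquote} tag, not Verto's real
# implementation. A final line starting with "- " inside the block is taken
# as the footer, which is one way of detecting footer information without
# an extra argument.
BLOCK_RE = re.compile(r"\{blockquote\}\n(.*?)\{blockquote end\}", re.DOTALL)

def render_blockquote(markdown_text: str) -> str:
    def repl(match: "re.Match") -> str:
        lines = match.group(1).strip().splitlines()
        footer = lines.pop()[2:] if lines and lines[-1].startswith("- ") else ""
        content = "<p>" + " ".join(lines).strip() + "</p>"
        footer_html = (
            f'\n<footer class="blockquote-footer">\n{footer}\n</footer>'
            if footer
            else ""
        )
        return (
            f'<blockquote class="blockquote">\n{content}{footer_html}\n</blockquote>'
        )

    return BLOCK_RE.sub(repl, markdown_text)
```

Running it over the example block in this issue yields the Bootstrap-style HTML shown above, with the `[Fred Wilson](...)` line lifted into the `blockquote-footer` element.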
|
1.0
|
Add blockquote tag - The standard Markdown blockquote tag is limited in formatting (can be improved by CSS), but it would be nice to have a Verto tag to allow finer editing, especially for automatic use with [Bootstrap 4](https://getbootstrap.com/docs/4.1/content/typography/#blockquotes). Possibly could look like:
```markdown
{blockquote}
First and foremost, we believe that speed is more than a feature.
- [Fred Wilson](https://en.wikipedia.org/wiki/Fred_Wilson_(financier))
{blockquote end}
```
This could be used with a template like the following:
```html
<blockquote class="blockquote">
{{ content }}
<footer class="blockquote-footer">
{{ source_content}}
</footer>
</blockquote>
```
to produce something like the following
```html
<blockquote class="blockquote">
<p>First and foremost, we believe that speed is more than a feature.</p>
<footer class="blockquote-footer">
[Fred Wilson](https://en.wikipedia.org/wiki/Fred_Wilson_(financier))
</footer>
</blockquote>
```
Would need to figure out how to detect footer information (could be an argument value like image `alt` text, though that wouldn't work for in-context translation; this may be a worthwhile compromise for a simpler tag).
## Optional arguments
- **align** - Text value to be passed through to template.
- **source** - URL text value to be passed through to the template.
|
process
|
add blockquote tag the standard markdown blockquote tag is limited in formatting can be improved by css but it would be nice to have a verto tag to allow finer editing especially for automatic use with possibly could look like markdown blockquote first and foremost we believe that speed is more than a feature blockquote end this could be used with a template like the following html content source content to produce something like the following html first and foremost we believe that speed is more than a feature would need to figure out how to detect footer information could be argument value like image alt text though it wouldn t work for in context translation though this may be a worth compromise for a simpler tag optional arguments align text value to be passed through to template source url text value to be passed through to the template
| 1
|
212,171
| 16,430,824,876
|
IssuesEvent
|
2021-05-20 01:09:16
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
runner-ct: cy.screenshot wrong for very long viewports
|
component testing stage: review/qa
|
<!-- Use the template below to report a bug. Fill in as much info as possible.
Have a question? Start a new discussion: https://github.com/cypress-io/cypress/discussions
As an open source project with a small maintainer team, it may take some time for your issue to be addressed. Please be patient and we will respond as soon as we can. -->
### Current behavior
<!-- A description including screenshots, stack traces, DEBUG logs, etc. See https://on.cypress.io/troubleshooting -->
Left: e2e runner with viewport set to 200, 2000
Right: CT runner with viewport set to 200, 2000

Screenshot persisted is also wrong, but in a different way:

### Desired behavior
<!-- Remember, we are not familiar with the application you're testing, so please provide a clear description of what should happen.-->
Should be what you would expect.
### Test code to reproduce
<!-- Provide test code that we can copy, paste, and run on our machine to see the issue. -->
<!-- You could also provide a repo that we can clone and run. You can fork the https://github.com/cypress-io/cypress-test-tiny repo, set up a failing test, then link to your fork. -->
### Versions
<!-- Cypress version, last known working Cypress version (if applicable), Browser and version, Operating System, CI Provider, etc -->
<!-- If possible, please update Cypress to latest version and check if the bug is still present. -->
|
1.0
|
runner-ct: cy.screenshot wrong for very long viewports - <!-- Use the template below to report a bug. Fill in as much info as possible.
Have a question? Start a new discussion: https://github.com/cypress-io/cypress/discussions
As an open source project with a small maintainer team, it may take some time for your issue to be addressed. Please be patient and we will respond as soon as we can. -->
### Current behavior
<!-- A description including screenshots, stack traces, DEBUG logs, etc. See https://on.cypress.io/troubleshooting -->
Left: e2e runner with viewport set to 200, 2000
Right: CT runner with viewport set to 200, 2000

Screenshot persisted is also wrong, but in a different way:

### Desired behavior
<!-- Remember, we are not familiar with the application you're testing, so please provide a clear description of what should happen.-->
Should be what you would expect.
### Test code to reproduce
<!-- Provide test code that we can copy, paste, and run on our machine to see the issue. -->
<!-- You could also provide a repo that we can clone and run. You can fork the https://github.com/cypress-io/cypress-test-tiny repo, set up a failing test, then link to your fork. -->
### Versions
<!-- Cypress version, last known working Cypress version (if applicable), Browser and version, Operating System, CI Provider, etc -->
<!-- If possible, please update Cypress to latest version and check if the bug is still present. -->
|
non_process
|
runner ct cy screenshot wrong for very long viewports ๐ use the template below to report a bug fill in as much info as possible have a question start a new discussion ๐ as an open source project with a small maintainer team it may take some time for your issue to be addressed please be patient and we will respond as soon as we can ๐ current behavior left runner with viewport set to right ct runner with viewport set to screenshot persisted is also wrong but in a different way desired behavior should be what you would expect test code to reproduce versions
| 0
|
202,653
| 15,294,133,202
|
IssuesEvent
|
2021-02-24 01:47:58
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
Unexpected collapse/expand selfhost test provider behavior
|
polish testing
|
Testing #117305
When I expand/collapse test sections (i.e. just clicking on the icon/expansion symbol below), I don't expect the view to change to that file.

I would expect to be brought to that file if I clicked more on the test/section's name, not just on the expansion symbol. Or at the very least, if I'm collapsing a section, I wouldn't expect for it to open - only if I'm expanding it.
I also find that sometimes when I expand/close a section, it doesn't open the file, while at other (most?) times it does.
|
1.0
|
Unexpected collapse/expand selfhost test provider behavior - Testing #117305
When I expand/collapse test sections (i.e. just clicking on the icon/expansion symbol below), I don't expect the view to change to that file.

I would expect to be brought to that file if I clicked more on the test/section's name, not just on the expansion symbol. Or at the very least, if I'm collapsing a section, I wouldn't expect for it to open - only if I'm expanding it.
I also find that sometimes when I expand/close a section, it doesn't open the file, while at other (most?) times it does.
|
non_process
|
unexpected collapse expand selfhost test provider behavior testing when i expand collapse test sections i e just clicking on the icon expansion symbol below i don t expect the view to change to that file i would expect to be brought to that file if i clicked more on the test section s name not just on the expansion symbol or at the very least if i m collapsing a section i wouldn t expect for it to open only if i m expanding it i also find that sometimes when i expand close a section it doesn t open the file while others most times it does
| 0
|
5,933
| 8,755,427,482
|
IssuesEvent
|
2018-12-14 14:51:12
|
w3c/wg-effectiveness
|
https://api.github.com/repos/w3c/wg-effectiveness
|
closed
|
is limiting continuous incubation prohibiting organic growth?
|
Process
|
The [Continuous Incubation](https://github.com/w3c/wg-effectiveness/blob/master/continuous_incubation.md) document describes a scenario of fresh ideas changing the course of a WG. This can be problematic, but it can also demonstrate that a group has adapted its needs organically. A healthy WG will work on real implementations as it develops and may need to change course. I am wary of being overly prescriptive about growth and incubation. Most of this should be covered by staying within the charter.
|
1.0
|
is limiting continuous incubation prohibiting organic growth? - The [Continuous Incubation](https://github.com/w3c/wg-effectiveness/blob/master/continuous_incubation.md) document describes a scenario of fresh ideas changing the course of a WG. This can be problematic, but it can also demonstrate that a group has adapted its needs organically. A healthy WG will work on real implementations as it develops and may need to change course. I am wary of being overly prescriptive about growth and incubation. Most of this should be covered by staying within the charter.
|
process
|
is limiting continuous incubation prohibiting organic growth the document describes a scenario of fresh ideas changing the course of a wg this can be problematic but it can also demonstrate that a group has adapted its needs organically a healthy wg will work on real implementations as it develops and may need to change course i am wary of being overly prescriptive about growth and incubation most of this should be covered by staying within the charter
| 1
|
6,004
| 8,808,939,439
|
IssuesEvent
|
2018-12-27 16:59:31
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
closed
|
plus menu, creating a project
|
2.0.6 Fixed Process bug
|
when trying to create a project using the plus menu, instead of creating a project at the end of the list, it instead jumps to the first project on the list
|
1.0
|
plus menu, creating a project - when trying to create a project using the plus menu, instead of creating a project at the end of the list, it instead jumps to the first project on the list
|
process
|
plus menu creating a project when trying to create a project using the plus menu instead of creating a project at the end of the list it instead jumps to the first project on the list
| 1
|
540,620
| 15,814,785,401
|
IssuesEvent
|
2021-04-05 10:02:47
|
AY2021S2-CS2113-F10-1/tp
|
https://api.github.com/repos/AY2021S2-CS2113-F10-1/tp
|
closed
|
[PE-D] filter feature exceptions
|
priority.High severity.High type.Bug
|
If the filter feature is only able to take in certain values for filter type <value> (eg 1-5 room) in certain format, the right error message should be prompt to the user instead of just take in the value user typed and return no result. I key in filter type 4room and no result came out for the find. Maybe can try to use .contain() to slove this problem.

<!--session: 1617437414602-29b8db1a-ad79-439f-85c0-23508c43b097-->
-------------
Labels: `severity.High` `type.FeatureFlaw`
original: e00426142/ped#5
|
1.0
|
[PE-D] filter feature exceptions - If the filter feature is only able to take in certain values for filter type <value> (eg 1-5 room) in certain format, the right error message should be prompt to the user instead of just take in the value user typed and return no result. I key in filter type 4room and no result came out for the find. Maybe can try to use .contain() to slove this problem.

<!--session: 1617437414602-29b8db1a-ad79-439f-85c0-23508c43b097-->
-------------
Labels: `severity.High` `type.FeatureFlaw`
original: e00426142/ped#5
|
non_process
|
filter feature exceptions if the filter feature is only able to take in certain values for filter type eg room in certain format the right error message should be prompt to the user instead of just take in the value user typed and return no result i key in filter type and no result came out for the find maybe can try to use contain to slove this problem labels severity high type featureflaw original ped
| 0
|
272,504
| 8,514,247,028
|
IssuesEvent
|
2018-10-31 18:02:16
|
zulip/zulip
|
https://api.github.com/repos/zulip/zulip
|
opened
|
Add badge to guest avatars.
|
area: misc priority: high
|
It's a bit of a security issue for guests and members to look the same in the Zulip UI, for social engineering reasons.
https://github.com/zulip/zulip/commit/deb29749c2c3c5bfae7108d0bdbc0a83f7771070 adds the necessary piping for the frontend to know whether a user is a guest or not. Two things remaining are:
* [ ] Add CSS to make the avatar look like this

* [ ] In the user popover, add a line under Local time that says "Administrator", "Member", or "Guest" depending on what they are.

* [ ] In the user profile, under "Joined" add a field "Role" that is "Administrator", "Member", or "Guest" as appropriate.

|
1.0
|
Add badge to guest avatars. - It's a bit of a security issue for guests and members to look the same in the Zulip UI, for social engineering reasons.
https://github.com/zulip/zulip/commit/deb29749c2c3c5bfae7108d0bdbc0a83f7771070 adds the necessary piping for the frontend to know whether a user is a guest or not. Two things remaining are:
* [ ] Add CSS to make the avatar look like this

* [ ] In the user popover, add a line under Local time that says "Administrator", "Member", or "Guest" depending on what they are.

* [ ] In the user profile, under "Joined" add a field "Role" that is "Administrator", "Member", or "Guest" as appropriate.

|
non_process
|
add badge to guest avatars it s a bit of a security issue for guests and members to look the same in the zulip ui since for social engineering reasons adds the necessary piping for the frontend to know whether a user is a guest or not two things remaining are add css to make the avatar look like this in the user popover add a line under local time that says administrator member or guest depending on what they are in the user profile under joined add a field role that is administrator member or guest as appropriate
| 0
|
14,353
| 17,375,115,781
|
IssuesEvent
|
2021-07-30 19:44:01
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Clarify when template parameters are actually parsed
|
Pri2 devops-cicd-process/tech devops/prod doc-enhancement
|
The documentation doesn't state when template parameters are actually replaced in the template. I always assumed they would be replaced at compile time (since inside the template the parameter expression (`${{`) is used) and for this type of expression the documentation states they are replaced at compile time.
But recently I ran into a weird issue. I needed to pass a value to a template that is defined in a variable group. Now since the following works:
**Example 1:**
```
trigger: none
variables:
- name: VariablesTest
value: "value1"
jobs:
- template: TestTemplate.yaml
parameters:
parameter1: ${{ variables.VariablesTest }}
```
I would assume that the following also works:
**Example 2:**
```
trigger: none
variables:
- group: PaulTest #this contains the `VariablesTest` variable
jobs:
- template: TestTemplate.yaml
parameters:
parameter1: ${{ variables.VariablesTest }}
```
However, this causes the parameter value to be empty. Strangely enough, the following DOES work:
**Example 3:**
```
trigger: none
variables:
- group: PaulTest #this contains the `VariablesTest` variable
jobs:
- template: TestTemplate.yaml
parameters:
parameter1: $(VariablesTest)
```
I was under the impression that you could not use `$(` variables as parameter values, because they were expanded compile time, and `$(` variables are expanded at runtime. So I don't understand the rules here, and the documentation doesn't state anything about this. And if this last example works (which it does), why doesn't the second example also work? Since the documentation states you can use variables in template expressions as long as these variables are present when the pipeline is compiled, which, looking at the 3rd example, is also the case when they are added via a variable group.
I'm really confused as to how this all works, and the docs don't help me. Please explain it to me (and hopefully update the docs)
**Edit:**
I found [this doc](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/runs?view=azure-devops#process-the-pipeline) stating:
> It also answers another common issue: why can't I use variables to resolve service connection / environment names? Resources are authorized before a stage can start running, so stage- and job-level variables aren't available. Pipeline-level variables can be used, but only those explicitly included in the pipeline. <ins>**Variable groups are themselves a resource subject to authorization, so their data is likewise not available when checking resource authorization**</ins>.
But this simply doesn't seem to be true, given that example 3 works. So now I'm even more confused
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6724abea-bbdc-bf66-ed5e-3214fa6c3e66
* Version Independent ID: 4f8dab21-3f0e-da32-cc0e-1d85c13c0065
* Content: [Templates - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops)
* Content Source: [docs/pipelines/process/templates.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/templates.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Clarify when template parameters are actually parsed - The documentation doesn't state when template parameters are actually replaced in the template. I always assumed they would be replaced at compile time (since inside the template the parameter expression (`${{`) is used) and for this type of expression the documentation states they are replaced at compile time.
But recently I ran into a weird issue. I needed to pass a value to a template that is defined in a variable group. Now since the following works:
**Example 1:**
```
trigger: none
variables:
- name: VariablesTest
value: "value1"
jobs:
- template: TestTemplate.yaml
parameters:
parameter1: ${{ variables.VariablesTest }}
```
I would assume that the following also works:
**Example 2:**
```
trigger: none
variables:
- group: PaulTest #this contains the `VariablesTest` variable
jobs:
- template: TestTemplate.yaml
parameters:
parameter1: ${{ variables.VariablesTest }}
```
However, this causes the parameter value to be empty. Strangely enough, the following DOES work:
**Example 3:**
```
trigger: none
variables:
- group: PaulTest #this contains the `VariablesTest` variable
jobs:
- template: TestTemplate.yaml
parameters:
parameter1: $(VariablesTest)
```
I was under the impression that you could not use `$(` variables as parameter values, because they were expanded compile time, and `$(` variables are expanded at runtime. So I don't understand the rules here, and the documentation doesn't state anything about this. And if this last example works (which it does), why doesn't the second example also work? Since the documentation states you can use variables in template expressions as long as these variables are present when the pipeline is compiled, which, looking at the 3rd example, is also the case when they are added via a variable group.
I'm really confused as to how this all works, and the docs don't help me. Please explain it to me (and hopefully update the docs)
**Edit:**
I found [this doc](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/runs?view=azure-devops#process-the-pipeline) stating:
> It also answers another common issue: why can't I use variables to resolve service connection / environment names? Resources are authorized before a stage can start running, so stage- and job-level variables aren't available. Pipeline-level variables can be used, but only those explicitly included in the pipeline. <ins>**Variable groups are themselves a resource subject to authorization, so their data is likewise not available when checking resource authorization**</ins>.
But this simply doesn't seem to be true, given that example 3 works. So now I'm even more confused
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6724abea-bbdc-bf66-ed5e-3214fa6c3e66
* Version Independent ID: 4f8dab21-3f0e-da32-cc0e-1d85c13c0065
* Content: [Templates - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops)
* Content Source: [docs/pipelines/process/templates.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/templates.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
clarify when template parameters are actually parsed the documentation doesn t state when template parameters are actually replace in the template i always assumed they would be replace compile time since inside the template the parameter expression is used and for this type of expression the documentation states they are replaced compile time but recently i ran into a weird issue i needed to pass a value to a template that is defined in a variable group now since the following works example trigger none variables name variablestest value jobs template testtemplate yaml parameters variables variablestest i would assume that the following also works example trigger none variables group paultest this contains the variablestest variable jobs template testtemplate yaml parameters variables variablestest however this causes the parameter value to be empty strangely enough the following does work example trigger none variables group paultest this contains the variablestest variable jobs template testtemplate yaml parameters variablestest i was under the impression that you could not use variables as parameter values because they were expanded compile time and variables are expanded at runtime so i don t understand the rules here and the documentation doesn t state anything about this and if this last example works which it does why doesn t the second example also work since the documentation states you can use variables in template expressions as long as these variables are present when the pipeline is compiled which looking at the example is also the case when they are added via a variable group i m really confused as to how this all works and the docs don t help me please explain it to me and hopefully update the docs edit i found stating it also answers another common issue why can t i use variables to resolve service connection environment names resources are authorized before a stage can start running so stage and job level variables aren t available pipeline level 
variables can be used but only those explicitly included in the pipeline variable groups are themselves a resource subject to authorization so their data is likewise not available when checking resource authorization but this simply doesn t seem to be true given that example works so now i m even more confused document details do not edit this section it is required for docs microsoft com github issue linking id bbdc version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
20,567
| 27,228,360,909
|
IssuesEvent
|
2023-02-21 11:20:02
|
corona-warn-app/cwa-wishlist
|
https://api.github.com/repos/corona-warn-app/cwa-wishlist
|
closed
|
Version 3.0: Do not show how long the app needs to be installed before a SRS warning can be issued
|
enhancement mirrored-to-jira Test/Share process SRS
|
## Upcoming Implementation
If the user just installed the app and tries to warn others with a SRS test, the app will show that a warning can not yet be issued and that the app needs to be installed for at least x days/hours.
## Suggested change to the upcoming implementation
I **strongly** suggest to not show how long the app needs to be installed until a SRS warning can be issued, as this might encourage trolls to leave the app installed, because they have a clear goal in front of their eyes when they can troll other users. If the app doesn't give them a goal, but only informs the user that a SRS warning can't be issued at that point of time, I'm quite sure that many trolls will give up and uninstall the app again.
## Expected Benefits
In my opinion, this change could drastically reduce the number of trolls who install the app just to warn others, although they are not positive.
## Related issue
- https://github.com/corona-warn-app/cwa-wishlist/issues/872
---
Internal Tracking ID: [EXPOSUREAPP-14519](https://jira-ibs.wbs.net.sap/browse/EXPOSUREAPP-14519)
|
1.0
|
Version 3.0: Do not show how long the app needs to be installed before a SRS warning can be issued - ## Upcoming Implementation
If the user just installed the app and tries to warn others with a SRS test, the app will show that a warning can not yet be issued and that the app needs to be installed for at least x days/hours.
## Suggested change to the upcoming implementation
I **strongly** suggest to not show how long the app needs to be installed until a SRS warning can be issued, as this might encourage trolls to leave the app installed, because they have a clear goal in front of their eyes when they can troll other users. If the app doesn't give them a goal, but only informs the user that a SRS warning can't be issued at that point of time, I'm quite sure that many trolls will give up and uninstall the app again.
## Expected Benefits
In my opinion, this change could drastically reduce the number of trolls who install the app just to warn others, although they are not positive.
## Related issue
- https://github.com/corona-warn-app/cwa-wishlist/issues/872
---
Internal Tracking ID: [EXPOSUREAPP-14519](https://jira-ibs.wbs.net.sap/browse/EXPOSUREAPP-14519)
|
process
|
version do not show how long the app needs to be installed before a srs warning can be issued upcoming implementation if the user just installed the app and tries to warn others with a srs test the app will show that a warning can not yet be issued and that the app needs to be installed for at least x days hours suggested change to the upcoming implementation i strongly suggest to not show how long the app needs to be installed until a srs warning can be issued as this might encourage trolls to leave the app installed because they have a clear goal in front of their eyes when they can troll other users if the app doesn t give them a goal but only informs the user that a srs warning can t be issued at that point of time i m quite sure that many trolls will give up and uninstall the app again expected benefits in my opinion this change could drastically reduce the number of trolls who install the app just to warn others although they are not positive related issue internal tracking id
| 1
|
11,739
| 14,581,659,206
|
IssuesEvent
|
2020-12-18 11:04:05
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
Site participant registry > Invited tab > Alignment issue
|
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
Site participant registry > Invited tab > Enrolled participants > Alignment should be same as invision screen

|
3.0
|
Site participant registry > Invited tab > Alignment issue - Site participant registry > Invited tab > Enrolled participants > Alignment should be same as invision screen

|
process
|
site participant registry invited tab alignment issue site participant registry invited tab enrolled participants alignment should be same as invision screen
| 1
|
481,385
| 13,885,114,114
|
IssuesEvent
|
2020-10-18 18:41:07
|
webkom/lego
|
https://api.github.com/repos/webkom/lego
|
closed
|
CompanyInterest submission validation error
|
priority:high
|
So the interest form is broken in prod, so our updated changes won't work (as the normal form does not work).


```
Traceback (most recent call last):
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/rest_framework/viewsets.py", line 116, in view
return self.dispatch(request, *args, **kwargs)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/rest_framework/views.py", line 495, in dispatch
response = self.handle_exception(exc)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/rest_framework/views.py", line 455, in handle_exception
self.raise_uncaught_exception(exc)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/rest_framework/views.py", line 466, in raise_uncaught_exception
raise exc
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/rest_framework/views.py", line 492, in dispatch
response = handler(request, *args, **kwargs)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/rest_framework/mixins.py", line 21, in create
self.perform_create(serializer)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/rest_framework/mixins.py", line 26, in perform_create
serializer.save()
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/rest_framework/serializers.py", line 214, in save
self.instance = self.create(validated_data)
File "/Users/smith/code/abakus/lego/lego/apps/companies/serializers.py", line 235, in create
company_interest.semesters.add(*semesters)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/models/fields/related_descriptors.py", line 938, in add
through_defaults=through_defaults,
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/models/fields/related_descriptors.py", line 1067, in _add_items
new_ids.difference_update(vals)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/models/query.py", line 274, in __iter__
self._fetch_all()
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/models/query.py", line 1242, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/models/query.py", line 182, in __iter__
for row in compiler.results_iter(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size):
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1092, in results_iter
results = self.execute_sql(MULTI, chunked_fetch=chunked_fetch, chunk_size=chunk_size)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1140, in execute_sql
cursor.execute(sql, params)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/debug_toolbar/panels/sql/tracking.py", line 198, in execute
return self._record(self.cursor.execute, sql, params)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/debug_toolbar/panels/sql/tracking.py", line 133, in _record
return method(sql, params)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/backends/utils.py", line 99, in execute
return super().execute(sql, params)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/backends/utils.py", line 79, in _execute
self.db.validate_no_broken_transaction()
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/backends/base/base.py", line 438, in validate_no_broken_transaction
"An error occurred in the current transaction. You can't "
django.db.transaction.TransactionManagementError: An error occurred in the current transaction. You can't execute queries until the end of the 'atomic' block.
```
|
1.0
|
CompanyInterest submission validation error - So the interest form is broken in prod, so our updated changes won't work (as the normal form does not work).


```
Traceback (most recent call last):
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/rest_framework/viewsets.py", line 116, in view
return self.dispatch(request, *args, **kwargs)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/rest_framework/views.py", line 495, in dispatch
response = self.handle_exception(exc)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/rest_framework/views.py", line 455, in handle_exception
self.raise_uncaught_exception(exc)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/rest_framework/views.py", line 466, in raise_uncaught_exception
raise exc
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/rest_framework/views.py", line 492, in dispatch
response = handler(request, *args, **kwargs)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/rest_framework/mixins.py", line 21, in create
self.perform_create(serializer)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/rest_framework/mixins.py", line 26, in perform_create
serializer.save()
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/rest_framework/serializers.py", line 214, in save
self.instance = self.create(validated_data)
File "/Users/smith/code/abakus/lego/lego/apps/companies/serializers.py", line 235, in create
company_interest.semesters.add(*semesters)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/models/fields/related_descriptors.py", line 938, in add
through_defaults=through_defaults,
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/models/fields/related_descriptors.py", line 1067, in _add_items
new_ids.difference_update(vals)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/models/query.py", line 274, in __iter__
self._fetch_all()
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/models/query.py", line 1242, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/models/query.py", line 182, in __iter__
for row in compiler.results_iter(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size):
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1092, in results_iter
results = self.execute_sql(MULTI, chunked_fetch=chunked_fetch, chunk_size=chunk_size)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1140, in execute_sql
cursor.execute(sql, params)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/debug_toolbar/panels/sql/tracking.py", line 198, in execute
return self._record(self.cursor.execute, sql, params)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/debug_toolbar/panels/sql/tracking.py", line 133, in _record
return method(sql, params)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/backends/utils.py", line 99, in execute
return super().execute(sql, params)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/backends/utils.py", line 79, in _execute
self.db.validate_no_broken_transaction()
File "/Users/smith/code/abakus/lego/venv/lib/python3.7/site-packages/django/db/backends/base/base.py", line 438, in validate_no_broken_transaction
"An error occurred in the current transaction. You can't "
django.db.transaction.TransactionManagementError: An error occurred in the current transaction. You can't execute queries until the end of the 'atomic' block.
```
|
non_process
|
companyinterest submission validation error so the interest form is broken in prod so our updated changes won t work as the normal form does not work traceback most recent call last file users smith code abakus lego venv lib site packages django core handlers exception py line in inner response get response request file users smith code abakus lego venv lib site packages django core handlers base py line in get response response self process exception by middleware e request file users smith code abakus lego venv lib site packages django core handlers base py line in get response response wrapped callback request callback args callback kwargs file users smith code abakus lego venv lib site packages django views decorators csrf py line in wrapped view return view func args kwargs file users smith code abakus lego venv lib site packages rest framework viewsets py line in view return self dispatch request args kwargs file users smith code abakus lego venv lib site packages rest framework views py line in dispatch response self handle exception exc file users smith code abakus lego venv lib site packages rest framework views py line in handle exception self raise uncaught exception exc file users smith code abakus lego venv lib site packages rest framework views py line in raise uncaught exception raise exc file users smith code abakus lego venv lib site packages rest framework views py line in dispatch response handler request args kwargs file users smith code abakus lego venv lib site packages rest framework mixins py line in create self perform create serializer file users smith code abakus lego venv lib site packages rest framework mixins py line in perform create serializer save file users smith code abakus lego venv lib site packages rest framework serializers py line in save self instance self create validated data file users smith code abakus lego lego apps companies serializers py line in create company interest semesters add semesters file users smith code 
abakus lego venv lib site packages django db models fields related descriptors py line in add through defaults through defaults file users smith code abakus lego venv lib site packages django db models fields related descriptors py line in add items new ids difference update vals file users smith code abakus lego venv lib site packages django db models query py line in iter self fetch all file users smith code abakus lego venv lib site packages django db models query py line in fetch all self result cache list self iterable class self file users smith code abakus lego venv lib site packages django db models query py line in iter for row in compiler results iter chunked fetch self chunked fetch chunk size self chunk size file users smith code abakus lego venv lib site packages django db models sql compiler py line in results iter results self execute sql multi chunked fetch chunked fetch chunk size chunk size file users smith code abakus lego venv lib site packages django db models sql compiler py line in execute sql cursor execute sql params file users smith code abakus lego venv lib site packages debug toolbar panels sql tracking py line in execute return self record self cursor execute sql params file users smith code abakus lego venv lib site packages debug toolbar panels sql tracking py line in record return method sql params file users smith code abakus lego venv lib site packages django db backends utils py line in execute return super execute sql params file users smith code abakus lego venv lib site packages django db backends utils py line in execute return self execute with wrappers sql params many false executor self execute file users smith code abakus lego venv lib site packages django db backends utils py line in execute with wrappers return executor sql params many context file users smith code abakus lego venv lib site packages django db backends utils py line in execute self db validate no broken transaction file users smith code abakus lego venv 
lib site packages django db backends base base py line in validate no broken transaction an error occurred in the current transaction you can t django db transaction transactionmanagementerror an error occurred in the current transaction you can t execute queries until the end of the atomic block
| 0
|
138,746
| 5,346,353,225
|
IssuesEvent
|
2017-02-17 19:26:48
|
DigitalCampus/moodle-block_oppia_mobile_export
|
https://api.github.com/repos/DigitalCampus/moodle-block_oppia_mobile_export
|
closed
|
Don't strip out all the HTML tags from quiz question text
|
enhancement medium priority
|
so can give some layout/emphasis etc in the questions. Linked to: https://github.com/DigitalCampus/oppia-mobile-android/issues/317
|
1.0
|
Don't strip out all the HTML tags from quiz question text - so can give some layout/emphasis etc in the questions. Linked to: https://github.com/DigitalCampus/oppia-mobile-android/issues/317
|
non_process
|
don t strip out all the html tags from quiz question text so can give some layout emphasis etc in the questions linked to
| 0
|
78,387
| 14,991,906,892
|
IssuesEvent
|
2021-01-29 09:06:04
|
haproxy/haproxy
|
https://api.github.com/repos/haproxy/haproxy
|
closed
|
src/flt_http_comp.c: unused value suspected by coverity (new finding)
|
type: code-report
|
```
210 if (ret == sz && !b_data(&trash))
211 next = htx_remove_blk(htx, blk);
212 else
CID 1445802 (#1 of 1): Unused value (UNUSED_VALUE)returned_pointer: Assigning value from htx_replace_blk_value(htx, blk, v, ist2(b_head(&trash), b_data(&trash))) to blk here, but that stored value is overwritten before it can be used.
213 blk = htx_replace_blk_value(htx, blk, v, ist2(b_head(&trash), b_data(&trash)));
214
215 len -= ret;
```
|
1.0
|
src/flt_http_comp.c: unused value suspected by coverity (new finding) - ```
210 if (ret == sz && !b_data(&trash))
211 next = htx_remove_blk(htx, blk);
212 else
CID 1445802 (#1 of 1): Unused value (UNUSED_VALUE)returned_pointer: Assigning value from htx_replace_blk_value(htx, blk, v, ist2(b_head(&trash), b_data(&trash))) to blk here, but that stored value is overwritten before it can be used.
213 blk = htx_replace_blk_value(htx, blk, v, ist2(b_head(&trash), b_data(&trash)));
214
215 len -= ret;
```
|
non_process
|
src flt http comp c unused value suspected by coverity new finding if ret sz b data trash next htx remove blk htx blk else cid of unused value unused value returned pointer assigning value from htx replace blk value htx blk v b head trash b data trash to blk here but that stored value is overwritten before it can be used blk htx replace blk value htx blk v b head trash b data trash len ret
| 0
|
753,433
| 26,347,137,870
|
IssuesEvent
|
2023-01-10 23:26:10
|
helpmebot/helpmebot
|
https://api.github.com/repos/helpmebot/helpmebot
|
opened
|
Phabricator task lookup
|
priority/low type/feature migrated
|
```
!phab T566
[Task T566] Phabricator task lookup | Open | @stwalkerster | #helpmebot
```
|
1.0
|
Phabricator task lookup - ```
!phab T566
[Task T566] Phabricator task lookup | Open | @stwalkerster | #helpmebot
```
|
non_process
|
phabricator task lookup phab phabricator task lookup open stwalkerster helpmebot
| 0
|
27,546
| 6,885,284,722
|
IssuesEvent
|
2017-11-21 15:40:57
|
zeebe-io/zeebe
|
https://api.github.com/repos/zeebe-io/zeebe
|
opened
|
Map fails to remove bucket on merging overflow buckets
|
bug code ready zb-map
|
During our stability tests we discovered an bug which is triggered during merging of overflow buckets. We also assume that this can trigger other problems (see second stack trace).
```
java.lang.IllegalArgumentException: No bucket in buffer 0 on offset 4
at io.zeebe.map.BucketBufferArray.removeBucket(BucketBufferArray.java:400) ~[classes/:?]
at io.zeebe.map.ZbMapBucketMergeHelper.tryMergeOverflowBucket(ZbMapBucketMergeHelper.java:295) ~[classes/:?]
at io.zeebe.map.ZbMapBucketMergeHelper.tryMergingBuckets(ZbMapBucketMergeHelper.java:222) ~[classes/:?]
at io.zeebe.map.ZbMap.remove(ZbMap.java:332) ~[classes/:?]
at io.zeebe.map.Long2BytesZbMap.remove(Long2BytesZbMap.java:74) ~[classes/:?]
at io.zeebe.broker.task.map.TaskInstanceMap.remove(TaskInstanceMap.java:74) ~[classes/:?]
at io.zeebe.broker.task.processor.TaskInstanceStreamProcessor$CompleteTaskProcessor.updateState(TaskInstanceStreamProcessor.java:357) ~[classes/:?]
at io.zeebe.logstreams.processor.StreamProcessorController$ProcessState.lambda$new$3(StreamProcessorController.java:348) [classes/:?]
at io.zeebe.util.state.ComposedState$FailSafeStep.doWork(ComposedState.java:41) ~[classes/:?]
at io.zeebe.util.state.ComposedState.doWork(ComposedState.java:63) ~[classes/:?]
at io.zeebe.util.state.StateMachine.doWork(StateMachine.java:111) [classes/:?]
at io.zeebe.util.state.StateMachineAgent.doWork(StateMachineAgent.java:53) [classes/:?]
at io.zeebe.logstreams.processor.StreamProcessorController.doWork(StreamProcessorController.java:125) [classes/:?]
at io.zeebe.util.actor.ActorRunner.tryRunActor(ActorRunner.java:165) [classes/:?]
at io.zeebe.util.actor.ActorRunner.runActor(ActorRunner.java:145) [classes/:?]
at io.zeebe.util.actor.ActorRunner.doWork(ActorRunner.java:114) [classes/:?]
at io.zeebe.util.actor.ActorRunner.run(ActorRunner.java:71) [classes/:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_152]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_152]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_152]
```
```
java.lang.IllegalStateException: must call wrap() before
at io.zeebe.broker.workflow.map.ActivityInstanceMap.ensureRead(ActivityInstanceMap.java:144) ~[classes/:?]
at io.zeebe.broker.workflow.map.ActivityInstanceMap.setTaskKey(ActivityInstanceMap.java:135) ~[classes/:?]
at io.zeebe.broker.workflow.processor.WorkflowInstanceStreamProcessor$TaskCreatedProcessor.updateState(WorkflowInstanceStreamProcessor.java:921) ~[classes/:?]
at io.zeebe.logstreams.processor.StreamProcessorController$ProcessState.lambda$new$3(StreamProcessorController.java:348) [classes/:?]
at io.zeebe.util.state.ComposedState$FailSafeStep.doWork(ComposedState.java:41) ~[classes/:?]
at io.zeebe.util.state.ComposedState.doWork(ComposedState.java:63) ~[classes/:?]
at io.zeebe.util.state.StateMachine.doWork(StateMachine.java:111) [classes/:?]
at io.zeebe.util.state.StateMachineAgent.doWork(StateMachineAgent.java:53) [classes/:?]
at io.zeebe.logstreams.processor.StreamProcessorController.doWork(StreamProcessorController.java:125) [classes/:?]
at io.zeebe.util.actor.ActorRunner.tryRunActor(ActorRunner.java:165) [classes/:?]
at io.zeebe.util.actor.ActorRunner.runActor(ActorRunner.java:145) [classes/:?]
at io.zeebe.util.actor.ActorRunner.doWork(ActorRunner.java:114) [classes/:?]
at io.zeebe.util.actor.ActorRunner.run(ActorRunner.java:71) [classes/:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_152]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_152]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_152]
```
|
1.0
|
Map fails to remove bucket on merging overflow buckets - During our stability tests we discovered an bug which is triggered during merging of overflow buckets. We also assume that this can trigger other problems (see second stack trace).
```
java.lang.IllegalArgumentException: No bucket in buffer 0 on offset 4
at io.zeebe.map.BucketBufferArray.removeBucket(BucketBufferArray.java:400) ~[classes/:?]
at io.zeebe.map.ZbMapBucketMergeHelper.tryMergeOverflowBucket(ZbMapBucketMergeHelper.java:295) ~[classes/:?]
at io.zeebe.map.ZbMapBucketMergeHelper.tryMergingBuckets(ZbMapBucketMergeHelper.java:222) ~[classes/:?]
at io.zeebe.map.ZbMap.remove(ZbMap.java:332) ~[classes/:?]
at io.zeebe.map.Long2BytesZbMap.remove(Long2BytesZbMap.java:74) ~[classes/:?]
at io.zeebe.broker.task.map.TaskInstanceMap.remove(TaskInstanceMap.java:74) ~[classes/:?]
at io.zeebe.broker.task.processor.TaskInstanceStreamProcessor$CompleteTaskProcessor.updateState(TaskInstanceStreamProcessor.java:357) ~[classes/:?]
at io.zeebe.logstreams.processor.StreamProcessorController$ProcessState.lambda$new$3(StreamProcessorController.java:348) [classes/:?]
at io.zeebe.util.state.ComposedState$FailSafeStep.doWork(ComposedState.java:41) ~[classes/:?]
at io.zeebe.util.state.ComposedState.doWork(ComposedState.java:63) ~[classes/:?]
at io.zeebe.util.state.StateMachine.doWork(StateMachine.java:111) [classes/:?]
at io.zeebe.util.state.StateMachineAgent.doWork(StateMachineAgent.java:53) [classes/:?]
at io.zeebe.logstreams.processor.StreamProcessorController.doWork(StreamProcessorController.java:125) [classes/:?]
at io.zeebe.util.actor.ActorRunner.tryRunActor(ActorRunner.java:165) [classes/:?]
at io.zeebe.util.actor.ActorRunner.runActor(ActorRunner.java:145) [classes/:?]
at io.zeebe.util.actor.ActorRunner.doWork(ActorRunner.java:114) [classes/:?]
at io.zeebe.util.actor.ActorRunner.run(ActorRunner.java:71) [classes/:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_152]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_152]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_152]
```
```
java.lang.IllegalStateException: must call wrap() before
at io.zeebe.broker.workflow.map.ActivityInstanceMap.ensureRead(ActivityInstanceMap.java:144) ~[classes/:?]
at io.zeebe.broker.workflow.map.ActivityInstanceMap.setTaskKey(ActivityInstanceMap.java:135) ~[classes/:?]
at io.zeebe.broker.workflow.processor.WorkflowInstanceStreamProcessor$TaskCreatedProcessor.updateState(WorkflowInstanceStreamProcessor.java:921) ~[classes/:?]
at io.zeebe.logstreams.processor.StreamProcessorController$ProcessState.lambda$new$3(StreamProcessorController.java:348) [classes/:?]
at io.zeebe.util.state.ComposedState$FailSafeStep.doWork(ComposedState.java:41) ~[classes/:?]
at io.zeebe.util.state.ComposedState.doWork(ComposedState.java:63) ~[classes/:?]
at io.zeebe.util.state.StateMachine.doWork(StateMachine.java:111) [classes/:?]
at io.zeebe.util.state.StateMachineAgent.doWork(StateMachineAgent.java:53) [classes/:?]
at io.zeebe.logstreams.processor.StreamProcessorController.doWork(StreamProcessorController.java:125) [classes/:?]
at io.zeebe.util.actor.ActorRunner.tryRunActor(ActorRunner.java:165) [classes/:?]
at io.zeebe.util.actor.ActorRunner.runActor(ActorRunner.java:145) [classes/:?]
at io.zeebe.util.actor.ActorRunner.doWork(ActorRunner.java:114) [classes/:?]
at io.zeebe.util.actor.ActorRunner.run(ActorRunner.java:71) [classes/:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_152]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_152]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_152]
```
|
non_process
|
map fails to remove bucket on merging overflow buckets during our stability tests we discovered an bug which is triggered during merging of overflow buckets we also assume that this can trigger other problems see second stack trace java lang illegalargumentexception no bucket in buffer on offset at io zeebe map bucketbufferarray removebucket bucketbufferarray java at io zeebe map zbmapbucketmergehelper trymergeoverflowbucket zbmapbucketmergehelper java at io zeebe map zbmapbucketmergehelper trymergingbuckets zbmapbucketmergehelper java at io zeebe map zbmap remove zbmap java at io zeebe map remove java at io zeebe broker task map taskinstancemap remove taskinstancemap java at io zeebe broker task processor taskinstancestreamprocessor completetaskprocessor updatestate taskinstancestreamprocessor java at io zeebe logstreams processor streamprocessorcontroller processstate lambda new streamprocessorcontroller java at io zeebe util state composedstate failsafestep dowork composedstate java at io zeebe util state composedstate dowork composedstate java at io zeebe util state statemachine dowork statemachine java at io zeebe util state statemachineagent dowork statemachineagent java at io zeebe logstreams processor streamprocessorcontroller dowork streamprocessorcontroller java at io zeebe util actor actorrunner tryrunactor actorrunner java at io zeebe util actor actorrunner runactor actorrunner java at io zeebe util actor actorrunner dowork actorrunner java at io zeebe util actor actorrunner run actorrunner java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java java lang illegalstateexception must call wrap before at io zeebe broker workflow map activityinstancemap ensureread activityinstancemap java at io zeebe broker workflow map activityinstancemap settaskkey activityinstancemap java at io zeebe broker workflow processor 
workflowinstancestreamprocessor taskcreatedprocessor updatestate workflowinstancestreamprocessor java at io zeebe logstreams processor streamprocessorcontroller processstate lambda new streamprocessorcontroller java at io zeebe util state composedstate failsafestep dowork composedstate java at io zeebe util state composedstate dowork composedstate java at io zeebe util state statemachine dowork statemachine java at io zeebe util state statemachineagent dowork statemachineagent java at io zeebe logstreams processor streamprocessorcontroller dowork streamprocessorcontroller java at io zeebe util actor actorrunner tryrunactor actorrunner java at io zeebe util actor actorrunner runactor actorrunner java at io zeebe util actor actorrunner dowork actorrunner java at io zeebe util actor actorrunner run actorrunner java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java
| 0
|
18,272
| 24,350,854,201
|
IssuesEvent
|
2022-10-02 23:02:41
|
mmattDonk/AI-TTS-Donations
|
https://api.github.com/repos/mmattDonk/AI-TTS-Donations
|
closed
|
make a server client
|
feature_request Low Priority @solrock/processor @solrock/frontend @solrock/backend
|
serve the audio via an overlay, maybe a new repo for this? it would be really niche.
its really meant for the IT nerds for streamers to be able to set it up by themselves and have the streamer just input 1 overlay into their OBS.
|
1.0
|
make a server client - serve the audio via an overlay, maybe a new repo for this? it would be really niche.
its really meant for the IT nerds for streamers to be able to set it up by themselves and have the streamer just input 1 overlay into their OBS.
|
process
|
make a server client serve the audio via an overlay maybe a new repo for this it would be really niche its really meant for the it nerds for streamers to be able to set it up by themselves and have the streamer just input overlay into their obs
| 1
|
33,722
| 9,201,535,932
|
IssuesEvent
|
2019-03-07 19:53:37
|
ArctosDB/arctos
|
https://api.github.com/repos/ArctosDB/arctos
|
closed
|
Shipping Label report not working from Arctos
|
CF Report Builder Function-Transactions Priority-Critical
|
@dustymc
Dusty - shipping label report printout is not working from loans. We just tried for loan MVZ.Herp.2016.8268.Herp and we are getting this error:
An error occurred while processing this page!
The report CFR file /usr/local/httpd/htdocs/wwwarctos/Reports/templates/ship_label.cfr could not be found.
|
1.0
|
Shipping Label report not working from Arctos - @dustymc
Dusty - shipping label report printout is not working from loans. We just tried for loan MVZ.Herp.2016.8268.Herp and we are getting this error:
An error occurred while processing this page!
The report CFR file /usr/local/httpd/htdocs/wwwarctos/Reports/templates/ship_label.cfr could not be found.
|
non_process
|
shipping label report not working from arctos dustymc dusty shipping label report printout is not working from loans we just tried for loan mvz herp herp and we are getting this error an error occurred while processing this page the report cfr file usr local httpd htdocs wwwarctos reports templates ship label cfr could not be found
| 0
|
559,841
| 16,578,534,645
|
IssuesEvent
|
2021-05-31 08:35:36
|
saltudelft/libsa4py
|
https://api.github.com/repos/saltudelft/libsa4py
|
opened
|
Add line numbers for functions
|
Priority: Medium enhancement
|
Line and column numbers should be added to the final JSONOutput in the field `fn_ln`.
|
1.0
|
Add line numbers for functions - Line and column numbers should be added to the final JSONOutput in the field `fn_ln`.
|
non_process
|
add line numbers for functions line and column numbers should be added to the final jsonoutput in the field fn ln
| 0
|
120,303
| 4,787,802,191
|
IssuesEvent
|
2016-10-30 06:55:02
|
dhowe/AdNauseam
|
https://api.github.com/repos/dhowe/AdNauseam
|
closed
|
DNT whitelist checkbox not updating correctly
|
PRIORITY: Low
|
to recreate:
1) go to settings page, enable Hiding and Clicking
2) enable both DNT checkboxes
3) disable Hiding and Clicking
4) go to whitelist and notice the DNT list is still checked

|
1.0
|
DNT whitelist checkbox not updating correctly - to recreate:
1) go to settings page, enable Hiding and Clicking
2) enable both DNT checkboxes
3) disable Hiding and Clicking
4) go to whitelist and notice the DNT list is still checked

|
non_process
|
dnt whitelist checkbox not updating correctly to recreate go to settings page enable hiding and clicking enable both dnt checkboxes disable hiding and clicking go to whitelist and notice the dnt list is still checked
| 0
|
343,711
| 24,778,854,129
|
IssuesEvent
|
2022-10-24 01:35:24
|
AY2223S1-CS2103T-W16-2/tp
|
https://api.github.com/repos/AY2223S1-CS2103T-W16-2/tp
|
closed
|
Add support for parsing admonition boxes in markdown
|
type.enhancement priority.Medium documentation.UserGuide documentation.DeveloperGuide documentation
|
Will make life easier as we don't have to remember the exact HTML syntax each time. Less prone to bugs and inconsistencies too.
|
3.0
|
Add support for parsing admonition boxes in markdown - Will make life easier as we don't have to remember the exact HTML syntax each time. Less prone to bugs and inconsistencies too.
|
non_process
|
add support for parsing admonition boxes in markdown will make life easier as we don t have to remember the exact html syntax each time less prone to bugs and inconsistencies too
| 0
|
289,192
| 8,861,452,956
|
IssuesEvent
|
2019-01-10 00:42:27
|
nycJSorg/angular-presentation
|
https://api.github.com/repos/nycJSorg/angular-presentation
|
closed
|
maybe link to
native phone apps => NativeScript,
VR scenes => ???
and show how the imports change?
|
Low priority help wanted
|
maybe link to
native phone apps => NativeScript,
VR scenes => ???
and show how the imports change?
Author: Will
Slide: [Local](http://localhost:4200/create-first-app/imports),[Public](https://angular-presentation.firebaseapp.com/create-first-app/imports)
|
1.0
|
maybe link to
native phone apps => NativeScript,
VR scenes => ???
and show how the imports change? - maybe link to
native phone apps => NativeScript,
VR scenes => ???
and show how the imports change?
Author: Will
Slide: [Local](http://localhost:4200/create-first-app/imports),[Public](https://angular-presentation.firebaseapp.com/create-first-app/imports)
|
non_process
|
maybe link to native phone apps nativescript vr scenes and show how the imports change maybe link to native phone apps nativescript vr scenes and show how the imports change author will slide
| 0
|
126,031
| 4,971,653,623
|
IssuesEvent
|
2016-12-05 19:19:20
|
SIU-CS/BarGame-Production
|
https://api.github.com/repos/SIU-CS/BarGame-Production
|
closed
|
Navigation
|
Priority-Medium Product Backlog
|
I want to be able to easily navigate from quests to the shop to players (using a menu system).
|
1.0
|
Navigation - I want to be able to easily navigate from quests to the shop to players (using a menu system).
|
non_process
|
navigation i want to be able to easily navigate from quests to the shop to players using a menu system
| 0
|
7,642
| 25,336,566,658
|
IssuesEvent
|
2022-11-18 17:20:51
|
yugabyte/yugabyte-db
|
https://api.github.com/repos/yugabyte/yugabyte-db
|
closed
|
[DocDB][ASAN Unit Test Failure][2.8] YbAdminSnapshotScheduleTest.CleanupDeletedTablets
|
kind/bug area/docdb priority/medium qa_automation
|
Jira Link: [DB-3357](https://yugabyte.atlassian.net/browse/DB-3357)
### Description
On 2.8.9.0-b14 YbAdminSnapshotScheduleTest.CleanupDeletedTablets unit test is failing with a heap-use-after-free:
```
[m-1] ==30154==ERROR: AddressSanitizer: heap-use-after-free on address 0x61300024c9e0 at pc 0x7f1f279945f2 bp 0x7f1eeffc08d0 sp 0x7f1eeffc08c8
[m-1] READ of size 8 at 0x61300024c9e0 thread T73 (raft [worker]xx)
[m-1] #0 0x7f1f279945f1 in std::__1::shared_ptr<std::__1::function<unsigned long ()> >::get() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20210916160346-44ba371965-centos7-x86_64-clang7/installed/asan/libcxx/include/c++/v1/memory:3911:49
[m-1] #1 0x7f1f279945f1 in std::__1::shared_ptr<std::__1::function<unsigned long ()> >::operator bool() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20210916160346-44ba371965-centos7-x86_64-clang7/installed/asan/libcxx/include/c++/v1/memory:3922
[m-1] #2 0x7f1f279945f1 in rocksdb::MutableCFOptions::MaxFileSizeForCompaction() const /nfusr/centos-gcp-cloud/jenkins-worker-fxh7d0/jenkins/jenkins-github-yugabyte-db-centos-master-clang7-asan-120/yugabyte-db/build/asan-clang7-dynamic-ninja/../../src/yb/rocksdb/util/mutable_cf_options.cc:75
[m-1] #3 0x7f1f276f192b in rocksdb::VersionStorageInfo::CalculateBaseBytes(rocksdb::ImmutableCFOptions const&, rocksdb::MutableCFOptions const&) /nfusr/centos-gcp-cloud/jenkins-worker-fxh7d0/jenkins/jenkins-github-yugabyte-db-centos-master-clang7-asan-120/yugabyte-db/build/asan-clang7-dynamic-ninja/../../src/yb/rocksdb/db/version_set.cc:1887:15
[m-1] #4 0x7f1f276f0db9 in rocksdb::Version::PrepareApply(rocksdb::MutableCFOptions const&, bool) /nfusr/centos-gcp-cloud/jenkins-worker-fxh7d0/jenkins/jenkins-github-yugabyte-db-centos-master-clang7-asan-120/yugabyte-db/build/asan-clang7-dynamic-ninja/../../src/yb/rocksdb/db/version_set.cc:1036:17
[...]
```
https://gist.github.com/def-/e64909db2dc8ff24a51868d5daf6a1ce
There was another ASAN bug in this test but the stack looks different: https://github.com/yugabyte/yugabyte-db/issues/10325 cc @jmeehan16
|
1.0
|
[DocDB][ASAN Unit Test Failure][2.8] YbAdminSnapshotScheduleTest.CleanupDeletedTablets - Jira Link: [DB-3357](https://yugabyte.atlassian.net/browse/DB-3357)
### Description
On 2.8.9.0-b14 YbAdminSnapshotScheduleTest.CleanupDeletedTablets unit test is failing with a heap-use-after-free:
```
[m-1] ==30154==ERROR: AddressSanitizer: heap-use-after-free on address 0x61300024c9e0 at pc 0x7f1f279945f2 bp 0x7f1eeffc08d0 sp 0x7f1eeffc08c8
[m-1] READ of size 8 at 0x61300024c9e0 thread T73 (raft [worker]xx)
[m-1] #0 0x7f1f279945f1 in std::__1::shared_ptr<std::__1::function<unsigned long ()> >::get() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20210916160346-44ba371965-centos7-x86_64-clang7/installed/asan/libcxx/include/c++/v1/memory:3911:49
[m-1] #1 0x7f1f279945f1 in std::__1::shared_ptr<std::__1::function<unsigned long ()> >::operator bool() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20210916160346-44ba371965-centos7-x86_64-clang7/installed/asan/libcxx/include/c++/v1/memory:3922
[m-1] #2 0x7f1f279945f1 in rocksdb::MutableCFOptions::MaxFileSizeForCompaction() const /nfusr/centos-gcp-cloud/jenkins-worker-fxh7d0/jenkins/jenkins-github-yugabyte-db-centos-master-clang7-asan-120/yugabyte-db/build/asan-clang7-dynamic-ninja/../../src/yb/rocksdb/util/mutable_cf_options.cc:75
[m-1] #3 0x7f1f276f192b in rocksdb::VersionStorageInfo::CalculateBaseBytes(rocksdb::ImmutableCFOptions const&, rocksdb::MutableCFOptions const&) /nfusr/centos-gcp-cloud/jenkins-worker-fxh7d0/jenkins/jenkins-github-yugabyte-db-centos-master-clang7-asan-120/yugabyte-db/build/asan-clang7-dynamic-ninja/../../src/yb/rocksdb/db/version_set.cc:1887:15
[m-1] #4 0x7f1f276f0db9 in rocksdb::Version::PrepareApply(rocksdb::MutableCFOptions const&, bool) /nfusr/centos-gcp-cloud/jenkins-worker-fxh7d0/jenkins/jenkins-github-yugabyte-db-centos-master-clang7-asan-120/yugabyte-db/build/asan-clang7-dynamic-ninja/../../src/yb/rocksdb/db/version_set.cc:1036:17
[...]
```
https://gist.github.com/def-/e64909db2dc8ff24a51868d5daf6a1ce
There was another ASAN bug in this test but the stack looks different: https://github.com/yugabyte/yugabyte-db/issues/10325 cc @jmeehan16
|
non_process
|
ybadminsnapshotscheduletest cleanupdeletedtablets jira link description on ybadminsnapshotscheduletest cleanupdeletedtablets unit test is failing with a heap use after free error addresssanitizer heap use after free on address at pc bp sp read of size at thread raft xx in std shared ptr get const opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c memory in std shared ptr operator bool const opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c memory in rocksdb mutablecfoptions maxfilesizeforcompaction const nfusr centos gcp cloud jenkins worker jenkins jenkins github yugabyte db centos master asan yugabyte db build asan dynamic ninja src yb rocksdb util mutable cf options cc in rocksdb versionstorageinfo calculatebasebytes rocksdb immutablecfoptions const rocksdb mutablecfoptions const nfusr centos gcp cloud jenkins worker jenkins jenkins github yugabyte db centos master asan yugabyte db build asan dynamic ninja src yb rocksdb db version set cc in rocksdb version prepareapply rocksdb mutablecfoptions const bool nfusr centos gcp cloud jenkins worker jenkins jenkins github yugabyte db centos master asan yugabyte db build asan dynamic ninja src yb rocksdb db version set cc there was another asan bug in this test but the stack looks different cc
| 0
|
360,621
| 25,299,957,111
|
IssuesEvent
|
2022-11-17 09:57:51
|
IgniteUI/igniteui-theming
|
https://api.github.com/repos/IgniteUI/igniteui-theming
|
closed
|
[SassDoc] - Some items are not documented
|
:book: documentation :white_check_mark: status: resolved
|
Palettes are not documented, while others items are not made private.
|
1.0
|
[SassDoc] - Some items are not documented - Palettes are not documented, while others items are not made private.
|
non_process
|
some items are not documented palettes are not documented while others items are not made private
| 0
|
18,252
| 24,334,813,214
|
IssuesEvent
|
2022-10-01 00:56:35
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[processor/probabilisticsampler] add more mterics to probabilistic sampler
|
enhancement priority:p2 processor/probabilisticsampler
|
**Is your feature request related to a problem? Please describe.**
wish to expose more metrics to observe probabilistic processor stats.
**Describe the solution you'd like**
following https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/metrics.go#L47
I'd like to add those. pls assign it to me. :-P
|
1.0
|
[processor/probabilisticsampler] add more mterics to probabilistic sampler - **Is your feature request related to a problem? Please describe.**
wish to expose more metrics to observe probabilistic processor stats.
**Describe the solution you'd like**
following https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/metrics.go#L47
I'd like to add those. pls assign it to me. :-P
|
process
|
add more mterics to probabilistic sampler is your feature request related to a problem please describe wish to expose more metrics to observe probabilistic processor stats describe the solution you d like following i d like to add those pls assign it to me p
| 1
|
738,140
| 25,546,925,693
|
IssuesEvent
|
2022-11-29 19:39:44
|
zowe/imperative
|
https://api.github.com/repos/zowe/imperative
|
closed
|
Plugin Management Facility should not imply multiple uninstall
|
bug plugins priority-low
|
_From @zFernand0 on June 28, 2018 14:18_
Looking at the help of the uninstall command
```
USAGE
-----
"test_cli\TestCLI.ts" plugins uninstall [plugin...] [options]
```
It makes you think that you can uninstall multiple plugins with one single command like:
`$ ts-node "test_cli\TestCLI.ts" plugins uninstall normal-plugin normal-plugin-2`
But instead it errors with:
```
Command Error:
Uninstall Failed
Error Details:
Plugin name ' normal-plugin,normal-plugin-2' is not installed.
```
- Solutions
- Maybe we should NOT have the ellipsis (`...`) in the command definition
imperative/packages/imperative/src/plugins/cmd/uninstall/uninstall.definition.ts
Line 45
name: "plugin...",
- OR just support an array of strings and split by commas (`,`)
---
Full command and output
```
$ ts-node "test_cli\TestCLI.ts" plugins uninstall normal-plugin normal-plugin-2
Command Error:
Uninstall Failed
Error Details:
Plugin name ' normal-plugin,normal-plugin-2' is not installed.
```
_Copied from original issue: gizafoundation/imperative#117_
|
1.0
|
Plugin Management Facility should not imply multiple uninstall - _From @zFernand0 on June 28, 2018 14:18_
Looking at the help of the uninstall command
```
USAGE
-----
"test_cli\TestCLI.ts" plugins uninstall [plugin...] [options]
```
It makes you think that you can uninstall multiple plugins with one single command like:
`$ ts-node "test_cli\TestCLI.ts" plugins uninstall normal-plugin normal-plugin-2`
But instead it errors with:
```
Command Error:
Uninstall Failed
Error Details:
Plugin name ' normal-plugin,normal-plugin-2' is not installed.
```
- Solutions
- Maybe we should NOT have the ellipsis (`...`) in the command definition
imperative/packages/imperative/src/plugins/cmd/uninstall/uninstall.definition.ts
Line 45
name: "plugin...",
- OR just support an array of strings and split by commas (`,`)
---
Full command and output
```
$ ts-node "test_cli\TestCLI.ts" plugins uninstall normal-plugin normal-plugin-2
Command Error:
Uninstall Failed
Error Details:
Plugin name ' normal-plugin,normal-plugin-2' is not installed.
```
_Copied from original issue: gizafoundation/imperative#117_
|
non_process
|
plugin management facility should not imply multiple uninstall from on june looking at the help of the uninstall command usage test cli testcli ts plugins uninstall it makes you think that you can uninstall multiple plugins with one single command like ts node test cli testcli ts plugins uninstall normal plugin normal plugin but instead it errors with command error uninstall failed error details plugin name normal plugin normal plugin is not installed solutions maybe we should not have the ellipsis in the command definition imperative packages imperative src plugins cmd uninstall uninstall definition ts line name plugin or just support an array of strings and split by commas full command and output ts node test cli testcli ts plugins uninstall normal plugin normal plugin command error uninstall failed error details plugin name normal plugin normal plugin is not installed copied from original issue gizafoundation imperative
| 0
|
26,323
| 5,243,675,737
|
IssuesEvent
|
2017-01-31 21:21:35
|
PeterCamilleri/mysh
|
https://api.github.com/repos/PeterCamilleri/mysh
|
closed
|
Add docs for helper methods.
|
documentation enhancement
|
The Mysh module has several helper methods for action authors. These deserve a section in the mysh readme file.
These include:
- Mysh.parse_args(input)
- Mysh.input.readline(parms)
- String#preprocess(evaluator=$mysh_exec_host)
- MNV[] and MNV[]=
- mysh(command_string)
- Mysh.run(args)
|
1.0
|
Add docs for helper methods. - The Mysh module has several helper methods for action authors. These deserve a section in the mysh readme file.
These include:
- Mysh.parse_args(input)
- Mysh.input.readline(parms)
- String#preprocess(evaluator=$mysh_exec_host)
- MNV[] and MNV[]=
- mysh(command_string)
- Mysh.run(args)
|
non_process
|
add docs for helper methods the mysh module has several helper methods for action authors these deserve a section in the mysh readme file these include mysh parse args input mysh input readline parms string preprocess evaluator mysh exec host mnv and mnv mysh command string mysh run args
| 0
|
5,145
| 7,923,822,623
|
IssuesEvent
|
2018-07-05 15:05:19
|
SlicerIGT/SlicerIGT
|
https://api.github.com/repos/SlicerIGT/SlicerIGT
|
closed
|
Quaternion Average does not update when input transforms are modified
|
TransformProcessor bug
|
Steps to reproduce:
1. Open fresh 3D Slicer
2. Navigate to TransformProcessor module
3. Make sure the mode is Quaternion Average
4. Create a new linear transform and add it to the input list of transforms
5. Create a new linear transform and set it as the output
6. Manually change the input transform, see that the output does not update
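For context, a quaternion average of the input transforms is what the output should track whenever an input changes. The module itself is C++; this is only a minimal pure-Python sketch of the operation, using sign-aligned componentwise averaging (an approximation suitable for nearby orientations, not the exact eigenvector-based method):

```python
import math

def average_quaternions(quats):
    """Approximate average of unit quaternions given as (w, x, y, z).

    Sign-aligns each quaternion with the first (q and -q encode the
    same rotation), sums componentwise, then renormalizes.
    """
    ref = quats[0]
    acc = [0.0, 0.0, 0.0, 0.0]
    for q in quats:
        dot = sum(a * b for a, b in zip(ref, q))
        sign = 1.0 if dot >= 0 else -1.0
        for i in range(4):
            acc[i] += sign * q[i]
    norm = math.sqrt(sum(c * c for c in acc))
    return tuple(c / norm for c in acc)
```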
|
1.0
|
Quaternion Average does not update when input transforms are modified - Steps to reproduce:
1. Open fresh 3D Slicer
2. Navigate to TransformProcessor module
3. Make sure the mode is Quaternion Average
4. Create a new linear transform and add it to the input list of transforms
5. Create a new linear transform and set it as the output
6. Manually change the input transform, see that the output does not update
|
process
|
quaternion average does not update when input transforms are modified steps to reproduce open fresh slicer navigate to transformprocessor module make sure the mode is quaternion average create a new linear transform and add it to the input list of transforms create a new linear transform and set it as the output manually change the input transform see that the output does not update
| 1
|
120,428
| 12,069,534,618
|
IssuesEvent
|
2020-04-16 16:11:47
|
ritjoe/sugarizer-lite
|
https://api.github.com/repos/ritjoe/sugarizer-lite
|
opened
|
Describe math-hurdler
|
documentation
|
This will be the issue where we will describe aspects, features and components of the math-hurdler activity.
At the moment I couldn't find @BlueJay89 in the contributors list so please let me know as soon as possible when I can add him as an assignee.
Prelude: math-hurdler is a Python Sugar math game. It is to be ported to JavaScript (Sugarizer)
link: https://github.com/sugarlabs/math-hurdler
setup: check project readme.
|
1.0
|
Describe math-hurdler - This will be the issue where we will describe aspects, features and components of the math-hurdler activity.
At the moment I couldn't find @BlueJay89 in the contributors list so please let me know as soon as possible when I can add him as an assignee.
Prelude: math-hurdler is a Python Sugar math game. It is to be ported to JavaScript (Sugarizer)
link: https://github.com/sugarlabs/math-hurdler
setup: check project readme.
|
non_process
|
describe math hurdler this will be the issue where we will describe aspects features and components of the math hurdler activity at the moment i couldn t find in the contributors list so please let me know as soon as possible when i can add him as an assignee prelude math hurdler is a python sugar math game it is to be ported into the js sugarizer link setup check project readme
| 0
|
22,692
| 31,996,321,313
|
IssuesEvent
|
2023-09-21 09:25:06
|
googleapis/python-video-transcoder
|
https://api.github.com/repos/googleapis/python-video-transcoder
|
opened
|
[Policy Bot] found one or more issues with this repository.
|
type: process policybot
|
[Policy Bot](https://github.com/googleapis/repo-automation-bots/tree/main/packages/policy#policy-bot) found one or more issues with this repository.
- [x] Default branch is 'main'
- [x] Branch protection is enabled
- [x] Merge commits disabled
- [ ] There is a CODEOWNERS file
- [x] GitHub [detects a valid LICENSE.md](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/licensing-a-repository#detecting-a-license)
- [ ] There is a CODE_OF_CONDUCT.md
- [ ] There is a CONTRIBUTING.md
- [x] There is a SECURITY.md
|
1.0
|
[Policy Bot] found one or more issues with this repository. -
[Policy Bot](https://github.com/googleapis/repo-automation-bots/tree/main/packages/policy#policy-bot) found one or more issues with this repository.
- [x] Default branch is 'main'
- [x] Branch protection is enabled
- [x] Merge commits disabled
- [ ] There is a CODEOWNERS file
- [x] GitHub [detects a valid LICENSE.md](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/licensing-a-repository#detecting-a-license)
- [ ] There is a CODE_OF_CONDUCT.md
- [ ] There is a CONTRIBUTING.md
- [x] There is a SECURITY.md
|
process
|
found one or more issues with this repository found one or more issues with this repository default branch is main branch protection is enabled merge commits disabled there is a codeowners file github there is a code of conduct md there is a contributing md there is a security md
| 1
|
120,974
| 10,144,623,828
|
IssuesEvent
|
2019-08-04 22:45:11
|
dw/mitogen
|
https://api.github.com/repos/dw/mitogen
|
closed
|
Incompatible with CentOS 7.5 with SELinux enabled
|
NeedsTest ansible bug user-reported
|
Problem reported in `#ansible`: the user is running CentOS 7.5 with SELinux enabled. The SSH connection succeeds, but the sudo connection fails, apparently because SELinux prevents passing a socket from the less privileged process to the more privileged process via sudo.
https://danwalsh.livejournal.com/74421.html describes a similar issue and the mechanism in play. One simple fallback would be to parameterize `hybrid_tty_create_child` so that under SELinux no socket is used, but that makes file transfer very slow.
Another option is to document how to fix up the SELinux rules.
Another option is to replace the socket usage with pipe usage on Linux when SELinux is present. That gives us a 64KiB buffer, which is only half the 128KiB we want, but still much better than the ~3KiB the TTY layer gives us.
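The socket-vs-pipe trade-off above can be sketched as follows. This is not mitogen's actual API (the helper name and flag are made up for illustration): a socketpair supports fd passing and larger buffers, while a plain pipe avoids the SELinux fd-passing restriction at the cost of buffer size and being unidirectional:

```python
import os
import socket

def make_child_channel(prefer_socket=True):
    """Return (parent_fd, child_fd) for a parent<->child byte channel.

    A socketpair gives a larger buffer and fd-passing support, but
    SELinux policy can block handing the socket across a sudo
    privilege boundary. A plain pipe (~64KiB buffer on Linux) is the
    fallback discussed above; note pipes are unidirectional, so a
    real fallback needs one pipe per direction.
    """
    if prefer_socket:
        parent_sock, child_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
        # detach() releases ownership so the raw fds outlive the socket objects.
        return parent_sock.detach(), child_sock.detach()
    read_fd, write_fd = os.pipe()
    return read_fd, write_fd
```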
<details><summary>Log</summary>
```
ansible-playbook local.yml -i inventory/ --limit host000 --tags selinux -vvv
ansible-playbook 2.5.4
config file = /home/user/repositories/service/ansible/ansible.cfg
configured module search path = [u'/home/user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.6 (default, Nov 23 2017, 15:49:48) [GCC 4.8.4]
Using /home/user/repositories/service/ansible/ansible.cfg as config file
Parsed /home/user/repositories/service/ansible/inventory/hosts inventory source with ini plugin
PLAYBOOK: local.yml *****************************************************************************************************************************************************************************************
1 plays in local.yml
PLAY [all] **************************************************************************************************************************************************************************************************
[pid 15232] 17:22:29.042213 D mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='MainThread'): initialized
[pid 15232] 17:22:29.043184 D ansible_mitogen.process: Service pool configured: size=16
TASK [Gathering Facts] **************************************************************************************************************************************************************************************
task path: /home/user/repositories/service/ansible/local.yml:2
[pid 15251] 17:22:29.179536 D mitogen: unix.connect(path='/tmp/mitogen_unix_a0Len9')
[pid 15251] 17:22:29.180703 D mitogen: unix.connect(): local ID is 1, remote is 0
[pid 15232] 17:22:29.205069 D mitogen: mitogen.ssh.Stream(u'default').connect()
[pid 15232] 17:22:29.349278 D mitogen: create_child() child 15254 fd 62, parent 15232, cmd: ssh -o "LogLevel ERROR" -o "Compression yes" -o "ServerAliveInterval 15" -o "ServerAliveCountMax 3" -o "BatchMode yes" -o "StrictHostKeyChecking yes" -C -o ControlMaster=auto -o ControlPersist=60s host000 /usr/bin/python -c "'import codecs,os,sys;_=codecs.decode;exec(_(_(\"eNqFkd1LwzAUxZ/Xv6JvSVjMmg4RCgVlA93DEIpsDzqkH6kLa5OQfjn/eu86Ze188O3+OCc5h3sjug11xYw0AhPH0m5AMncBcm0PmATOBOasMT72KPc8cuGIDsmCys+cFroSOBqCHcJ2CB0ABFZHiC/iGlJLNwxdlMW2kwq5scp6UXyKtKnjpBC9PGsqO0ukmpljvdcKQc/JlW0a9g9bYSup1Wsw3/WxQrXSAqOH6HHjoV04fnb2ABZ4LNAxThEuZa0/hArKdB/r5iDvf4ebzXK1Xj6xtmRtJlnFmWF5LZkSdcBv/blPEHEgprOyFphTtF69PHue96YQ1Et1BlcgziJ8x6c7ZNoIBdtHNkGEWRFnmPt3nBOKvqSBn3ITXnxbiroEnU6Tm5+ART+f133l7v5z/23JRy2/Ae+3t+0=\".encode(),\"base64\"),\"zip\"))'"
[pid 15232] 17:22:29.351328 D mitogen: mitogen.ssh.Stream(u'local.15254').connect(): child process stdin/stdout=62
[pid 15232] 17:22:30.665728 D mitogen: mitogen.ssh.Stream(u'local.15254'): received 'MITO000\n'
[pid 15232] 17:22:30.666094 D mitogen: mitogen.ssh.Stream(u'local.15254')._ec0_received()
[pid 15232] 17:22:30.703335 D mitogen: CallChain(Context(2, u'ssh.host000')).call_async(): ansible_mitogen.target.init_child(candidate_temp_dirs=[u'~/.ansible/tmp', u'/var/tmp', u'/tmp'], log_level=10)
[pid 15232] 17:22:30.707757 D mitogen: _build_tuple('/usr/lib/python2.7/dist-packages/ansible/__init__.py', u'ansible') -> [u'cli', u'compat', u'config', u'constants', u'errors', u'executor', u'galaxy', u'inventory', u'module_utils', u'modules', u'parsing', u'playbook', u'plugins', u'release', u'template', u'utils', u'vars']
[pid 15232] 17:22:30.708795 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible.release')
[pid 15232] 17:22:30.709204 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible')
[pid 15232] 17:22:30.711580 D mitogen: _build_tuple('/usr/lib/python2.7/dist-packages/ansible/module_utils/__init__.py', u'ansible.module_utils') -> [u'_text', u'ansible_tower', u'api', u'aws', u'azure_rm_common', u'basic', u'cloud', u'cloudscale', u'cloudstack', u'common', u'connection', u'crypto', u'database', u'digital_ocean', u'dimensiondata', u'docker_common', u'ec2', u'exoscale', u'f5_utils', u'facts', u'gcdns', u'gce', u'gcp', u'gcp_utils', u'infinibox', u'influxdb', u'ipa', u'ismount', u'json_utils', u'k8s', u'keycloak', u'known_hosts', u'lxd', u'manageiq', u'mysql', u'net_tools', u'netapp', u'network', u'oneandone', u'oneview', u'openstack', u'ovirt', u'parsing', u'postgres', u'powershell', u'pure', u'pycompat24', u'rax', u'redhat', u'remote_management', u'service', u'six', u'splitter', u'univention_umc', u'urls', u'vca', u'vmware', u'vultr']
[pid 15232] 17:22:30.711926 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible.module_utils')
[pid 15232] 17:22:30.794555 D mitogen: _get_module_via_sys_modules('_selinux') -> <module '_selinux' from '/usr/lib/python2.7/dist-packages/selinux/_selinux.so'>
[pid 15232] 17:22:30.795158 D mitogen: get_module_source('_selinux'): cannot find source
[pid 15232] 17:22:30.795947 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible.module_utils._text')
[pid 15232] 17:22:30.796747 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible.module_utils.parsing')
[pid 15232] 17:22:30.797202 D mitogen: _build_tuple('/usr/lib/python2.7/dist-packages/ansible/module_utils/parsing/__init__.py', u'ansible.module_utils.parsing') -> [u'convert_bool']
[pid 15232] 17:22:30.797406 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible.module_utils.parsing.convert_bool')
[pid 15232] 17:22:30.797662 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible.module_utils.pycompat24')
[pid 15232] 17:22:30.797981 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible.module_utils.six')
[pid 15232] 17:22:30.798254 D mitogen: _build_tuple('/usr/lib/python2.7/dist-packages/ansible/module_utils/six/__init__.py', u'ansible.module_utils.six') -> []
[pid 15232] 17:22:30.800194 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible.module_utils.basic')
[pid 15232] 17:22:30.801222 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible.module_utils.json_utils')
[pid 15232] 17:22:30.802056 D mitogen: _build_tuple('/home/user/repositories/mitogen-0.2.3/ansible_mitogen/__init__.py', u'ansible_mitogen') -> [u'connection', u'loaders', u'logging', u'mixins', u'module_finder', u'parsing', u'planner', u'plugins', u'process', u'runner', u'services', u'strategy', u'target']
[pid 15232] 17:22:30.802237 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible_mitogen')
[pid 15232] 17:22:30.849209 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible_mitogen.target')
[pid 15232] 17:22:30.851064 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'mitogen.compat')
[pid 15232] 17:22:30.851395 D mitogen: _build_tuple('/home/user/repositories/mitogen-0.2.3/mitogen/compat/__init__.py', u'mitogen.compat') -> [u'functools', u'pkgutil', u'tokenize']
[pid 15232] 17:22:30.851581 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'mitogen.compat.functools')
[pid 15232] 17:22:30.852326 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'mitogen.fork')
[pid 15232] 17:22:30.852746 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'mitogen.parent')
[pid 15232] 17:22:30.860418 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'mitogen.select')
[pid 15232] 17:22:30.860880 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'mitogen.service')
[pid 15232] 17:22:30.864015 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible_mitogen.runner')
[pid 15232] 17:22:30.864826 D mitogen: _build_tuple('/home/user/repositories/mitogen-0.2.3/mitogen/__init__.py', u'mitogen') -> [u'compat', u'core', u'debug', u'doas', u'docker', u'fakessh', u'fork', u'jail', u'kubectl', u'lxc', u'lxd', u'master', u'minify', u'parent', u'select', u'service', u'setns', u'ssh', u'su', u'sudo', u'unix', u'utils']
[pid 15232] 17:22:30.990868 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'encodings.utf_8' is submodule of a package we did not load
[pid 15232] 17:22:30.991124 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'json.decoder' is submodule of a package we did not load
[pid 15232] 17:22:30.991290 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'json.re' is submodule of a package we did not load
[pid 15232] 17:22:30.991453 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'json.sys' is submodule of a package we did not load
[pid 15232] 17:22:30.991611 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'json.struct' is submodule of a package we did not load
[pid 15232] 17:22:30.991748 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'json.json' is submodule of a package we did not load
[pid 15232] 17:22:30.991884 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'json.scanner' is submodule of a package we did not load
[pid 15232] 17:22:30.992010 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'json._json' is submodule of a package we did not load
[pid 15232] 17:22:31.003845 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'encodings.hex_codec' is submodule of a package we did not load
[pid 15232] 17:22:31.004105 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'encodings.binascii' is submodule of a package we did not load
[pid 15232] 17:22:31.004326 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'json.encoder' is submodule of a package we did not load
[pid 15232] 17:22:31.017231 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.logging'
[pid 15232] 17:22:31.017591 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.os'
[pid 15232] 17:22:31.017744 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.random'
[pid 15232] 17:22:31.017903 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.sys'
[pid 15232] 17:22:31.018060 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.threading'
[pid 15232] 17:22:31.018195 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.traceback'
[pid 15232] 17:22:31.018345 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.mitogen'
[pid 15232] 17:22:31.030246 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.codecs'
[pid 15232] 17:22:31.030575 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.errno'
[pid 15232] 17:22:31.030707 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.fcntl'
[pid 15232] 17:22:31.030833 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.getpass'
[pid 15232] 17:22:31.030964 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.inspect'
[pid 15232] 17:22:31.048435 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.signal'
[pid 15232] 17:22:31.061904 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.socket'
[pid 15232] 17:22:31.062272 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.subprocess'
[pid 15232] 17:22:31.062448 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.termios'
[pid 15232] 17:22:31.062584 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.textwrap'
[pid 15232] 17:22:31.062713 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.time'
[pid 15232] 17:22:31.062846 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.zlib'
[pid 15232] 17:22:31.062975 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.cStringIO'
[pid 15232] 17:22:31.063099 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.functools'
[pid 15232] 17:22:31.063221 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.compat.threading'
[pid 15232] 17:22:31.063341 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.grp'
[pid 15232] 17:22:31.063459 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.pprint'
[pid 15232] 17:22:31.075043 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.pwd'
[pid 15232] 17:22:31.075329 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.stat'
[pid 15232] 17:22:31.075465 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.json'
[pid 15232] 17:22:31.075601 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'ctypes.os' is submodule of a package we did not load
[pid 15232] 17:22:31.075731 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'ctypes.sys' is submodule of a package we did not load
[pid 15232] 17:22:31.075851 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'ctypes._ctypes' is submodule of a package we did not load
[pid 15232] 17:22:31.088423 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'ctypes.struct' is submodule of a package we did not load
[pid 15232] 17:22:31.088715 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'ctypes.ctypes' is submodule of a package we did not load
[pid 15232] 17:22:31.088852 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'ctypes._endian' is submodule of a package we did not load
[pid 15232] 17:22:31.118436 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.locale'
[pid 15232] 17:22:31.127560 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.os'
[pid 15232] 17:22:31.127926 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.re'
[pid 15232] 17:22:31.128077 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.shlex'
[pid 15232] 17:22:31.128217 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.subprocess'
[pid 15232] 17:22:31.128347 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.sys'
[pid 15232] 17:22:31.128471 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.types'
[pid 15232] 17:22:31.128595 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.time'
[pid 15232] 17:22:31.128720 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.select'
[pid 15232] 17:22:31.128845 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.shutil'
[pid 15232] 17:22:31.128965 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.stat'
[pid 15232] 17:22:31.129079 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.tempfile'
[pid 15232] 17:22:31.129195 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.traceback'
[pid 15232] 17:22:31.129304 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.grp'
[pid 15232] 17:22:31.133888 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.pwd'
[pid 15232] 17:22:31.134175 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.platform'
[pid 15232] 17:22:31.134339 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.errno'
[pid 15232] 17:22:31.134476 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.datetime'
[pid 15232] 17:22:31.145892 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.collections'
[pid 15232] 17:22:31.146258 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.itertools'
[pid 15232] 17:22:31.146430 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.syslog'
[pid 15232] 17:22:31.146599 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.systemd'
[pid 15232] 17:22:31.146741 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.selinux'
[pid 15232] 17:22:31.146915 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'selinux.sys' is submodule of a package we did not load
[pid 15232] 17:22:31.147046 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'selinux.os' is submodule of a package we did not load
[pid 15232] 17:22:31.147192 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'selinux.imp' is submodule of a package we did not load
[pid 15232] 17:22:31.147342 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'selinux.shutil' is submodule of a package we did not load
[pid 15232] 17:22:31.147472 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'selinux.errno' is submodule of a package we did not load
[pid 15232] 17:22:31.147599 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'selinux.stat' is submodule of a package we did not load
[pid 15232] 17:22:31.147726 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.hashlib'
[pid 15232] 17:22:31.147850 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.ansible'
[pid 15232] 17:22:31.148004 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.ast'
[pid 15232] 17:22:31.160917 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.six.moves'
[pid 15232] 17:22:31.161189 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.codecs'
[pid 15232] 17:22:31.161331 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.parsing.ansible'
[pid 15232] 17:22:31.161474 D mitogen.ctx.ssh.host000: ansible_mitogen.runner: EnvironmentFileWatcher(u'/home/user/.pam_environment') installed; existing keys: []
[pid 15232] 17:22:31.161617 D mitogen.ctx.ssh.host000: ansible_mitogen.runner: EnvironmentFileWatcher(u'/etc/environment') installed; existing keys: []
[pid 15232] 17:22:31.161749 D mitogen.ctx.ssh.host000: mitogen: replaced Poller(0x7f4f2ecd7590) with EpollPoller(0x7f4f29997fd0) (new: 4 readers, 0 writers; old: 4 readers, 0 writers)
[pid 15232] 17:22:31.161882 D mitogen.ctx.ssh.host000: mitogen: Router(Broker(0x7f4f2ecd7450)).upgrade()
[pid 15232] 17:22:31.162033 D mitogen: IdAllocator(Router(Broker(0x7f425b637550))): allocating [3..1003)
[pid 15232] 17:22:31.162166 D mitogen: IdAllocator(Router(Broker(0x7f425b637550))): allocating [3..1003) to Context(2, u'ssh.host000')
[pid 15232] 17:22:31.190345 D mitogen.ctx.ssh.host000: mitogen: mitogen.fork.Stream(u'default').connect()
[pid 15232] 17:22:31.240012 D mitogen.ctx.ssh.host000: mitogen: mitogen.fork.Stream(u'fork.1374').connect(): child process stdin/stdout=16
[pid 15232] 17:22:31.240417 D mitogen: Adding route to 3 via mitogen.ssh.Stream(u'ssh.host000')
[pid 15232] 17:22:31.240546 D mitogen: Router(Broker(0x7f425b637550)).add_route(3, mitogen.ssh.Stream(u'ssh.host000'))
[pid 15232] 17:22:31.240693 D mitogen.ctx.ssh.host000: ansible_mitogen.target: Selected temp directory: u'/home/user/.ansible/tmp' (from [u'/home/user/.ansible/tmp', u'/var/tmp', u'/tmp', '/tmp', '/var/tmp', '/usr/tmp', '/home/user'])
[pid 15232] 17:22:31.240941 D mitogen.ctx.fork.1374: mitogen: register(Context(2, 'parent'), mitogen.core.Stream('parent'))
[pid 15232] 17:22:31.241090 D mitogen.ctx.fork.1374: mitogen: Connected to Context(2, 'parent'); my ID is 3, PID is 1374
[pid 15232] 17:22:31.241239 D mitogen.ctx.fork.1374: mitogen: Recovered sys.executable: '/usr/bin/python'
[pid 15232] 17:22:31.242328 D mitogen: CallChain(Context(2, u'ssh.host000')).call_async(): mitogen.parent._proxy_connect(method_name=u'sudo', name=None, kwargs=Kwargs({u'username': u'root', u'profiling': False, u'sudo_path': None, u'python_path': [u'/usr/bin/python'], 'unidirectional': True, u'debug': False, u'password': None, u'sudo_args': [u'-H', u'-S', u'-n'], u'connect_timeout': 10}))
[pid 15232] 17:22:31.270836 D mitogen: ModuleResponder(Router(Broker(0x7f425b637550)))._on_get_module('mitogen.sudo')
[pid 15232] 17:22:31.273081 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'mitogen.sudo')
[pid 15232] 17:22:31.304365 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.optparse'
[pid 15232] 17:22:31.356107 D mitogen.ctx.ssh.host000: mitogen: mitogen.sudo.Stream(u'default').connect()
[pid 15232] 17:22:31.356476 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'encodings.base64_codec' is submodule of a package we did not load
[pid 15232] 17:22:31.356624 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'encodings.base64' is submodule of a package we did not load
[pid 15232] 17:22:31.356762 D mitogen.ctx.ssh.host000: mitogen.sudo: sudo command line: ['sudo', '-u', u'root', '-H', u'/usr/bin/python', '-c', u'import codecs,os,sys;_=codecs.decode;exec(_(_("eNqFkTFrwzAQhef4V3iTRIQix5SCQdCSoXQoBVOaoQ1FtuVGxJGELMdNf30vTiF2OnS7j3t373GX07WwLXPaKUwiT/sR6ToGqK3fYZJFM6irzi0xpwnn5MI5HZOHbnLmsrGtwvkY/BjWY+gBwLA9gn0jA7juYyFiVEnfa4Niaaqhqb5U2QVZNGpoL7rWLwptFu4YttYgyDm7ks3FMHhQvtXWvGXpZrBV5qA9MLrPH1452ojp2FkD2OBpg05xjvBeB/upTLYvt9J2O31nbKV4wrwq2alkB9k1gbWcOVYHzYwKWZKmNwSRCEx6r4PCCUVPjy/PnPN3gyBcCYOYkGglPvDpC5V1ysDtkS8QgdWywsnyNk0JRd/awabaiYtuTVFfoNNjavdrsBrq87Gv1P1/6r8pk0nKH7qMt6A=".encode(),"base64"),"zip"))']
[pid 15232] 17:22:31.356905 D mitogen.ctx.ssh.host000: mitogen: hybrid_tty_create_child() pid=1376 stdio=18, tty=17, cmd: sudo -u root -H /usr/bin/python -c "import codecs,os,sys;_=codecs.decode;exec(_(_(\"eNqFkTFrwzAQhef4V3iTRIQix5SCQdCSoXQoBVOaoQ1FtuVGxJGELMdNf30vTiF2OnS7j3t373GX07WwLXPaKUwiT/sR6ToGqK3fYZJFM6irzi0xpwnn5MI5HZOHbnLmsrGtwvkY/BjWY+gBwLA9gn0jA7juYyFiVEnfa4Niaaqhqb5U2QVZNGpoL7rWLwptFu4YttYgyDm7ks3FMHhQvtXWvGXpZrBV5qA9MLrPH1452ojp2FkD2OBpg05xjvBeB/upTLYvt9J2O31nbKV4wrwq2alkB9k1gbWcOVYHzYwKWZKmNwSRCEx6r4PCCUVPjy/PnPN3gyBcCYOYkGglPvDpC5V1ysDtkS8QgdWywsnyNk0JRd/awabaiYtuTVFfoNNjavdrsBrq87Gv1P1/6r8pk0nKH7qMt6A=\".encode(),\"base64\"),\"zip\"))"
[pid 15232] 17:22:31.357043 D mitogen.ctx.ssh.host000: mitogen: mitogen.sudo.Stream(u'local.1376').connect(): child process stdin/stdout=18
[pid 15232] 17:22:31.416527 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received 'Traceback (most recent call last):\n'
[pid 15232] 17:22:31.429858 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received ' File "<string>", line 1, in <module>\n'
[pid 15232] 17:22:31.430206 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received ' File "<string>", line 16, in <module>\n'
[pid 15232] 17:22:31.430374 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received ' File "/usr/lib64/python2.7/encodings/zlib_codec.py", line 43, in zlib_decode\n'
[pid 15232] 17:22:31.430512 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received ' '
[pid 15232] 17:22:31.430643 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received 'output = zlib.decompress(input)\n'
[pid 15232] 17:22:31.430770 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received 'zlib'
[pid 15232] 17:22:31.430896 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received '.'
[pid 15232] 17:22:31.431022 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received 'error'
[pid 15232] 17:22:31.431145 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received ': '
[pid 15232] 17:22:31.431266 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received 'Error -5 while decompressing data: incomplete or truncated stream'
[pid 15232] 17:22:31.431384 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received '\n'
[pid 15232] 17:22:31.452079 D mitogen.ctx.ssh.host000: mitogen: mitogen.sudo.Stream(u'local.1376'): child process still alive, sending SIGTERM
[pid 15251] 17:22:31.453634 D mitogen: mitogen.core.Stream(u'unix_listener.15232').on_disconnect()
[pid 15251] 17:22:31.453927 D mitogen: Waker(Broker(0x7f4255db0590) rfd=13, wfd=14).on_disconnect()
[pid 15232] 17:22:31.453950 D mitogen: mitogen.core.Stream(u'unix_client.15251').on_disconnect()
fatal: [host000]: FAILED! => {
"msg": "error occurred on host host000: EOF on stream; last 300 bytes received: u'ck (most recent call last):\\n File \"<string>\", line 1, in <module>\\n File \"<string>\", line 16, in <module>\\n File \"/usr/lib64/python2.7/encodings/zlib_codec.py\", line 43, in zlib_decode\\n output = zlib.decompress(input)\\nzlib.error: Error -5 while decompressing data: incomplete or truncated stream\\n'"
}
PLAY RECAP **************************************************************************************************************************************************************************************************
host000 : ok=0 changed=0 unreachable=0 failed=1
[pid 15232] 17:22:31.462573 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-1'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.462856 D mitogen: Waker(Broker(0x7f425b637550) rfd=9, wfd=11).on_disconnect()
[pid 15232] 17:22:31.462928 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-2'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.463001 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-3'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.463093 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-4'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.463238 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-6'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.463323 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-5'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.463505 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-7'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.463575 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-8'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.463644 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-9'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.463838 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-10'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.463927 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-11'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.464219 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-12'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.464352 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-13'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.464691 D mitogen: mitogen.ssh.Stream(u'ssh.host000') closing CALL_FUNCTION channel
[pid 15232] 17:22:31.464917 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-0'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.464975 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-14'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.465169 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-15'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.467477 D mitogen: <mitogen.unix.Listener object at 0x7f425b637a50>.on_disconnect()
[pid 15232] 17:22:31.468454 D mitogen: mitogen.parent.DiagLogStream(fd=65, u'ssh.host000').on_disconnect()
[pid 15232] 17:22:31.498133 D mitogen.ctx.ssh.host000: mitogen: mitogen.fork.Stream(u'fork.1374') closing CALL_FUNCTION channel
[pid 15232] 17:22:31.498522 D mitogen.ctx.ssh.host000: mitogen: Waker(Broker(0x7f4f2ecd7450) rfd=3, wfd=4).on_disconnect()
[pid 15232] 17:22:31.498667 D mitogen.ctx.ssh.host000: mitogen: <IoLogger stderr>.on_disconnect()
[pid 15232] 17:22:31.498832 D mitogen.ctx.ssh.host000: mitogen: <IoLogger stdout>.on_disconnect()
[pid 15232] 17:22:31.548280 D mitogen.ctx.fork.1374: mitogen: _on_shutdown_msg(Message(3, 2, 2, 106, 0, ''..0))
[pid 15232] 17:22:31.549526 D mitogen: mitogen.parent.DiagLogStream(fd=65, u'ssh.host000').on_disconnect()
[pid 15232] 17:22:31.549745 D mitogen: mitogen.ssh.Stream(u'ssh.host000'): PID 15254 exited with return code 255
[pid 15232] 17:22:31.549866 D mitogen: mitogen.ssh.Stream(u'ssh.host000').on_disconnect()
[pid 15232] 17:22:31.550030 D mitogen: mitogen.ssh.Stream(u'ssh.host000') is gone; propagating DEL_ROUTE for set([2, 3])
[pid 15232] 17:22:31.550157 D mitogen: Router(Broker(0x7f425b637550)).del_route(2)
[pid 15232] 17:22:31.550277 D mitogen: Router(Broker(0x7f425b637550)).del_route(3)
[pid 15232] 17:22:31.550450 I ansible_mitogen.services: Dropping Context(2, u'ssh.host000') due to disconnect of mitogen.ssh.Stream(u'ssh.host000')
```
</details>
|
1.0
|
Incompatible with CentOS 7.5 with SELinux enabled - Problem reported in `#ansible` by a user running CentOS 7.5 with SELinux enabled. The SSH connection succeeds, but the sudo connection fails, apparently because SELinux prevents passing a socket from the less privileged process to the more privileged process via sudo.
https://danwalsh.livejournal.com/74421.html looks like a similar issue, and describes the mechanism in play. One simple fallback would be to parameterize `hybrid_tty_create_child()` so that, when SELinux is present, no socket is used, but that makes file transfer very slow.
Another option is to document how to fix up the SELinux rules.
Another option is to replace the socket usage with pipe usage on Linux when SELinux is present. That gives us a 64KiB buffer, which is only half the 128KiB we want, but still much better than the ~3KiB the TTY layer gives us.
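The pipe fallback could be sketched roughly as below. This is a minimal illustration only, not mitogen's actual code: `selinux_enforcing` and `make_stdio_pair` are hypothetical names, and a real stdio replacement would need two pipes, since a pipe (unlike a socketpair) is unidirectional.

```python
import os
import socket


def selinux_enforcing():
    # Hypothetical helper: on Linux the selinuxfs file
    # /sys/fs/selinux/enforce reads "1" when SELinux is enforcing;
    # if the file is absent, SELinux is not in play.
    try:
        with open('/sys/fs/selinux/enforce') as fp:
            return fp.read().strip() == '1'
    except (IOError, OSError):
        return False


def make_stdio_pair():
    # Prefer a SOCK_STREAM socketpair for its larger kernel buffer,
    # but fall back to a plain pipe (~64 KiB buffer) when SELinux is
    # enforcing, since SELinux may refuse to let a child started via
    # sudo inherit the socket.
    if selinux_enforcing():
        read_fd, write_fd = os.pipe()
        return read_fd, write_fd
    parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    # detach() hands over the raw fds so the socket objects do not
    # close them when garbage-collected.
    return parent.detach(), child.detach()
```

Either way the caller only ever sees two file descriptors, so the rest of the stream machinery would not need to know which variant it got.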
<details><summary>Log</summary>
```
ansible-playbook local.yml -i inventory/ --limit host000 --tags selinux -vvv
ansible-playbook 2.5.4
config file = /home/user/repositories/service/ansible/ansible.cfg
configured module search path = [u'/home/user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.6 (default, Nov 23 2017, 15:49:48) [GCC 4.8.4]
Using /home/user/repositories/service/ansible/ansible.cfg as config file
Parsed /home/user/repositories/service/ansible/inventory/hosts inventory source with ini plugin
PLAYBOOK: local.yml *****************************************************************************************************************************************************************************************
1 plays in local.yml
PLAY [all] **************************************************************************************************************************************************************************************************
[pid 15232] 17:22:29.042213 D mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='MainThread'): initialized
[pid 15232] 17:22:29.043184 D ansible_mitogen.process: Service pool configured: size=16
TASK [Gathering Facts] **************************************************************************************************************************************************************************************
task path: /home/user/repositories/service/ansible/local.yml:2
[pid 15251] 17:22:29.179536 D mitogen: unix.connect(path='/tmp/mitogen_unix_a0Len9')
[pid 15251] 17:22:29.180703 D mitogen: unix.connect(): local ID is 1, remote is 0
[pid 15232] 17:22:29.205069 D mitogen: mitogen.ssh.Stream(u'default').connect()
[pid 15232] 17:22:29.349278 D mitogen: create_child() child 15254 fd 62, parent 15232, cmd: ssh -o "LogLevel ERROR" -o "Compression yes" -o "ServerAliveInterval 15" -o "ServerAliveCountMax 3" -o "BatchMode yes" -o "StrictHostKeyChecking yes" -C -o ControlMaster=auto -o ControlPersist=60s host000 /usr/bin/python -c "'import codecs,os,sys;_=codecs.decode;exec(_(_(\"eNqFkd1LwzAUxZ/Xv6JvSVjMmg4RCgVlA93DEIpsDzqkH6kLa5OQfjn/eu86Ze188O3+OCc5h3sjug11xYw0AhPH0m5AMncBcm0PmATOBOasMT72KPc8cuGIDsmCys+cFroSOBqCHcJ2CB0ABFZHiC/iGlJLNwxdlMW2kwq5scp6UXyKtKnjpBC9PGsqO0ukmpljvdcKQc/JlW0a9g9bYSup1Wsw3/WxQrXSAqOH6HHjoV04fnb2ABZ4LNAxThEuZa0/hArKdB/r5iDvf4ebzXK1Xj6xtmRtJlnFmWF5LZkSdcBv/blPEHEgprOyFphTtF69PHue96YQ1Et1BlcgziJ8x6c7ZNoIBdtHNkGEWRFnmPt3nBOKvqSBn3ITXnxbiroEnU6Tm5+ART+f133l7v5z/23JRy2/Ae+3t+0=\".encode(),\"base64\"),\"zip\"))'"
[pid 15232] 17:22:29.351328 D mitogen: mitogen.ssh.Stream(u'local.15254').connect(): child process stdin/stdout=62
[pid 15232] 17:22:30.665728 D mitogen: mitogen.ssh.Stream(u'local.15254'): received 'MITO000\n'
[pid 15232] 17:22:30.666094 D mitogen: mitogen.ssh.Stream(u'local.15254')._ec0_received()
[pid 15232] 17:22:30.703335 D mitogen: CallChain(Context(2, u'ssh.host000')).call_async(): ansible_mitogen.target.init_child(candidate_temp_dirs=[u'~/.ansible/tmp', u'/var/tmp', u'/tmp'], log_level=10)
[pid 15232] 17:22:30.707757 D mitogen: _build_tuple('/usr/lib/python2.7/dist-packages/ansible/__init__.py', u'ansible') -> [u'cli', u'compat', u'config', u'constants', u'errors', u'executor', u'galaxy', u'inventory', u'module_utils', u'modules', u'parsing', u'playbook', u'plugins', u'release', u'template', u'utils', u'vars']
[pid 15232] 17:22:30.708795 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible.release')
[pid 15232] 17:22:30.709204 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible')
[pid 15232] 17:22:30.711580 D mitogen: _build_tuple('/usr/lib/python2.7/dist-packages/ansible/module_utils/__init__.py', u'ansible.module_utils') -> [u'_text', u'ansible_tower', u'api', u'aws', u'azure_rm_common', u'basic', u'cloud', u'cloudscale', u'cloudstack', u'common', u'connection', u'crypto', u'database', u'digital_ocean', u'dimensiondata', u'docker_common', u'ec2', u'exoscale', u'f5_utils', u'facts', u'gcdns', u'gce', u'gcp', u'gcp_utils', u'infinibox', u'influxdb', u'ipa', u'ismount', u'json_utils', u'k8s', u'keycloak', u'known_hosts', u'lxd', u'manageiq', u'mysql', u'net_tools', u'netapp', u'network', u'oneandone', u'oneview', u'openstack', u'ovirt', u'parsing', u'postgres', u'powershell', u'pure', u'pycompat24', u'rax', u'redhat', u'remote_management', u'service', u'six', u'splitter', u'univention_umc', u'urls', u'vca', u'vmware', u'vultr']
[pid 15232] 17:22:30.711926 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible.module_utils')
[pid 15232] 17:22:30.794555 D mitogen: _get_module_via_sys_modules('_selinux') -> <module '_selinux' from '/usr/lib/python2.7/dist-packages/selinux/_selinux.so'>
[pid 15232] 17:22:30.795158 D mitogen: get_module_source('_selinux'): cannot find source
[pid 15232] 17:22:30.795947 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible.module_utils._text')
[pid 15232] 17:22:30.796747 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible.module_utils.parsing')
[pid 15232] 17:22:30.797202 D mitogen: _build_tuple('/usr/lib/python2.7/dist-packages/ansible/module_utils/parsing/__init__.py', u'ansible.module_utils.parsing') -> [u'convert_bool']
[pid 15232] 17:22:30.797406 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible.module_utils.parsing.convert_bool')
[pid 15232] 17:22:30.797662 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible.module_utils.pycompat24')
[pid 15232] 17:22:30.797981 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible.module_utils.six')
[pid 15232] 17:22:30.798254 D mitogen: _build_tuple('/usr/lib/python2.7/dist-packages/ansible/module_utils/six/__init__.py', u'ansible.module_utils.six') -> []
[pid 15232] 17:22:30.800194 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible.module_utils.basic')
[pid 15232] 17:22:30.801222 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible.module_utils.json_utils')
[pid 15232] 17:22:30.802056 D mitogen: _build_tuple('/home/user/repositories/mitogen-0.2.3/ansible_mitogen/__init__.py', u'ansible_mitogen') -> [u'connection', u'loaders', u'logging', u'mixins', u'module_finder', u'parsing', u'planner', u'plugins', u'process', u'runner', u'services', u'strategy', u'target']
[pid 15232] 17:22:30.802237 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible_mitogen')
[pid 15232] 17:22:30.849209 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible_mitogen.target')
[pid 15232] 17:22:30.851064 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'mitogen.compat')
[pid 15232] 17:22:30.851395 D mitogen: _build_tuple('/home/user/repositories/mitogen-0.2.3/mitogen/compat/__init__.py', u'mitogen.compat') -> [u'functools', u'pkgutil', u'tokenize']
[pid 15232] 17:22:30.851581 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'mitogen.compat.functools')
[pid 15232] 17:22:30.852326 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'mitogen.fork')
[pid 15232] 17:22:30.852746 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'mitogen.parent')
[pid 15232] 17:22:30.860418 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'mitogen.select')
[pid 15232] 17:22:30.860880 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'mitogen.service')
[pid 15232] 17:22:30.864015 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'ansible_mitogen.runner')
[pid 15232] 17:22:30.864826 D mitogen: _build_tuple('/home/user/repositories/mitogen-0.2.3/mitogen/__init__.py', u'mitogen') -> [u'compat', u'core', u'debug', u'doas', u'docker', u'fakessh', u'fork', u'jail', u'kubectl', u'lxc', u'lxd', u'master', u'minify', u'parent', u'select', u'service', u'setns', u'ssh', u'su', u'sudo', u'unix', u'utils']
[pid 15232] 17:22:30.990868 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'encodings.utf_8' is submodule of a package we did not load
[pid 15232] 17:22:30.991124 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'json.decoder' is submodule of a package we did not load
[pid 15232] 17:22:30.991290 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'json.re' is submodule of a package we did not load
[pid 15232] 17:22:30.991453 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'json.sys' is submodule of a package we did not load
[pid 15232] 17:22:30.991611 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'json.struct' is submodule of a package we did not load
[pid 15232] 17:22:30.991748 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'json.json' is submodule of a package we did not load
[pid 15232] 17:22:30.991884 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'json.scanner' is submodule of a package we did not load
[pid 15232] 17:22:30.992010 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'json._json' is submodule of a package we did not load
[pid 15232] 17:22:31.003845 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'encodings.hex_codec' is submodule of a package we did not load
[pid 15232] 17:22:31.004105 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'encodings.binascii' is submodule of a package we did not load
[pid 15232] 17:22:31.004326 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'json.encoder' is submodule of a package we did not load
[pid 15232] 17:22:31.017231 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.logging'
[pid 15232] 17:22:31.017591 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.os'
[pid 15232] 17:22:31.017744 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.random'
[pid 15232] 17:22:31.017903 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.sys'
[pid 15232] 17:22:31.018060 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.threading'
[pid 15232] 17:22:31.018195 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.traceback'
[pid 15232] 17:22:31.018345 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.mitogen'
[pid 15232] 17:22:31.030246 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.codecs'
[pid 15232] 17:22:31.030575 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.errno'
[pid 15232] 17:22:31.030707 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.fcntl'
[pid 15232] 17:22:31.030833 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.getpass'
[pid 15232] 17:22:31.030964 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.inspect'
[pid 15232] 17:22:31.048435 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.signal'
[pid 15232] 17:22:31.061904 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.socket'
[pid 15232] 17:22:31.062272 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.subprocess'
[pid 15232] 17:22:31.062448 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.termios'
[pid 15232] 17:22:31.062584 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.textwrap'
[pid 15232] 17:22:31.062713 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.time'
[pid 15232] 17:22:31.062846 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.zlib'
[pid 15232] 17:22:31.062975 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.cStringIO'
[pid 15232] 17:22:31.063099 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.functools'
[pid 15232] 17:22:31.063221 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.compat.threading'
[pid 15232] 17:22:31.063341 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.grp'
[pid 15232] 17:22:31.063459 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.pprint'
[pid 15232] 17:22:31.075043 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.pwd'
[pid 15232] 17:22:31.075329 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.stat'
[pid 15232] 17:22:31.075465 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.json'
[pid 15232] 17:22:31.075601 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'ctypes.os' is submodule of a package we did not load
[pid 15232] 17:22:31.075731 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'ctypes.sys' is submodule of a package we did not load
[pid 15232] 17:22:31.075851 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'ctypes._ctypes' is submodule of a package we did not load
[pid 15232] 17:22:31.088423 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'ctypes.struct' is submodule of a package we did not load
[pid 15232] 17:22:31.088715 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'ctypes.ctypes' is submodule of a package we did not load
[pid 15232] 17:22:31.088852 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'ctypes._endian' is submodule of a package we did not load
[pid 15232] 17:22:31.118436 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.locale'
[pid 15232] 17:22:31.127560 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.os'
[pid 15232] 17:22:31.127926 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.re'
[pid 15232] 17:22:31.128077 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.shlex'
[pid 15232] 17:22:31.128217 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.subprocess'
[pid 15232] 17:22:31.128347 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.sys'
[pid 15232] 17:22:31.128471 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.types'
[pid 15232] 17:22:31.128595 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.time'
[pid 15232] 17:22:31.128720 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.select'
[pid 15232] 17:22:31.128845 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.shutil'
[pid 15232] 17:22:31.128965 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.stat'
[pid 15232] 17:22:31.129079 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.tempfile'
[pid 15232] 17:22:31.129195 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.traceback'
[pid 15232] 17:22:31.129304 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.grp'
[pid 15232] 17:22:31.133888 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.pwd'
[pid 15232] 17:22:31.134175 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.platform'
[pid 15232] 17:22:31.134339 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.errno'
[pid 15232] 17:22:31.134476 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.datetime'
[pid 15232] 17:22:31.145892 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.collections'
[pid 15232] 17:22:31.146258 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.itertools'
[pid 15232] 17:22:31.146430 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.syslog'
[pid 15232] 17:22:31.146599 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.systemd'
[pid 15232] 17:22:31.146741 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.selinux'
[pid 15232] 17:22:31.146915 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'selinux.sys' is submodule of a package we did not load
[pid 15232] 17:22:31.147046 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'selinux.os' is submodule of a package we did not load
[pid 15232] 17:22:31.147192 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'selinux.imp' is submodule of a package we did not load
[pid 15232] 17:22:31.147342 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'selinux.shutil' is submodule of a package we did not load
[pid 15232] 17:22:31.147472 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'selinux.errno' is submodule of a package we did not load
[pid 15232] 17:22:31.147599 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'selinux.stat' is submodule of a package we did not load
[pid 15232] 17:22:31.147726 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.hashlib'
[pid 15232] 17:22:31.147850 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.ansible'
[pid 15232] 17:22:31.148004 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.ast'
[pid 15232] 17:22:31.160917 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.six.moves'
[pid 15232] 17:22:31.161189 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.codecs'
[pid 15232] 17:22:31.161331 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'ansible.module_utils.parsing.ansible'
[pid 15232] 17:22:31.161474 D mitogen.ctx.ssh.host000: ansible_mitogen.runner: EnvironmentFileWatcher(u'/home/user/.pam_environment') installed; existing keys: []
[pid 15232] 17:22:31.161617 D mitogen.ctx.ssh.host000: ansible_mitogen.runner: EnvironmentFileWatcher(u'/etc/environment') installed; existing keys: []
[pid 15232] 17:22:31.161749 D mitogen.ctx.ssh.host000: mitogen: replaced Poller(0x7f4f2ecd7590) with EpollPoller(0x7f4f29997fd0) (new: 4 readers, 0 writers; old: 4 readers, 0 writers)
[pid 15232] 17:22:31.161882 D mitogen.ctx.ssh.host000: mitogen: Router(Broker(0x7f4f2ecd7450)).upgrade()
[pid 15232] 17:22:31.162033 D mitogen: IdAllocator(Router(Broker(0x7f425b637550))): allocating [3..1003)
[pid 15232] 17:22:31.162166 D mitogen: IdAllocator(Router(Broker(0x7f425b637550))): allocating [3..1003) to Context(2, u'ssh.host000')
[pid 15232] 17:22:31.190345 D mitogen.ctx.ssh.host000: mitogen: mitogen.fork.Stream(u'default').connect()
[pid 15232] 17:22:31.240012 D mitogen.ctx.ssh.host000: mitogen: mitogen.fork.Stream(u'fork.1374').connect(): child process stdin/stdout=16
[pid 15232] 17:22:31.240417 D mitogen: Adding route to 3 via mitogen.ssh.Stream(u'ssh.host000')
[pid 15232] 17:22:31.240546 D mitogen: Router(Broker(0x7f425b637550)).add_route(3, mitogen.ssh.Stream(u'ssh.host000'))
[pid 15232] 17:22:31.240693 D mitogen.ctx.ssh.host000: ansible_mitogen.target: Selected temp directory: u'/home/user/.ansible/tmp' (from [u'/home/user/.ansible/tmp', u'/var/tmp', u'/tmp', '/tmp', '/var/tmp', '/usr/tmp', '/home/user'])
[pid 15232] 17:22:31.240941 D mitogen.ctx.fork.1374: mitogen: register(Context(2, 'parent'), mitogen.core.Stream('parent'))
[pid 15232] 17:22:31.241090 D mitogen.ctx.fork.1374: mitogen: Connected to Context(2, 'parent'); my ID is 3, PID is 1374
[pid 15232] 17:22:31.241239 D mitogen.ctx.fork.1374: mitogen: Recovered sys.executable: '/usr/bin/python'
[pid 15232] 17:22:31.242328 D mitogen: CallChain(Context(2, u'ssh.host000')).call_async(): mitogen.parent._proxy_connect(method_name=u'sudo', name=None, kwargs=Kwargs({u'username': u'root', u'profiling': False, u'sudo_path': None, u'python_path': [u'/usr/bin/python'], 'unidirectional': True, u'debug': False, u'password': None, u'sudo_args': [u'-H', u'-S', u'-n'], u'connect_timeout': 10}))
[pid 15232] 17:22:31.270836 D mitogen: ModuleResponder(Router(Broker(0x7f425b637550)))._on_get_module('mitogen.sudo')
[pid 15232] 17:22:31.273081 D mitogen: _send_load_module(mitogen.ssh.Stream(u'ssh.host000'), u'mitogen.sudo')
[pid 15232] 17:22:31.304365 D mitogen.ctx.ssh.host000: mitogen: Importer(): master doesn't know 'mitogen.optparse'
[pid 15232] 17:22:31.356107 D mitogen.ctx.ssh.host000: mitogen: mitogen.sudo.Stream(u'default').connect()
[pid 15232] 17:22:31.356476 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'encodings.base64_codec' is submodule of a package we did not load
[pid 15232] 17:22:31.356624 D mitogen.ctx.ssh.host000: mitogen: Importer(): 'encodings.base64' is submodule of a package we did not load
[pid 15232] 17:22:31.356762 D mitogen.ctx.ssh.host000: mitogen.sudo: sudo command line: ['sudo', '-u', u'root', '-H', u'/usr/bin/python', '-c', u'import codecs,os,sys;_=codecs.decode;exec(_(_("eNqFkTFrwzAQhef4V3iTRIQix5SCQdCSoXQoBVOaoQ1FtuVGxJGELMdNf30vTiF2OnS7j3t373GX07WwLXPaKUwiT/sR6ToGqK3fYZJFM6irzi0xpwnn5MI5HZOHbnLmsrGtwvkY/BjWY+gBwLA9gn0jA7juYyFiVEnfa4Niaaqhqb5U2QVZNGpoL7rWLwptFu4YttYgyDm7ks3FMHhQvtXWvGXpZrBV5qA9MLrPH1452ojp2FkD2OBpg05xjvBeB/upTLYvt9J2O31nbKV4wrwq2alkB9k1gbWcOVYHzYwKWZKmNwSRCEx6r4PCCUVPjy/PnPN3gyBcCYOYkGglPvDpC5V1ysDtkS8QgdWywsnyNk0JRd/awabaiYtuTVFfoNNjavdrsBrq87Gv1P1/6r8pk0nKH7qMt6A=".encode(),"base64"),"zip"))']
[pid 15232] 17:22:31.356905 D mitogen.ctx.ssh.host000: mitogen: hybrid_tty_create_child() pid=1376 stdio=18, tty=17, cmd: sudo -u root -H /usr/bin/python -c "import codecs,os,sys;_=codecs.decode;exec(_(_(\"eNqFkTFrwzAQhef4V3iTRIQix5SCQdCSoXQoBVOaoQ1FtuVGxJGELMdNf30vTiF2OnS7j3t373GX07WwLXPaKUwiT/sR6ToGqK3fYZJFM6irzi0xpwnn5MI5HZOHbnLmsrGtwvkY/BjWY+gBwLA9gn0jA7juYyFiVEnfa4Niaaqhqb5U2QVZNGpoL7rWLwptFu4YttYgyDm7ks3FMHhQvtXWvGXpZrBV5qA9MLrPH1452ojp2FkD2OBpg05xjvBeB/upTLYvt9J2O31nbKV4wrwq2alkB9k1gbWcOVYHzYwKWZKmNwSRCEx6r4PCCUVPjy/PnPN3gyBcCYOYkGglPvDpC5V1ysDtkS8QgdWywsnyNk0JRd/awabaiYtuTVFfoNNjavdrsBrq87Gv1P1/6r8pk0nKH7qMt6A=\".encode(),\"base64\"),\"zip\"))"
[pid 15232] 17:22:31.357043 D mitogen.ctx.ssh.host000: mitogen: mitogen.sudo.Stream(u'local.1376').connect(): child process stdin/stdout=18
[pid 15232] 17:22:31.416527 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received 'Traceback (most recent call last):\n'
[pid 15232] 17:22:31.429858 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received ' File "<string>", line 1, in <module>\n'
[pid 15232] 17:22:31.430206 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received ' File "<string>", line 16, in <module>\n'
[pid 15232] 17:22:31.430374 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received ' File "/usr/lib64/python2.7/encodings/zlib_codec.py", line 43, in zlib_decode\n'
[pid 15232] 17:22:31.430512 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received ' '
[pid 15232] 17:22:31.430643 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received 'output = zlib.decompress(input)\n'
[pid 15232] 17:22:31.430770 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received 'zlib'
[pid 15232] 17:22:31.430896 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received '.'
[pid 15232] 17:22:31.431022 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received 'error'
[pid 15232] 17:22:31.431145 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received ': '
[pid 15232] 17:22:31.431266 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received 'Error -5 while decompressing data: incomplete or truncated stream'
[pid 15232] 17:22:31.431384 D mitogen.ctx.ssh.host000: mitogen.sudo: mitogen.sudo.Stream(u'local.1376'): received '\n'
[pid 15232] 17:22:31.452079 D mitogen.ctx.ssh.host000: mitogen: mitogen.sudo.Stream(u'local.1376'): child process still alive, sending SIGTERM
[pid 15251] 17:22:31.453634 D mitogen: mitogen.core.Stream(u'unix_listener.15232').on_disconnect()
[pid 15251] 17:22:31.453927 D mitogen: Waker(Broker(0x7f4255db0590) rfd=13, wfd=14).on_disconnect()
[pid 15232] 17:22:31.453950 D mitogen: mitogen.core.Stream(u'unix_client.15251').on_disconnect()
fatal: [host000]: FAILED! => {
"msg": "error occurred on host host000: EOF on stream; last 300 bytes received: u'ck (most recent call last):\\n File \"<string>\", line 1, in <module>\\n File \"<string>\", line 16, in <module>\\n File \"/usr/lib64/python2.7/encodings/zlib_codec.py\", line 43, in zlib_decode\\n output = zlib.decompress(input)\\nzlib.error: Error -5 while decompressing data: incomplete or truncated stream\\n'"
}
PLAY RECAP **************************************************************************************************************************************************************************************************
host000 : ok=0 changed=0 unreachable=0 failed=1
[pid 15232] 17:22:31.462573 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-1'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.462856 D mitogen: Waker(Broker(0x7f425b637550) rfd=9, wfd=11).on_disconnect()
[pid 15232] 17:22:31.462928 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-2'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.463001 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-3'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.463093 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-4'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.463238 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-6'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.463323 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-5'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.463505 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-7'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.463575 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-8'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.463644 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-9'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.463838 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-10'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.463927 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-11'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.464219 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-12'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.464352 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-13'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.464691 D mitogen: mitogen.ssh.Stream(u'ssh.host000') closing CALL_FUNCTION channel
[pid 15232] 17:22:31.464917 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-0'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.464975 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-14'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.465169 I mitogen: mitogen.service.Pool(0x7f425b637dd0, size=16, th='mitogen.service.Pool.7f425b637dd0.worker-15'): channel or latch closed, exitting: None
[pid 15232] 17:22:31.467477 D mitogen: <mitogen.unix.Listener object at 0x7f425b637a50>.on_disconnect()
[pid 15232] 17:22:31.468454 D mitogen: mitogen.parent.DiagLogStream(fd=65, u'ssh.host000').on_disconnect()
[pid 15232] 17:22:31.498133 D mitogen.ctx.ssh.host000: mitogen: mitogen.fork.Stream(u'fork.1374') closing CALL_FUNCTION channel
[pid 15232] 17:22:31.498522 D mitogen.ctx.ssh.host000: mitogen: Waker(Broker(0x7f4f2ecd7450) rfd=3, wfd=4).on_disconnect()
[pid 15232] 17:22:31.498667 D mitogen.ctx.ssh.host000: mitogen: <IoLogger stderr>.on_disconnect()
[pid 15232] 17:22:31.498832 D mitogen.ctx.ssh.host000: mitogen: <IoLogger stdout>.on_disconnect()
[pid 15232] 17:22:31.548280 D mitogen.ctx.fork.1374: mitogen: _on_shutdown_msg(Message(3, 2, 2, 106, 0, ''..0))
[pid 15232] 17:22:31.549526 D mitogen: mitogen.parent.DiagLogStream(fd=65, u'ssh.host000').on_disconnect()
[pid 15232] 17:22:31.549745 D mitogen: mitogen.ssh.Stream(u'ssh.host000'): PID 15254 exited with return code 255
[pid 15232] 17:22:31.549866 D mitogen: mitogen.ssh.Stream(u'ssh.host000').on_disconnect()
[pid 15232] 17:22:31.550030 D mitogen: mitogen.ssh.Stream(u'ssh.host000') is gone; propagating DEL_ROUTE for set([2, 3])
[pid 15232] 17:22:31.550157 D mitogen: Router(Broker(0x7f425b637550)).del_route(2)
[pid 15232] 17:22:31.550277 D mitogen: Router(Broker(0x7f425b637550)).del_route(3)
[pid 15232] 17:22:31.550450 I ansible_mitogen.services: Dropping Context(2, u'ssh.host000') due to disconnect of mitogen.ssh.Stream(u'ssh.host000')
```
</details>
|
non_process
|
incompatible with centos with selinux enabled problem reported in ansible user is running centos with selinux enabled ssh connection succeeds but sudo connection fails it looks like because selinux is preventing the passing of a socket from the lesser privileged process to the more privileged process via sudo looks like a similar issue and describes the mechanism in play one simple fallback would be to parameterize hybrid tty create child child use so that for selinux no socket is used but that makes file transfer very slow another option is to document how to fix up the selinux rules another option is to replace the socket usage with pipe usage on linux when selinux is present that gives us a buffer which is only half the we want but still much better than the the tty layer gives us log ansible playbook local yml i inventory limit tags selinux vvv ansible playbook config file home user repositories service ansible ansible cfg configured module search path ansible python module location usr lib dist packages ansible executable location usr bin ansible playbook python version default nov using home user repositories service ansible ansible cfg as config file parsed home user repositories service ansible inventory hosts inventory source with ini plugin playbook local yml plays in local yml play d mitogen mitogen service pool size th mainthread initialized d ansible mitogen process service pool configured size task task path home user repositories service ansible local yml d mitogen unix connect path tmp mitogen unix d mitogen unix connect local id is remote is d mitogen mitogen ssh stream u default connect d mitogen create child child fd parent cmd ssh o loglevel error o compression yes o serveraliveinterval o serveralivecountmax o batchmode yes o stricthostkeychecking yes c o controlmaster auto o controlpersist usr bin python c import codecs os sys codecs decode exec harkdb art ae encode zip d mitogen mitogen ssh stream u local connect child process stdin stdout d 
mitogen mitogen ssh stream u local received n d mitogen mitogen ssh stream u local received d mitogen callchain context u ssh call async ansible mitogen target init child candidate temp dirs log level d mitogen build tuple usr lib dist packages ansible init py u ansible d mitogen send load module mitogen ssh stream u ssh u ansible release d mitogen send load module mitogen ssh stream u ssh u ansible d mitogen build tuple usr lib dist packages ansible module utils init py u ansible module utils d mitogen send load module mitogen ssh stream u ssh u ansible module utils d mitogen get module via sys modules selinux d mitogen get module source selinux cannot find source d mitogen send load module mitogen ssh stream u ssh u ansible module utils text d mitogen send load module mitogen ssh stream u ssh u ansible module utils parsing d mitogen build tuple usr lib dist packages ansible module utils parsing init py u ansible module utils parsing d mitogen send load module mitogen ssh stream u ssh u ansible module utils parsing convert bool d mitogen send load module mitogen ssh stream u ssh u ansible module utils d mitogen send load module mitogen ssh stream u ssh u ansible module utils six d mitogen build tuple usr lib dist packages ansible module utils six init py u ansible module utils six d mitogen send load module mitogen ssh stream u ssh u ansible module utils basic d mitogen send load module mitogen ssh stream u ssh u ansible module utils json utils d mitogen build tuple home user repositories mitogen ansible mitogen init py u ansible mitogen d mitogen send load module mitogen ssh stream u ssh u ansible mitogen d mitogen send load module mitogen ssh stream u ssh u ansible mitogen target d mitogen send load module mitogen ssh stream u ssh u mitogen compat d mitogen build tuple home user repositories mitogen mitogen compat init py u mitogen compat d mitogen send load module mitogen ssh stream u ssh u mitogen compat functools d mitogen send load module mitogen ssh stream 
u ssh u mitogen fork d mitogen send load module mitogen ssh stream u ssh u mitogen parent d mitogen send load module mitogen ssh stream u ssh u mitogen select d mitogen send load module mitogen ssh stream u ssh u mitogen service d mitogen send load module mitogen ssh stream u ssh u ansible mitogen runner d mitogen build tuple home user repositories mitogen mitogen init py u mitogen d mitogen ctx ssh mitogen importer encodings utf is submodule of a package we did not load d mitogen ctx ssh mitogen importer json decoder is submodule of a package we did not load d mitogen ctx ssh mitogen importer json re is submodule of a package we did not load d mitogen ctx ssh mitogen importer json sys is submodule of a package we did not load d mitogen ctx ssh mitogen importer json struct is submodule of a package we did not load d mitogen ctx ssh mitogen importer json json is submodule of a package we did not load d mitogen ctx ssh mitogen importer json scanner is submodule of a package we did not load d mitogen ctx ssh mitogen importer json json is submodule of a package we did not load d mitogen ctx ssh mitogen importer encodings hex codec is submodule of a package we did not load d mitogen ctx ssh mitogen importer encodings binascii is submodule of a package we did not load d mitogen ctx ssh mitogen importer json encoder is submodule of a package we did not load d mitogen ctx ssh mitogen importer master doesn t know mitogen logging d mitogen ctx ssh mitogen importer master doesn t know mitogen os d mitogen ctx ssh mitogen importer master doesn t know mitogen random d mitogen ctx ssh mitogen importer master doesn t know mitogen sys d mitogen ctx ssh mitogen importer master doesn t know mitogen threading d mitogen ctx ssh mitogen importer master doesn t know mitogen traceback d mitogen ctx ssh mitogen importer master doesn t know mitogen mitogen d mitogen ctx ssh mitogen importer master doesn t know mitogen codecs d mitogen ctx ssh mitogen importer master doesn t know mitogen 
errno d mitogen ctx ssh mitogen importer master doesn t know mitogen fcntl d mitogen ctx ssh mitogen importer master doesn t know mitogen getpass d mitogen ctx ssh mitogen importer master doesn t know mitogen inspect d mitogen ctx ssh mitogen importer master doesn t know mitogen signal d mitogen ctx ssh mitogen importer master doesn t know mitogen socket d mitogen ctx ssh mitogen importer master doesn t know mitogen subprocess d mitogen ctx ssh mitogen importer master doesn t know mitogen termios d mitogen ctx ssh mitogen importer master doesn t know mitogen textwrap d mitogen ctx ssh mitogen importer master doesn t know mitogen time d mitogen ctx ssh mitogen importer master doesn t know mitogen zlib d mitogen ctx ssh mitogen importer master doesn t know mitogen cstringio d mitogen ctx ssh mitogen importer master doesn t know mitogen functools d mitogen ctx ssh mitogen importer master doesn t know mitogen compat threading d mitogen ctx ssh mitogen importer master doesn t know mitogen grp d mitogen ctx ssh mitogen importer master doesn t know mitogen pprint d mitogen ctx ssh mitogen importer master doesn t know mitogen pwd d mitogen ctx ssh mitogen importer master doesn t know mitogen stat d mitogen ctx ssh mitogen importer master doesn t know ansible module utils json d mitogen ctx ssh mitogen importer ctypes os is submodule of a package we did not load d mitogen ctx ssh mitogen importer ctypes sys is submodule of a package we did not load d mitogen ctx ssh mitogen importer ctypes ctypes is submodule of a package we did not load d mitogen ctx ssh mitogen importer ctypes struct is submodule of a package we did not load d mitogen ctx ssh mitogen importer ctypes ctypes is submodule of a package we did not load d mitogen ctx ssh mitogen importer ctypes endian is submodule of a package we did not load d mitogen ctx ssh mitogen importer master doesn t know ansible module utils locale d mitogen ctx ssh mitogen importer master doesn t know ansible module utils os d mitogen 
ctx ssh mitogen importer master doesn t know ansible module utils re d mitogen ctx ssh mitogen importer master doesn t know ansible module utils shlex d mitogen ctx ssh mitogen importer master doesn t know ansible module utils subprocess d mitogen ctx ssh mitogen importer master doesn t know ansible module utils sys d mitogen ctx ssh mitogen importer master doesn t know ansible module utils types d mitogen ctx ssh mitogen importer master doesn t know ansible module utils time d mitogen ctx ssh mitogen importer master doesn t know ansible module utils select d mitogen ctx ssh mitogen importer master doesn t know ansible module utils shutil d mitogen ctx ssh mitogen importer master doesn t know ansible module utils stat d mitogen ctx ssh mitogen importer master doesn t know ansible module utils tempfile d mitogen ctx ssh mitogen importer master doesn t know ansible module utils traceback d mitogen ctx ssh mitogen importer master doesn t know ansible module utils grp d mitogen ctx ssh mitogen importer master doesn t know ansible module utils pwd d mitogen ctx ssh mitogen importer master doesn t know ansible module utils platform d mitogen ctx ssh mitogen importer master doesn t know ansible module utils errno d mitogen ctx ssh mitogen importer master doesn t know ansible module utils datetime d mitogen ctx ssh mitogen importer master doesn t know ansible module utils collections d mitogen ctx ssh mitogen importer master doesn t know ansible module utils itertools d mitogen ctx ssh mitogen importer master doesn t know ansible module utils syslog d mitogen ctx ssh mitogen importer master doesn t know ansible module utils systemd d mitogen ctx ssh mitogen importer master doesn t know ansible module utils selinux d mitogen ctx ssh mitogen importer selinux sys is submodule of a package we did not load d mitogen ctx ssh mitogen importer selinux os is submodule of a package we did not load d mitogen ctx ssh mitogen importer selinux imp is submodule of a package we did not 
load d mitogen ctx ssh mitogen importer selinux shutil is submodule of a package we did not load d mitogen ctx ssh mitogen importer selinux errno is submodule of a package we did not load d mitogen ctx ssh mitogen importer selinux stat is submodule of a package we did not load d mitogen ctx ssh mitogen importer master doesn t know ansible module utils hashlib d mitogen ctx ssh mitogen importer master doesn t know ansible module utils ansible d mitogen ctx ssh mitogen importer master doesn t know ansible module utils ast d mitogen ctx ssh mitogen importer master doesn t know ansible module utils six moves d mitogen ctx ssh mitogen importer master doesn t know ansible module utils codecs d mitogen ctx ssh mitogen importer master doesn t know ansible module utils parsing ansible d mitogen ctx ssh ansible mitogen runner environmentfilewatcher u home user pam environment installed existing keys d mitogen ctx ssh ansible mitogen runner environmentfilewatcher u etc environment installed existing keys d mitogen ctx ssh mitogen replaced poller with epollpoller new readers writers old readers writers d mitogen ctx ssh mitogen router broker upgrade d mitogen idallocator router broker allocating d mitogen idallocator router broker allocating to context u ssh d mitogen ctx ssh mitogen mitogen fork stream u default connect d mitogen ctx ssh mitogen mitogen fork stream u fork connect child process stdin stdout d mitogen adding route to via mitogen ssh stream u ssh d mitogen router broker add route mitogen ssh stream u ssh d mitogen ctx ssh ansible mitogen target selected temp directory u home user ansible tmp from d mitogen ctx fork mitogen register context parent mitogen core stream parent d mitogen ctx fork mitogen connected to context parent my id is pid is d mitogen ctx fork mitogen recovered sys executable usr bin python d mitogen callchain context u ssh call async mitogen parent proxy connect method name u sudo name none kwargs kwargs u username u root u profiling false u 
sudo path none u python path unidirectional true u debug false u password none u sudo args u connect timeout d mitogen moduleresponder router broker on get module mitogen sudo d mitogen send load module mitogen ssh stream u ssh u mitogen sudo d mitogen ctx ssh mitogen importer master doesn t know mitogen optparse d mitogen ctx ssh mitogen mitogen sudo stream u default connect d mitogen ctx ssh mitogen importer encodings codec is submodule of a package we did not load d mitogen ctx ssh mitogen importer encodings is submodule of a package we did not load d mitogen ctx ssh mitogen sudo sudo command line d mitogen ctx ssh mitogen hybrid tty create child pid stdio tty cmd sudo u root h usr bin python c import codecs os sys codecs decode exec bjwy encode zip d mitogen ctx ssh mitogen mitogen sudo stream u local connect child process stdin stdout d mitogen ctx ssh mitogen sudo mitogen sudo stream u local received traceback most recent call last n d mitogen ctx ssh mitogen sudo mitogen sudo stream u local received file line in n d mitogen ctx ssh mitogen sudo mitogen sudo stream u local received file line in n d mitogen ctx ssh mitogen sudo mitogen sudo stream u local received file usr encodings zlib codec py line in zlib decode n d mitogen ctx ssh mitogen sudo mitogen sudo stream u local received d mitogen ctx ssh mitogen sudo mitogen sudo stream u local received output zlib decompress input n d mitogen ctx ssh mitogen sudo mitogen sudo stream u local received zlib d mitogen ctx ssh mitogen sudo mitogen sudo stream u local received d mitogen ctx ssh mitogen sudo mitogen sudo stream u local received error d mitogen ctx ssh mitogen sudo mitogen sudo stream u local received d mitogen ctx ssh mitogen sudo mitogen sudo stream u local received error while decompressing data incomplete or truncated stream d mitogen ctx ssh mitogen sudo mitogen sudo stream u local received n d mitogen ctx ssh mitogen mitogen sudo stream u local child process still alive sending sigterm d mitogen 
mitogen core stream u unix listener on disconnect d mitogen waker broker rfd wfd on disconnect d mitogen mitogen core stream u unix client on disconnect fatal failed msg error occurred on host eof on stream last bytes received u ck most recent call last n file line in n file line in n file usr encodings zlib codec py line in zlib decode n output zlib decompress input nzlib error error while decompressing data incomplete or truncated stream n play recap ok changed unreachable failed i mitogen mitogen service pool size th mitogen service pool worker channel or latch closed exitting none d mitogen waker broker rfd wfd on disconnect i mitogen mitogen service pool size th mitogen service pool worker channel or latch closed exitting none i mitogen mitogen service pool size th mitogen service pool worker channel or latch closed exitting none i mitogen mitogen service pool size th mitogen service pool worker channel or latch closed exitting none i mitogen mitogen service pool size th mitogen service pool worker channel or latch closed exitting none i mitogen mitogen service pool size th mitogen service pool worker channel or latch closed exitting none i mitogen mitogen service pool size th mitogen service pool worker channel or latch closed exitting none i mitogen mitogen service pool size th mitogen service pool worker channel or latch closed exitting none i mitogen mitogen service pool size th mitogen service pool worker channel or latch closed exitting none i mitogen mitogen service pool size th mitogen service pool worker channel or latch closed exitting none i mitogen mitogen service pool size th mitogen service pool worker channel or latch closed exitting none i mitogen mitogen service pool size th mitogen service pool worker channel or latch closed exitting none i mitogen mitogen service pool size th mitogen service pool worker channel or latch closed exitting none d mitogen mitogen ssh stream u ssh closing call function channel i mitogen mitogen service pool size 
th mitogen service pool worker channel or latch closed exitting none i mitogen mitogen service pool size th mitogen service pool worker channel or latch closed exitting none i mitogen mitogen service pool size th mitogen service pool worker channel or latch closed exitting none d mitogen on disconnect d mitogen mitogen parent diaglogstream fd u ssh on disconnect d mitogen ctx ssh mitogen mitogen fork stream u fork closing call function channel d mitogen ctx ssh mitogen waker broker rfd wfd on disconnect d mitogen ctx ssh mitogen on disconnect d mitogen ctx ssh mitogen on disconnect d mitogen ctx fork mitogen on shutdown msg message d mitogen mitogen parent diaglogstream fd u ssh on disconnect d mitogen mitogen ssh stream u ssh pid exited with return code d mitogen mitogen ssh stream u ssh on disconnect d mitogen mitogen ssh stream u ssh is gone propagating del route for set d mitogen router broker del route d mitogen router broker del route i ansible mitogen services dropping context u ssh due to disconnect of mitogen ssh stream u ssh
| 0
|
19,511
| 25,827,581,249
|
IssuesEvent
|
2022-12-12 14:01:58
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
Remote-WSL: bash terminal sudo without password does not work
|
bug WSL remote *out-of-scope confirmation-pending terminal-process
|
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->
- VSCode Version: 1.53.2
- Local OS Version: Windows 10 1803
- Remote OS Version: Ubuntu 18.04.5 LTS
- Remote Extension/Connection Type: WSL
Steps to Reproduce:
1. In WSL, edit `/etc/sudoers` via `sudo visudo`
2. Enable NOPASSWD by adding the following line `<user> ALL=(ALL:ALL) NOPASSWD:ALL` below `root ALL=(ALL:ALL) ALL` [1]
3. Save and close sudoers file. Now test sudo **in WSL terminal, it works without a password**. [2]
4. In Visual Studio Code, connect to WSL remote
5. Use <kbd>Ctrl</kbd>+<kbd>`</kbd> to open the integrated terminal (bash)
6. Try sudo in vscode integrated terminal. It prompts for password. Also `sudo -n` (non-interactive) will fail because
> sudo: a password is required
<!-- Check to see if the problem is general, with a specific extension, or only happens when remote -->
Does this issue occur when you try this locally?: Not applicable, Windows does not have sudo
Does this issue occur when you try this locally and all extensions are disabled?: Not applicable, Windows does not have sudo
[1]: So the relevant part from the final sudoers file looks like this:
```ssh-config
# User privilege specification
root ALL=(ALL:ALL) ALL
<user> ALL=(ALL:ALL) NOPASSWD:ALL
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
```
[2]: In both cases, `sudo -l` outputs the following:
```console
$ sudo -l
Matching Defaults entries for <user> on <host>:
env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin
User <user> may run the following commands on <host>:
(ALL : ALL) NOPASSWD: ALL
(ALL : ALL) ALL
```
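The passwordless-sudo check implied by the `sudo -l` listing above can be sketched as a small helper. This is an assumption-labeled illustration, not part of VS Code or WSL: it simply decides from captured `sudo -l` output whether a `NOPASSWD` entry is present.

```python
# Hypothetical helper (not a real VS Code/WSL API): report whether a
# captured `sudo -l` listing contains a NOPASSWD entry, i.e. whether
# non-interactive `sudo -n` should succeed for this user.
def passwordless_sudo(sudo_l_output: str) -> bool:
    return any("NOPASSWD:" in line for line in sudo_l_output.splitlines())

# Sample listing shaped like the one shown above (names elided).
listing = """User someone may run the following commands on host:
    (ALL : ALL) NOPASSWD: ALL
    (ALL : ALL) ALL"""
print(passwordless_sudo(listing))  # → True
```

Since both the WSL terminal and the VS Code integrated terminal report the same `sudo -l` output here, this check passes in both, which is what makes the password prompt in the integrated terminal surprising.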
Additional information:
I guess it could have more to do with the way the integrated terminal was initiated? Maybe pts vs tty? I don't know. I tried to troubleshoot sudo itself, but the log it outputs for a single command is 3000+ lines and not very helpful. So I'm at a loss here. Since it works in WSL but not in VSCode, I suppose it's a bug with VSCode. Please correct me if I'm wrong.
|
1.0
|
Remote-WSL: bash terminal sudo without password does not work - <!-- Please search existing issues to avoid creating duplicates. -->
<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->
- VSCode Version: 1.53.2
- Local OS Version: Windows 10 1803
- Remote OS Version: Ubuntu 18.04.5 LTS
- Remote Extension/Connection Type: WSL
Steps to Reproduce:
1. In WSL, edit `/etc/sudoers` via `sudo visudo`
2. Enable NOPASSWD by adding the following line `<user> ALL=(ALL:ALL) NOPASSWD:ALL` below `root ALL=(ALL:ALL) ALL` [1]
3. Save and close sudoers file. Now test sudo **in WSL terminal, it works without a password**. [2]
4. In Visual Studio Code, connect to WSL remote
5. Use <kbd>Ctrl</kbd>+<kbd>`</kbd> to open the integrated terminal (bash)
6. Try sudo in vscode integrated terminal. It prompts for password. Also `sudo -n` (non-interactive) will fail because
> sudo: a password is required
<!-- Check to see if the problem is general, with a specific extension, or only happens when remote -->
Does this issue occur when you try this locally?: Not applicable, Windows does not have sudo
Does this issue occur when you try this locally and all extensions are disabled?: Not applicable, Windows does not have sudo
[1]: So the relevant part from the final sudoers file looks like this:
```ssh-config
# User privilege specification
root ALL=(ALL:ALL) ALL
<user> ALL=(ALL:ALL) NOPASSWD:ALL
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
```
[2]: In both cases, `sudo -l` outputs the following:
```console
$ sudo -l
Matching Defaults entries for <user> on <host>:
env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin
User <user> may run the following commands on <host>:
(ALL : ALL) NOPASSWD: ALL
(ALL : ALL) ALL
```
Additional information:
I guess it could have more to do with the way the integrated terminal was initiated? Maybe pts vs tty? I don't know. I tried to troubleshoot sudo itself, but the log it outputs for a single command is 3000+ lines and not very helpful. So I'm at a loss here. Since it works in WSL but not in VSCode, I suppose it's a bug with VSCode. Please correct me if I'm wrong.
|
process
|
remote wsl bash terminal sudo without password does not work vscode version local os version windows remote os version ubuntu lts remote extension connection type wsl steps to reproduce in wsl edit etc sudoers via sudo visudo enable nopasswd by adding the following line all all all nopasswd all below root all all all all save and close sudoers file now test sudo in wsl terminal it works without a password in visual studio code connect to wsl remote use ctrl to open the integrated terminal bash try sudo in vscode integrated terminal it prompts for password also sudo n non interactive will fail because sudo a password is required does this issue occur when you try this locally not applicable windows does not have sudo does this issue occur when you try this locally and all extensions are disabled not applicable windows does not have sudo so the relevant part from the final sudoers file looks like this ssh config user privilege specification root all all all all all all all nopasswd all members of the admin group may gain root privileges admin all all all allow members of group sudo to execute any command sudo all all all all in both cases sudo l outputs the following console sudo l matching defaults entries for on env reset mail badpass secure path usr local sbin usr local bin usr sbin usr bin sbin bin snap bin user may run the following commands on all all nopasswd all all all all additional information i guess it could have more to do with the way integrated terminal was initiated maybe pts vs tty i don t know i tried to troubleshoot sudo itself but the log it outputs for a single command is lines and not very helpful so i m at a lost here since it works in wsl but not in vscode i suppose it s a bug with vscode please correct me if i m wrong
| 1
|
2,618
| 5,395,897,864
|
IssuesEvent
|
2017-02-27 10:03:47
|
sysown/proxysql
|
https://api.github.com/repos/sysown/proxysql
|
closed
|
Handle SQL_CALC_FOUND_ROWS
|
CONNECTION POOL QUERY PROCESSOR
|
`SQL_CALC_FOUND_ROWS` and `FOUND_ROWS()` should be tracked to identify when multiplexing needs to be disabled and re-enabled
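The tracking described above can be sketched as a per-connection state machine. This is a minimal illustration, not ProxySQL's actual implementation: it assumes multiplexing must stay off between a `SQL_CALC_FOUND_ROWS` query and the matching `FOUND_ROWS()` call, because `FOUND_ROWS()` only makes sense on the same backend connection.

```python
import re

# Hedged sketch (names are illustrative, not ProxySQL internals):
# toggle a connection's multiplexing flag based on the query text.
CALC_RE = re.compile(r"\bSQL_CALC_FOUND_ROWS\b", re.IGNORECASE)
FOUND_RE = re.compile(r"\bFOUND_ROWS\s*\(\s*\)", re.IGNORECASE)

def next_multiplex_state(query: str, multiplex_enabled: bool) -> bool:
    if CALC_RE.search(query):
        return False  # pin the backend connection until FOUND_ROWS() is read
    if FOUND_RE.search(query):
        return True   # result consumed; safe to multiplex again
    return multiplex_enabled

state = True
state = next_multiplex_state("SELECT SQL_CALC_FOUND_ROWS * FROM t LIMIT 10", state)
print(state)  # → False
state = next_multiplex_state("SELECT FOUND_ROWS()", state)
print(state)  # → True
```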
|
1.0
|
Handle SQL_CALC_FOUND_ROWS - `SQL_CALC_FOUND_ROWS` and `FOUND_ROWS()` should be tracked to identify when multiplexing needs to be disabled and re-enabled
|
process
|
handle sql calc found rows sql calc found rows and found rows should be tracked to identify when multiplexing needs to be disabled and re enabled
| 1
|
299,601
| 25,912,740,928
|
IssuesEvent
|
2022-12-15 15:07:46
|
KhronosGroup/Vulkan-ValidationLayers
|
https://api.github.com/repos/KhronosGroup/Vulkan-ValidationLayers
|
closed
|
Android VkPositiveLayerTest.GetTimelineSemThreadRace fails sporadically
|
CI/Tests
|
This seems to occur on **Android 12** devices; we've seen it on the **Pixel 6** on 20221017, and most recently on the **Galaxy S10** on 20221205. The effects look like:
```
ERROR: GalaxyS10-R28M31RTPWK: CRASH in package com.example.VulkanLayerValidationTests
ERROR: GalaxyS10-R28M31RTPWK: signal 11 (SIGSEGV) code 1 (SEGV_MAPERR) fault_addr 0x0
ERROR: GalaxyS10-R28M31RTPWK: cause null pointer dereference
ERROR: GalaxyS10-R28M31RTPWK: backtrace follows
#00 pc 00000000018abbc8 /data/app/~~y5sjP4F14zLBOJZghp-SKw==/com.example.VulkanLayerValidationTests-8K9NQXG5D8ZQltN4bUiu-Q==/lib/arm64/libVkLayer_khronos_validation.so (BuildId: 1def98342d1a100a734d044152399d9bce734eb3)
#01 pc 00000000018aaf40 /data/app/~~y5sjP4F14zLBOJZghp-SKw==/com.example.VulkanLayerValidationTests-8K9NQXG5D8ZQltN4bUiu-Q==/lib/arm64/libVkLayer_khronos_validation.so (BuildId: 1def98342d1a100a734d044152399d9bce734eb3)
#02 pc 00000000018a8bc4 /data/app/~~y5sjP4F14zLBOJZghp-SKw==/com.example.VulkanLayerValidationTests-8K9NQXG5D8ZQltN4bUiu-Q==/lib/arm64/libVkLayer_khronos_validation.so (BuildId: 1def98342d1a100a734d044152399d9bce734eb3)
#03 pc 00000000018ac9c4 /data/app/~~y5sjP4F14zLBOJZghp-SKw==/com.example.VulkanLayerValidationTests-8K9NQXG5D8ZQltN4bUiu-Q==/lib/arm64/libVkLayer_khronos_validation.so (BuildId: 1def98342d1a100a734d044152399d9bce734eb3)
#04 pc 00000000000b10e8 /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+64) (BuildId: 890b75bbb1eaed1155b47ab37b7aad70)
#05 pc 0000000000050a58 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+64) (BuildId: 890b75bbb1eaed1155b47ab37b7aad70)
INFO: GalaxyS10-R28M31RTPWK: pulling generated result files...
ERROR: test process 8687 on device GalaxyS10-R28M31RTPWK finished after 17.02 minutes (1020.97 seconds) with exit code 1
```
Looking at the appropriate output file shows the test that was running at the time of the crash:
```
[ RUN ] VkPositiveLayerTest.GetTimelineSemThreadRace
```
Until the test is repaired, it's been removed from the test list for these devices. When the test is repaired, the test `blacklist.json` file will have to be updated.
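The per-device exclusion flow described above might look like the following sketch. The real `blacklist.json` schema is not shown in this issue, so the `{"device": [test names]}` mapping here is an assumption for illustration only.

```python
import json

# Hypothetical blacklist format (assumed, not the project's actual schema):
# a mapping from device name to test names to skip on that device.
blacklist = json.loads(
    '{"GalaxyS10": ["VkPositiveLayerTest.GetTimelineSemThreadRace"]}'
)

def runnable_tests(device: str, all_tests: list) -> list:
    skipped = set(blacklist.get(device, []))
    return [t for t in all_tests if t not in skipped]

tests = ["VkPositiveLayerTest.GetTimelineSemThreadRace",
         "VkPositiveLayerTest.Basic"]
print(runnable_tests("GalaxyS10", tests))  # → ['VkPositiveLayerTest.Basic']
```

Once the race in the test is fixed, removing the entry from the mapping re-enables it everywhere, matching the "update `blacklist.json` when repaired" note above.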
|
1.0
|
Android VkPositiveLayerTest.GetTimelineSemThreadRace fails sporadically - This seems to occur on **Android 12** devices; we've seen it on the **Pixel 6** on 20221017, and most recently on the **Galaxy S10** on 20221205. The effects look like:
```
ERROR: GalaxyS10-R28M31RTPWK: CRASH in package com.example.VulkanLayerValidationTests
ERROR: GalaxyS10-R28M31RTPWK: signal 11 (SIGSEGV) code 1 (SEGV_MAPERR) fault_addr 0x0
ERROR: GalaxyS10-R28M31RTPWK: cause null pointer dereference
ERROR: GalaxyS10-R28M31RTPWK: backtrace follows
#00 pc 00000000018abbc8 /data/app/~~y5sjP4F14zLBOJZghp-SKw==/com.example.VulkanLayerValidationTests-8K9NQXG5D8ZQltN4bUiu-Q==/lib/arm64/libVkLayer_khronos_validation.so (BuildId: 1def98342d1a100a734d044152399d9bce734eb3)
#01 pc 00000000018aaf40 /data/app/~~y5sjP4F14zLBOJZghp-SKw==/com.example.VulkanLayerValidationTests-8K9NQXG5D8ZQltN4bUiu-Q==/lib/arm64/libVkLayer_khronos_validation.so (BuildId: 1def98342d1a100a734d044152399d9bce734eb3)
#02 pc 00000000018a8bc4 /data/app/~~y5sjP4F14zLBOJZghp-SKw==/com.example.VulkanLayerValidationTests-8K9NQXG5D8ZQltN4bUiu-Q==/lib/arm64/libVkLayer_khronos_validation.so (BuildId: 1def98342d1a100a734d044152399d9bce734eb3)
#03 pc 00000000018ac9c4 /data/app/~~y5sjP4F14zLBOJZghp-SKw==/com.example.VulkanLayerValidationTests-8K9NQXG5D8ZQltN4bUiu-Q==/lib/arm64/libVkLayer_khronos_validation.so (BuildId: 1def98342d1a100a734d044152399d9bce734eb3)
#04 pc 00000000000b10e8 /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+64) (BuildId: 890b75bbb1eaed1155b47ab37b7aad70)
#05 pc 0000000000050a58 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+64) (BuildId: 890b75bbb1eaed1155b47ab37b7aad70)
INFO: GalaxyS10-R28M31RTPWK: pulling generated result files...
ERROR: test process 8687 on device GalaxyS10-R28M31RTPWK finished after 17.02 minutes (1020.97 seconds) with exit code 1
```
Looking at the appropriate output file shows the test that was running at the time of the crash:
```
[ RUN ] VkPositiveLayerTest.GetTimelineSemThreadRace
```
Until the test is repaired, it's been removed from the test list for these devices. When the test is repaired, the test `blacklist.json` file will have to be updated.
|
non_process
|
android vkpositivelayertest gettimelinesemthreadrace fails sporadically this seems to occur on android devices we ve seen it on the pixel on and most recently on the galaxy on the effects look like error crash in package com example vulkanlayervalidationtests error signal sigsegv code segv maperr fault addr error cause null pointer dereference error backtrace follows pc data app skw com example vulkanlayervalidationtests q lib libvklayer khronos validation so buildid pc data app skw com example vulkanlayervalidationtests q lib libvklayer khronos validation so buildid pc data app skw com example vulkanlayervalidationtests q lib libvklayer khronos validation so buildid pc data app skw com example vulkanlayervalidationtests q lib libvklayer khronos validation so buildid pc apex com android runtime bionic libc so pthread start void buildid pc apex com android runtime bionic libc so start thread buildid info pulling generated result files error test process on device finished after minutes seconds with exit code looking at the appropriate output file shows the test that was running at the time of the crash vkpositivelayertest gettimelinesemthreadrace until the test is repaired it s been removed from the test list for these devices when the test is repaired the test blacklist json file will have to be updated
| 0
|
696
| 3,185,563,385
|
IssuesEvent
|
2015-09-28 06:18:21
|
e-government-ua/i
|
https://api.github.com/repos/e-government-ua/i
|
closed
|
Add a UTF-8 encoding declaration in the email body
|
hi priority In process of testing test version
|
Add an encoding declaration for the email-sending service task with attachments (#{MailTaskWithAttachments}).
The encoding declaration is missing from the message body, and Mozilla Thunderbird no longer reads the message.
The standard Android client also fails to decode this message.
|
1.0
|
Add a UTF-8 encoding declaration in the email body - Add an encoding declaration for the email-sending service task with attachments (#{MailTaskWithAttachments}).
The encoding declaration is missing from the message body, and Mozilla Thunderbird no longer reads the message.
The standard Android client also fails to decode this message.
|
process
|
add a utf encoding declaration in the email body add an encoding declaration for the email sending service task with attachments mailtaskwithattachments the encoding declaration is missing from the message body and mozilla thunderbird no longer reads the message the standard android client also fails to decode this message
| 1
|
21,816
| 6,223,240,077
|
IssuesEvent
|
2017-07-10 11:20:47
|
joomla/joomla-cms
|
https://api.github.com/repos/joomla/joomla-cms
|
closed
|
[4.0] Atum - Backend Template - control panel heading structure Accessibility
|
No Code Attached Yet
|
# Description of the issues: (WCAG 2.0 Level A and Level AA)
Back-end control panel heading structure accessibility Joomla! 4.0
There are some h5 headings before the first h1 heading in the code. Then there are some h5 headings after h1 heading. Those h5 heading after the h1 must be h2 headings.
And those h5 headings before h1 heading, I just donโt know why itโs there.
Note: Create a logical outline of the web page with headings. Headings are about logical structure, not visual effects. Use headings in proper hierarchical order. Headings should be used to create a logical outline of the page, to allow for quick navigation to page sections.
## Possible simplified solution
Make sure that you are using heading structure to provide information about the logical outline of the page, to allow for quick navigation to page sections
### How to test:
Use Mozilla Firefox extension HeadingsMap https://addons.mozilla.org/nl/firefox/addon/headingsmap/?src=search
### Information about this issue:
(SC 1.3.1) Info and Relationships: https://www.w3.org/TR/UNDERSTANDING-WCAG20/content-structure-separation-programmatic.html
(SC 1.3.2) Meaningful Sequence: https://www.w3.org/TR/UNDERSTANDING-WCAG20/content-structure-separation-sequence.html
(SC 2.4.6) Headings and Labels https://www.w3.org/TR/UNDERSTANDING-WCAG20/navigation-mechanisms-descriptive.html
@C-Lodder @yvesh @ciar4n @dgt41 @wilsonge @ylahav @brianteeman
|
1.0
|
[4.0] Atum - Backend Template - control panel heading structure Accessibility - # Description of the issues: (WCAG 2.0 Level A and Level AA)
Back-end control panel heading structure accessibility Joomla! 4.0
There are some h5 headings before the first h1 heading in the code. Then there are some h5 headings after h1 heading. Those h5 heading after the h1 must be h2 headings.
And those h5 headings before h1 heading, I just donโt know why itโs there.
Note: Create a logical outline of the web page with headings. Headings are about logical structure, not visual effects. Use headings in proper hierarchical order. Headings should be used to create a logical outline of the page, to allow for quick navigation to page sections.
## Possible simplified solution
Make sure that you are using heading structure to provide information about the logical outline of the page, to allow for quick navigation to page sections
### How to test:
Use Mozilla Firefox extension HeadingsMap https://addons.mozilla.org/nl/firefox/addon/headingsmap/?src=search
### Information about this issue:
(SC 1.3.1) Info and Relationships: https://www.w3.org/TR/UNDERSTANDING-WCAG20/content-structure-separation-programmatic.html
(SC 1.3.2) Meaningful Sequence: https://www.w3.org/TR/UNDERSTANDING-WCAG20/content-structure-separation-sequence.html
(SC 2.4.6) Headings and Labels https://www.w3.org/TR/UNDERSTANDING-WCAG20/navigation-mechanisms-descriptive.html
@C-Lodder @yvesh @ciar4n @dgt41 @wilsonge @ylahav @brianteeman
|
non_process
|
atum backend template control panel heading structure accessibility description of the issues wcag level a and level aa back end control panel heading structure accessibility joomla there are some headings before the first heading in the code then there are some headings after heading those heading after the must be headings and those headings before heading i just donโt know why itโs there note create a logical outline of the web page with headings headings are about logical structure not visual effects use headings in prober hierarchical order headings should be used to create a logical outline of the page to allow for quick navigation to page sections possible simplified solution make sure that you are using heading struture to provide information about logical outline of the page to allow for quick navigation to page sections how to test use mozilla firefox extension headingsmap information about this issue sc info and relationships sc meaningful sequence sc headings and labels c lodder yvesh wilsonge ylahav brianteeman
| 0
|
20,984
| 27,851,487,232
|
IssuesEvent
|
2023-03-20 19:03:50
|
googleapis/google-cloud-ruby
|
https://api.github.com/repos/googleapis/google-cloud-ruby
|
closed
|
Warning: a recent release failed
|
type: process
|
The following release PRs may have failed:
* #20909 - The release job failed -- check the build log.
* #20911 - The release job failed -- check the build log.
* #20907 - The release job failed -- check the build log.
* #20908 - The release job failed -- check the build log.
* #20919 - The release job is 'autorelease: tagged', but expected 'autorelease: published'.
|
1.0
|
Warning: a recent release failed - The following release PRs may have failed:
* #20909 - The release job failed -- check the build log.
* #20911 - The release job failed -- check the build log.
* #20907 - The release job failed -- check the build log.
* #20908 - The release job failed -- check the build log.
* #20919 - The release job is 'autorelease: tagged', but expected 'autorelease: published'.
|
process
|
warning a recent release failed the following release prs may have failed the release job failed check the build log the release job failed check the build log the release job failed check the build log the release job failed check the build log the release job is autorelease tagged but expected autorelease published
| 1
|
84,681
| 15,725,829,056
|
IssuesEvent
|
2021-03-29 10:28:36
|
AlexRogalskiy/charts
|
https://api.github.com/repos/AlexRogalskiy/charts
|
opened
|
CVE-2021-23362 (Medium) detected in hosted-git-info-2.8.8.tgz
|
security vulnerability
|
## CVE-2021-23362 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hosted-git-info-2.8.8.tgz</b></p></summary>
<p>Provides metadata and conversions from repository urls for Github, Bitbucket and Gitlab</p>
<p>Library home page: <a href="https://registry.npmjs.org/hosted-git-info/-/hosted-git-info-2.8.8.tgz">https://registry.npmjs.org/hosted-git-info/-/hosted-git-info-2.8.8.tgz</a></p>
<p>Path to dependency file: charts/package.json</p>
<p>Path to vulnerable library: charts/node_modules/npm/node_modules/hosted-git-info/package.json,charts/node_modules/hosted-git-info/package.json,charts/node_modules/conventional-changelog-writer/node_modules/read-pkg/node_modules/hosted-git-info/package.json,charts/node_modules/conventional-commits-parser/node_modules/read-pkg/node_modules/hosted-git-info/package.json</p>
<p>
Dependency Hierarchy:
- release-notes-generator-9.0.1.tgz (Root Library)
- conventional-changelog-writer-4.1.0.tgz
- meow-8.1.2.tgz
- read-pkg-up-7.0.1.tgz
- read-pkg-5.2.0.tgz
- normalize-package-data-2.5.0.tgz
- :x: **hosted-git-info-2.8.8.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/charts/commit/cd30f808c4bd539b7ef0abb715993411468ecfc9">cd30f808c4bd539b7ef0abb715993411468ecfc9</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package hosted-git-info before 3.0.8 are vulnerable to Regular Expression Denial of Service (ReDoS) via shortcutMatch in fromUrl().
<p>Publish Date: 2021-03-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23362>CVE-2021-23362</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/hosted-git-info/releases/tag/v3.0.8">https://github.com/npm/hosted-git-info/releases/tag/v3.0.8</a></p>
<p>Release Date: 2021-03-23</p>
<p>Fix Resolution: hosted-git-info - 3.0.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23362 (Medium) detected in hosted-git-info-2.8.8.tgz - ## CVE-2021-23362 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hosted-git-info-2.8.8.tgz</b></p></summary>
<p>Provides metadata and conversions from repository urls for Github, Bitbucket and Gitlab</p>
<p>Library home page: <a href="https://registry.npmjs.org/hosted-git-info/-/hosted-git-info-2.8.8.tgz">https://registry.npmjs.org/hosted-git-info/-/hosted-git-info-2.8.8.tgz</a></p>
<p>Path to dependency file: charts/package.json</p>
<p>Path to vulnerable library: charts/node_modules/npm/node_modules/hosted-git-info/package.json,charts/node_modules/hosted-git-info/package.json,charts/node_modules/conventional-changelog-writer/node_modules/read-pkg/node_modules/hosted-git-info/package.json,charts/node_modules/conventional-commits-parser/node_modules/read-pkg/node_modules/hosted-git-info/package.json</p>
<p>
Dependency Hierarchy:
- release-notes-generator-9.0.1.tgz (Root Library)
- conventional-changelog-writer-4.1.0.tgz
- meow-8.1.2.tgz
- read-pkg-up-7.0.1.tgz
- read-pkg-5.2.0.tgz
- normalize-package-data-2.5.0.tgz
- :x: **hosted-git-info-2.8.8.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/charts/commit/cd30f808c4bd539b7ef0abb715993411468ecfc9">cd30f808c4bd539b7ef0abb715993411468ecfc9</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package hosted-git-info before 3.0.8 are vulnerable to Regular Expression Denial of Service (ReDoS) via shortcutMatch in fromUrl().
<p>Publish Date: 2021-03-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23362>CVE-2021-23362</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/hosted-git-info/releases/tag/v3.0.8">https://github.com/npm/hosted-git-info/releases/tag/v3.0.8</a></p>
<p>Release Date: 2021-03-23</p>
<p>Fix Resolution: hosted-git-info - 3.0.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in hosted git info tgz cve medium severity vulnerability vulnerable library hosted git info tgz provides metadata and conversions from repository urls for github bitbucket and gitlab library home page a href path to dependency file charts package json path to vulnerable library charts node modules npm node modules hosted git info package json charts node modules hosted git info package json charts node modules conventional changelog writer node modules read pkg node modules hosted git info package json charts node modules conventional commits parser node modules read pkg node modules hosted git info package json dependency hierarchy release notes generator tgz root library conventional changelog writer tgz meow tgz read pkg up tgz read pkg tgz normalize package data tgz x hosted git info tgz vulnerable library found in head commit a href vulnerability details the package hosted git info before are vulnerable to regular expression denial of service redos via shortcutmatch in fromurl publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution hosted git info step up your open source security game with whitesource
| 0
|
3,157
| 6,206,689,560
|
IssuesEvent
|
2017-07-06 19:00:15
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
opened
|
Remove use of NtQuerySystemInformation for UWP
|
area-System.Diagnostics.Process
|
We can't use NtQuerySystemInformation in an app. It will fail WACK. At this point, we should stop using it in our UWP binaries. That means APIs that enumerate and examine arbitrary processes should be stubbed to throw PNSE with a nice message.
Please factor the process class so that it doesn't appear in the UAP build and so you can remove it from src\System.Diagnostics.Process\src\PinvokeAnalyzerExceptionList.analyzerdata.uap
|
1.0
|
Remove use of NtQuerySystemInformation for UWP - We can't use NtQuerySystemInformation in an app. It will fail WACK. At this point, we should stop using it in our UWP binaries. That means APIs that enumerate and examine arbitrary processes should be stubbed to throw PNSE with a nice message.
Please factor the process class so that it doesn't appear in the UAP build and so you can remove it from src\System.Diagnostics.Process\src\PinvokeAnalyzerExceptionList.analyzerdata.uap
|
process
|
remove use of ntquerysysteminformation for uwp we can t use ntquerysysteminformation in an app it will fail wack at this point we should stop using it in our uwp binaries that means api that enumerates and examines arbitrary processes should be stubbed to thrown pnse with a nice message please factor the process class so that it doesn t appear in the uap build and so you can remove it from src system diagnostics process src pinvokeanalyzerexceptionlist analyzerdata uap
| 1
|
5,872
| 8,692,101,129
|
IssuesEvent
|
2018-12-04 04:37:54
|
GSA/digitalgov.gov
|
https://api.github.com/repos/GSA/digitalgov.gov
|
closed
|
Keep Event page descriptions consistent
|
event process
|
We have three places where information from event pages are public.
- DigitalGov Event Page
- Eventbrite
- YouTube
**How can we keep text consistent between all three platforms?**
**How can we make this process easier?**
---
## it should include โฆ
- [ ] an easy process to keep text consistent between all three platforms
|
1.0
|
Keep Event page descriptions consistent - We have three places where information from event pages are public.
- DigitalGov Event Page
- Eventbrite
- YouTube
**How can we keep text consistent between all three platforms?**
**How can we make this process easier?**
---
## it should include โฆ
- [ ] an easy process to keep text consistent between all three platforms
|
process
|
keep event page descriptions consistent we have three places where information from event pages are public digitalgov event page eventbrite youtube how can we keep text consistent between all three platforms how can we make this process easier it should include โฆ an easy process to keep text consistent between all three platforms
| 1
|
17,694
| 23,544,231,749
|
IssuesEvent
|
2022-08-20 21:56:53
|
anitsh/til
|
https://api.github.com/repos/anitsh/til
|
opened
|
Domain-Driven Design, Wardley Mapping and Team Topologies
|
ddd team process teamtopologies
|
https://www.infoq.com/podcasts/ddd-wardley-mapping-team-topologies
A system is more than the sum of its parts; it's a product of their interactions. - Dr. Russell Ackoff
The way the parts fit together determines the performance of a system, not how they perform taken separately.
When we are building systems in general, we are faced with the challenges of building the right thing and building the thing right.
Building the right thing addresses effectiveness: how aligned is our solution with the users' and business needs? Are we creating value for our customers? Have we understood the problem, and do we share a common understanding?
Building the thing right focuses on efficiency, for example the efficiency of engineering practices. It is crucial not only to generate value but also to be able to deliver it: how fast can we deliver changes, and how fast and easily can we make a change effective and adapt to new circumstances? The one doesn't go without the other, but as Dr. Russell Ackoff pointed out, doing the wrong thing right is not nearly as good as doing the right thing wrong. So, by considering the whole, with effectiveness and efficiency in mind to build the right thing right, we need a holistic perspective to build adaptive systems.
One approach out of many is combining three perspectives: business strategy with Wardley Mapping, software architecture and design with Domain-Driven Design, and team organization with Team Topologies, in order to build, design, and evolve adaptive socio-technical systems that are optimized for fast flow of change.
Building the right thing and then building the thing right. A trade-off and understanding of what you're doing and how you're thinking about these problems.
Where do we start? It depends on the context; we can start with any of the perspectives.
Start with analyzing the team situation with regard to team cognitive load and the delivery bottlenecks the teams are currently facing. What kind of problems do they have right now? Are they dealing with high team cognitive load because they have to work with a big ball of mud, a legacy system that evolved over time? Are they organized as functional silo teams where handovers are involved? Are these large teams, or do the teams need to communicate and coordinate with each other whenever they want to implement and deliver changes? Address these questions first, analyzing the current situation of your teams.
How do you get people to understand the cognitive load that they're under? A lot of teams have been operating in a certain way for so long, they don't even realize that their cognitive load is so high that they don't know of any other way. They don't know how to adapt to that. How do you have that conversation and get people to understand that the cognitive load that they're seeing is actually a detriment to flow?
How much time do they need to understand a piece of code? How long does it take to onboard new team members? How long does it take to make a change effective and implement changes? (This also comes down to software quality and testing practices in the teams.)
Are there side effects involved that cannot be easily anticipated? This brings us back to the Wardley Map itself: what kind of components are the teams responsible for? A Wardley Map has the value chain mapped to the Y axis (your user needs, and the components that fulfill those needs directly or facilitate other components in the value chain), and the evolution axis going from left to right: Genesis, custom built, product and rental, and commodity.
The further you are on the left spectrum of your Wardley Map, the more you are dealing with high uncertainty and an unclear path to action. With components located on the right spectrum, we are dealing with mature, stable components with a clearer path to action.
And if your teams are responsible for components that are located on the left spectrum of your Wardley Map, there is potentially high cognitive load involved, because you need to experiment, explore, and discover more, applying emergent and novel practices instead of the best and good practices that apply on the right.
Being able to visualize the value chain on a Wardley Map lets you say that things on the left, towards Genesis, require more cognitive load to keep in your mind, while things towards the commodity side on the right require less.
Genesis, a beginning or origin of anything.
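The evolution axis and the cognitive-load heuristic above can be sketched as a small data model. This is a toy sketch: the stage thresholds and the load formula are assumptions for illustration, not part of Wardley's method itself.

```python
from dataclasses import dataclass

# Toy sketch of a Wardley Map component. The stage thresholds and the
# cognitive-load formula below are illustrative assumptions.

EVOLUTION_STAGES = ["genesis", "custom-built", "product/rental", "commodity"]

@dataclass
class Component:
    name: str
    evolution: float          # 0.0 = Genesis (left) .. 1.0 = commodity (right)
    fulfills_user_need: bool  # directly fulfills a user need (top of value chain)

    @property
    def stage(self) -> str:
        # Map the continuous evolution axis onto the four named stages.
        idx = min(int(self.evolution * len(EVOLUTION_STAGES)),
                  len(EVOLUTION_STAGES) - 1)
        return EVOLUTION_STAGES[idx]

def estimated_cognitive_load(components: list[Component]) -> float:
    """Crude heuristic: the further left a team's components sit, the more
    experimentation and discovery they imply, hence more cognitive load."""
    if not components:
        return 0.0
    return sum(1.0 - c.evolution for c in components) / len(components)

course_engine = Component("course content engine", evolution=0.15, fulfills_user_need=True)
hosting = Component("on-premises hosting", evolution=0.9, fulfills_user_need=False)

print(course_engine.stage)  # genesis
print(hosting.stage)        # commodity
print(round(estimated_cognitive_load([course_engine, hosting]), 3))  # 0.475
```

A team owning mostly left-of-map components would score high on this heuristic, which matches the point above: Genesis work demands more exploration and carries more cognitive load.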
Step two: create a Wardley Map of your current situation.
Look at the current landscape you are operating in: who are your users, and what are their needs? What are the components that fulfill these user needs? Then identify the streams of change, because optimizing for fast flow of change requires knowing where the most important changes in your system occur. There can be different types of streams of change; the Wardley Map visualizes activity-oriented streams of change, represented by the user needs. So if we look at the user needs, these are the potential streams of change we need to focus on when we want to optimize for fast flow of change.
First identify the streams of change (the user needs), using the Wardley Map as a foundation for future discussions on how to evolve our system. Then address the problem domain; that is where we land in Domain-Driven Design. The users and user needs of a Wardley Map usually represent the anchor of your map, and they also constitute the problem domain in regard to Domain-Driven Design.
Then analyze the problem domain and distill it into smaller parts, the subdomains. Different subdomains have different value to the business: core, supporting, and generic. Some are more important to the business than others. Identify the core subdomains, those that provide competitive advantage, tend to change often, and are quite complex. Focus on building these parts of the system in-house, because these are where we want to differentiate ourselves.
This requires the most strategic investment.
This gives us a combined view on the Wardley Map. The core-domain aspects, located near Genesis, need to be built in-house. The supporting and generic subdomains move toward the right spectrum of the Wardley Map: buy an off-the-shelf product, use open source software, or outsource, as they are commodity and utility.
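The combined subdomain view can be expressed as a simple lookup from subdomain type to sourcing decision. The exact wording of each option is an illustrative assumption, not a prescription:

```python
# Illustrative lookup mapping DDD subdomain types onto sourcing decisions,
# following the combined Wardley/DDD view. The option strings are assumptions.

SOURCING = {
    "core": "build in-house",                          # competitive advantage, near Genesis
    "supporting": "buy off-the-shelf or use open source",
    "generic": "outsource or consume as commodity/utility",
}

def sourcing_decision(subdomain_type: str) -> str:
    # Reject anything outside the three DDD subdomain types.
    if subdomain_type not in SOURCING:
        raise ValueError(f"unknown subdomain type: {subdomain_type}")
    return SOURCING[subdomain_type]

print(sourcing_decision("core"))     # build in-house
print(sourcing_decision("generic"))  # outsource or consume as commodity/utility
```

In practice the decision also depends on where the component sits on the evolution axis, but the core/supporting/generic split gives a useful first cut.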
Bringing these together.
How do you deal with different sizes when you don't necessarily have huge teams to be able to handle different areas within the platform?
A fictitious example: an online school for uni students, which was at that point a monolithic big ball of mud, run and supported by functional silo teams and running on top of on-premises infrastructure components.
Do you just take this big ball of mud and put it right in the middle of a Wardley Map? How do we start to tease apart this big ball of mud?
Start to map the current state of the online school, beginning with the user needs. The users could be the teachers and the uni students. Teachers need to create course content and evaluate student progress; students would like to study courses, request and receive help, and receive evaluation feedback.
So basically a full slice of what the user is trying to accomplish.
Components that fulfill a user need directly sit at the top of the value chain, on the Y axis of our Wardley Map.
Then reflect on where our monolith is right now and derive the value chain of the current state. Start with one component, even knowing that it's too large, and decompose later on; at first, bring in the big ball of mud as one component.
Via Domain-Driven Design, split it: decompose it into modular components within bounded contexts.
How should we size things on the value chain in Wardley Map?
How do you pick the right size?
Put that big ball of mud in there, then move into the domains and start to break out the bounded contexts within it. That way you can focus on each individual piece.
A Wardley Map is a continuous-improvement artifact: it will change over time along with the defined scope. Put the monolith in the map as one component, then decompose it in the next iteration or create a new map.
One heuristic: if one component is too large to be handled by a small cross-functional team, that is an indicator that the component should be decomposed.
Identify where we make high-level design decisions (i.e., move to the solution space of strategic design) and decompose our big ball of mud into modular components.
There we can blend in other techniques from Domain-Driven Design, such as event storming, domain storytelling, user story mapping, example mapping, and so on, to have a conversation about which behavior should sit together. A bounded context forms a boundary around a domain model, the domain model reflecting the business rules of a specific area of your system, and bounded contexts can later also become suitable team boundaries. That is where Team Topologies, the next perspective, blends in when making team decisions.
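A minimal sketch of what decomposing the online school into bounded contexts might look like. All class and field names here are invented for illustration; the point is simply that each context keeps its own model of the same real-world concept:

```python
from dataclasses import dataclass

# Two hypothetical bounded contexts for the online-school example.
# The same real-world "student" is modelled differently in each context.

# Course Catalog context: a student is just someone enrolled in courses.
@dataclass
class CatalogStudent:
    student_id: str
    enrolled_course_ids: list[str]

# Evaluation context: the same person, but modelled around grading.
@dataclass
class EvaluationStudent:
    student_id: str
    submissions: dict[str, float]  # assignment id -> score in [0, 1]

    def average_score(self) -> float:
        if not self.submissions:
            return 0.0
        return sum(self.submissions.values()) / len(self.submissions)

# The shared identity is the only thing crossing the context boundary.
alice_catalog = CatalogStudent("s-1", ["math-101"])
alice_eval = EvaluationStudent("s-1", {"hw1": 0.8, "hw2": 1.0})
print(alice_eval.average_score())  # ~0.9
```

Each bounded context can then be owned end to end by one small cross-functional team, which is exactly how DDD boundaries become Team Topologies team boundaries.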
Team Topologies aims to establish cross-functional, stream-aligned teams that take end-to-end responsibility, avoiding handovers between separate frontend and backend development. Build self-sufficient teams, supported by enabling teams (helping, for example, through mobbing), so that they can focus on a steady flow of feature deliveries and a flow of changes autonomously, requesting help only in specific circumstances.
How do you leverage the stream-aligned teams with platform teams, for example, to get started?
If you have a really small organization, you can still apply it: establish a temporary task force that can provide a thinnest viable platform.
First provide a platform that is just big enough to fulfill the consumers' needs and no bigger than needed. It could start with "How To" documentation: how to provision your infrastructure in the cloud ecosystem, how to use the serverless framework, etc.
The documentation can then also describe standards and best practices.
Later on, it can evolve into a digital platform with self-service offerings, APIs, and tools that the stream-aligned teams can easily consume. But it does not have to be a full-blown digital platform from the very beginning; it just has to be big enough to fulfill the needs of its consumers.
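A toy sketch of how such a platform could grow from documentation into self-service. The service names and the API shape are assumptions made up for this example; a real platform would wrap a cloud provider or internal tooling:

```python
# Stage 1: the thinnest viable platform is just curated documentation.
PLATFORM_DOCS = {
    "provision-infra": "How to provision infrastructure in our cloud account",
    "serverless": "How to use the serverless framework",
}

# Stage 2: the same needs served as a self-service API, so stream-aligned
# teams can consume the platform without waiting on a ticket queue.
def provision(service: str, team: str) -> dict:
    if service not in PLATFORM_DOCS:
        raise ValueError(f"platform does not offer: {service}")
    # In a real platform this would call cloud APIs; here we just record intent.
    return {"service": service, "owner": team, "status": "provisioned"}

print(provision("serverless", "team-evaluation")["status"])  # provisioned
```

The key design choice is that the platform's surface only grows when a consumer need appears, keeping it "just big enough".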
How do you walk the line between standards and standardization?
We want to have high standards, but we don't want standardization to become a bottleneck.
Making something mandatory to use is potentially a bottleneck: we are blocking the teams. Enable the stream-aligned teams to focus on fast flow of change, so that they can produce a steady flow of feature deliveries.
Support the stream-aligned teams so that they can learn, without holding them back. Instead of telling them what to use, let them figure it out. Autonomy.
Look at what enables the fast flow of change: DevOps.
Start off by talking to the team; you can find out what their friction points are.
How do you get started with a process like this with the team?
Use Domain-Driven Design to dive deeper, for example with event storming.
Use the Inverse Conway Maneuver to enable fast flow of change.
How do we get started? Do we just like come in Monday morning and say, "All right, we're breaking up into three teams. We've got a platform team, we've got two enabling teams and go."
How do we talk to people about day one, and day two, and then day three, when it comes to these strategies?
Make the change transparent the entire organization and well communicated.
What are the team types? What is the interaction mode? When does it make sense that teams are collaborating? When does it make sense to provide excessive service?
Most fear that people have is about change, that they get laid off. So whenever there's a reorganization in place, because we are transforming to a team of topologies (team of types) and their interaction modes, there will be reteaming involved.
https://www.amazon.com/Dynamic-Reteaming-Wisdom-Changing-Teams/dp/1492061298
Start with forming one team first on site, for example, by isolating one team. For example, while forming first your platform team on a site from members of the back end and the infrastructure team to migrate to the Cloud, let them discover and assess infrastructure options. They don't have to follow the existing processes. They are there to discover and explore new cloud strategies for example, and cloud options that are available and they're suitable for their first bounded context that they would like to extract.
Who will be member of which teams? Who decides?
There are various ways.
It could be from the top down, from management. It could also be self-selecting process as well, where we let your members of your current teams decide in what teams they would like to become member of. There are different levels involve, and also that you can also calibrate when we form a team, when you reteam. We calibrate on different aspects like how we would like to work together, what we like to do, peer programming or more programming.
What is your mission of your team that you introduce new team members, that they can onboard very easily. Have different team calibration sessions that help you to bring the journey forward towards team topologies, team types.
There are different aspects where we can gradually transform existing team incrementally into team topologies team type.
With the highest level of self-selection then, letting the team members select themselves. And let the management can decide input from the teams and they decide.
How do you leverage contractors delivering components for the stream-aligned teams?
Team Topologies says that you should aim for long life stable teams, but this does not mean that they need to be static. So, team members can switch over the time, either freelancers or contractors from outside.
Heidi Helfand also recommends to enable or to provide that team members can switch teams because that is one of the opportunities to retain your talents in-house. For personal growth, switch teams because if they can't grow within your organization, they will find growth opportunities outside of your organization.
It's the team structure that's long live, not the people on the team.
We talked about cognitive load in context of Wardley Maps. We talked about platform and streamlined teams. We talked about standard and standardization. We talked about Domain-Driven Design and then diving deeper into some of the components once you've identified some areas that you want to break out.
What kind of benefits brings Wardley Map?
It helps you to visualize potential instability in an associated risk.
For example, if you have a value chain where you have volatile components, for example, bounded context of your core domain are volatile because they're changing a lot and they have embodied quite high level of complexities because it's the one that provides competitive advantage.
And so if you build these volatile components on top of mature and stable components, that is reflecting a stable system, but if you switch it around, if you have stable components that build up on volatile components, then it is a potential candidate for instabilities and associated risk because you have a stable component, which is expected to be stable and it's built up on volatile components and all these introduces new changes. And you have to keep the stable component up to date, or you have to keep the stable component stable. That shifts your focus on handling the source of risks.
What are some the patterns seen when parts of the system are unstable?
It creates awareness that we are building stable components on top of volatile components and that could be a potential problem.
Maybe it's on purpose. Maybe to discover a new technologies and that new technology is in Genesis and is custom build. Bring in are the efficiency gaps like there is a component in the market that is more evolved in reciting, for example, in commodity and utility. This gives us an indication, a hint that we could be less efficient, because we are building on less efficient components that are then reciting on the left part of your Wardley Map.
When we talk about from Genesis to commodity, It's like a whole spectrum of things to consider on that left to right. How do you talk to people about that?
It's more about the characteristics and general properties of the components of the Wardley Map. It depends on the market perception and the user perception. Failure occurring in a stable brand and new product where failure in stable would be surprising and on the other hand, it would be expected in the new.
|
1.0
|
Domain-Driven Design, Wardley Mapping and Team Topologies - https://www.infoq.com/podcasts/ddd-wardley-mapping-team-topologies
A system is more than the sum of its parts. It's a product of their interaction. - Dr. Russell Ackoff
The way the parts fit together determines the performance of a system, not how they perform taken separately.
When we are building systems in general, we are faced with the challenges of building the right thing and building the thing right.
Building the right thing addresses effectiveness, and addresses questions such as: how aligned is our solution to the users' and business needs? Are we creating value for our customers? Have we understood the problem, and do we share a common understanding?
Building the thing right focuses on efficiency, for example the efficiency of engineering practices. It is not only crucial to generate value, but also to be able to deliver that value: how fast can we deliver changes, and how fast and easily can we make a change effective and adapt to new circumstances? The one doesn't go without the other, but as Dr. Russell Ackoff pointed out, doing the wrong thing right is not nearly as good as doing the right thing wrong. So, by considering the whole, with both effectiveness and efficiency in mind, we need a holistic perspective to build adaptive systems and to build the right thing right.
One approach out of many combines three perspectives: business strategy with Wardley Mapping, software architecture and design with Domain-Driven Design, and team organization with Team Topologies, in order to build, design, and evolve adaptive socio-technical systems that are optimized for fast flow of change.
Building the right thing and then building the thing right. A trade-off and understanding of what you're doing and how you're thinking about these problems.
Where do we start? It depends on the context. We can start with any of the perspectives.
Start by analyzing the team situation with regard to team cognitive load and the delivery bottlenecks the teams are currently facing. What kinds of problems do they have right now? Are they dealing with high team cognitive load because they have to deal with a big ball of mud, a legacy system that evolved over time? Are they organized as functional silo teams where handovers are involved? Are these large teams, or do the teams need to communicate and coordinate with each other when they want to implement and deliver changes? Address these questions first, analyzing the current situation of your teams.
How do you get people to understand the cognitive load that they're under? A lot of teams have been operating in a certain way for so long that they don't even realize how high their cognitive load is; they don't know any other way, and they don't know how to adapt. How do you have that conversation and get people to understand that the cognitive load they're experiencing is actually a detriment to flow?
How much time do they need to understand a piece of code? How long does it take to onboard new team members? How long does it take to make a change effective and to implement changes (which also relates to software quality and testing)?
Are there side effects involved that cannot be easily anticipated? This brings us back to the Wardley Map itself: what kind of components are the teams responsible for? A Wardley Map has the value chain mapped to the Y axis (your user needs and the components that fulfill those needs directly or facilitate other components in the value chain) and the evolution axis going from left to right: Genesis, custom built, product and rental, and commodity.
The further left a component sits on your Wardley Map, the more you are dealing with high uncertainty and an unclear path to action. With components located on the right of the spectrum, you are dealing with mature, stable components and a clearer path to action.
If your teams are responsible for components located on the left of your Wardley Map, there is potentially high cognitive load involved, because you need to experiment more, explore more, and discover more, applying emergent and novel practices instead of the best and good practices available on the right.
By visualizing the value chain on a Wardley Map, you can say: things on the left, toward Genesis, require more cognitive load to keep in your mind; things toward the commodity side on the right require noticeably less.
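As a rough illustration (my own sketch, not from the podcast), a map component and its position on the evolution axis can be modeled in a few lines, with a crude numeric proxy for the uncertainty, and hence cognitive load, a component imposes. The component names and the scale are invented:

```python
from dataclasses import dataclass, field

# Evolution stages along the Wardley Map X axis, from left
# (high uncertainty) to right (mature, well understood).
STAGES = ["genesis", "custom_built", "product", "commodity"]

@dataclass
class Component:
    name: str
    stage: str                          # one of STAGES
    depends_on: list = field(default_factory=list)

def uncertainty(component: Component) -> float:
    """Crude proxy: 1.0 at Genesis, 0.0 at commodity. The further
    left a component sits, the higher the uncertainty and the likely
    cognitive load for the owning team."""
    index = STAGES.index(component.stage)
    return 1.0 - index / (len(STAGES) - 1)

recommender = Component("course-recommender", "genesis")
hosting = Component("cloud-hosting", "commodity")
print(uncertainty(recommender))  # 1.0
print(uncertainty(hosting))      # 0.0
```

The real value of the map is in the conversation it enables, not in these numbers; the sketch only makes the left-to-right gradient concrete.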
Genesis, a beginning or origin of anything.
Step two: create a Wardley Map of your current situation.
Look at the current landscape you are operating in: who are your users, what are their needs, and what are the components that fulfill those needs? Then identify the streams of changes. Optimizing for fast flow of change requires knowing where the most important changes in your system occur. There can be different types of streams of changes, and on the Wardley Map the activity-oriented streams of changes are represented by the user needs. So, if we look at the user needs, these are the potential streams of changes that we need to focus on when we want to optimize for fast flow of change.
First identify the streams of changes, the user needs, using the Wardley Map as a foundation for future discussions on how to evolve our system. Then address the problem domain; that is where we land in Domain-Driven Design. The users and user needs of a Wardley Map usually represent the anchor of the map, and they also constitute the problem domain in terms of Domain-Driven Design.
Then analyze the problem domain and distill it into smaller parts, the subdomains. Different subdomains have different value to the business, i.e. core, supporting, and generic; some are more important to the business than others. Identify the core subdomains: those that provide competitive advantage, tend to change often, and are quite complex. Focus on building these parts of the system in-house, because those are the ones with which we want to differentiate ourselves.
This requires the most strategic investment.
This gives us a combined view on the Wardley Map with the core-domain-related aspects: components located near Genesis need to be built in-house. Supporting and generic subdomains sit toward the right of the spectrum of the Wardley Map; buy an off-the-shelf product, use open source software, or outsource, as they are commodity and utility.
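The mapping from subdomain type to sourcing decision described above can be captured in a small lookup. A sketch with invented names, not an established API:

```python
def sourcing_strategy(subdomain_type: str) -> str:
    """Map a DDD subdomain type to a sourcing decision, following
    the heuristic above: core subdomains near Genesis are built
    in-house, while supporting and generic ones lean on the market."""
    strategies = {
        "core": "build in-house",               # competitive advantage
        "supporting": "buy or outsource",       # necessary, not differentiating
        "generic": "buy off-the-shelf or use open source",  # commodity/utility
    }
    return strategies[subdomain_type]

print(sourcing_strategy("core"))  # build in-house
```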
Bringing these together.
How do you deal with different sizes when you don't necessarily have huge teams to be able to handle different areas within the platform?
A fictitious example: an online school for uni students, which at that stage was a monolithic big ball of mud, run and supported by functional silo teams and running on top of on-premises infrastructure components.
Do you just take this big ball of mud and put it right in the middle of a Wardley Map? How do we start to tease apart that big ball of mud?
Start to map the current state of the online school, beginning with the user needs. The users could be the teachers and the uni students: teachers with the needs of creating course content and evaluating student progress, and students who would like to study courses, request and receive help, and receive evaluation feedback.
So basically a full slice of what the user is trying to accomplish.
The components that fulfill the user needs directly sit at the top of the value chain, on the Y axis of our Wardley Map.
Then reflect on where our monolith is right now and derive the value chain of the current state. Start with one component, even knowing that it's too large; it can be decomposed later on. At first, bring this big ball of mud in as one component.
Via Domain-Driven Design, split it: decompose it into modular components within bounded contexts.
How should we size things on the value chain in Wardley Map?
How do you pick the right size?
Put that big ball of mud in there, and then move into the domains and start to break out the bounded contexts within that big ball of mud. That way you can focus on each individual piece within it.
A Wardley Map is a kind of continuous improvement; it will change over time as the scope is refined. Put the monolith in the map as one component, and then decompose it in the next iteration or create a new map.
One heuristic: if a component is too large to be handled by a small cross-functional team, that is an indicator that the component should be decomposed further.
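Combining that heuristic with the earlier cognitive-load framing, a deliberately crude check can flag when a team owns too much. The weights and the budget here are entirely my own assumptions for illustration:

```python
# Assumed weights: components further left on the evolution axis
# impose more cognitive load on the owning team.
LOAD = {"genesis": 4, "custom_built": 3, "product": 2, "commodity": 1}

def team_load(owned_stages):
    """Sum the assumed load of the components a team owns,
    given the evolution stage of each owned component."""
    return sum(LOAD[stage] for stage in owned_stages)

def over_budget(owned_stages, budget=8):
    """Flag a team whose combined component load exceeds a budget,
    suggesting the components (or the team's scope) should be split."""
    return team_load(owned_stages) > budget

print(over_budget(["genesis", "genesis", "custom_built"]))  # True
print(over_budget(["commodity", "product"]))                # False
```

A real assessment would come from talking to the team about friction points, as discussed below; the point of the sketch is only that load concentrates on the left of the map.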
Identify where we make high-level design decisions (i.e. move to the solution space of strategic design) and decompose our big ball of mud into modular components.
There we can blend in other techniques from Domain-Driven Design, such as event storming, domain storytelling, user story mapping, example mapping, and so on, to have a conversation about which behavior shall sit together. A bounded context forms a boundary around a domain model (the domain model reflecting the business rules of a specific area of your system), and bounded contexts can later also be suitable team boundaries. That is where we blend in the next perspective, Team Topologies, and make team decisions.
Team Topologies aims to establish cross-functional, stream-aligned teams that take end-to-end responsibility, avoiding handovers between separated frontend and backend development. They are supported by enabling teams that help them. The goal is to build self-sufficient teams that can focus on a steady flow of feature deliveries, a flow of changes, autonomously, and request help only in specific circumstances.
How do you leverage the stream-aligned teams together with platform teams, for example, to be able to get started?
If you have a really small organization, you can still apply it: establish a temporary task force that can provide a thinnest viable platform.
First provide a platform that is big enough to fulfill the consumers' needs and no bigger than needed. It could start with "How To" documentation: how to provision your infrastructure in a cloud ecosystem, how to use the serverless framework, etc.
With documentation, it can then also describe standards and best practices.
Later on, it can evolve into a digital platform with self-service offerings, APIs, and tools that the stream-aligned teams can easily consume. It does not necessarily have to be a full-blown digital platform from the very beginning; just big enough to fulfill the needs of its consumers.
How do you walk the line between standards and standardization?
We want to have high standards, but we don't want standardization to become a bottleneck.
Making something mandatory to use is potentially a bottleneck: we are blocking the teams. Instead, enable the stream-aligned teams to focus on a fast flow of changes, so that they are able to produce a steady flow of feature deliveries.
Spark the stream-aligned teams so that they can learn, by not holding them back: support them, and instead of telling them what to use, let them figure it out. Autonomy.
Look at what enables the fast flow of change: DevOps.
Start off by talking to the team; you can find out what their friction points are.
How do you get started with a process like this with the team?
Use Domain-Driven Design to be able to dive deeper, e.g. with event storming.
Use the Inverse Conway Maneuver to be able to enable fast flow of change.
How do we get started? Do we just like come in Monday morning and say, "All right, we're breaking up into three teams. We've got a platform team, we've got two enabling teams and go."
How do we talk to people about day one, and day two, and then day three, when it comes to these strategies?
Make the change transparent to the entire organization and well communicated.
What are the team types? What is the interaction mode? When does it make sense for teams to collaborate? When does it make sense to provide X-as-a-Service?
The biggest fear people have is about change: that they get laid off. So whenever there is a reorganization in place, because we are transforming to Team Topologies team types and their interaction modes, there will be reteaming involved.
https://www.amazon.com/Dynamic-Reteaming-Wisdom-Changing-Teams/dp/1492061298
Start by forming one team first, for example by isolating one team. While forming your platform team first, from members of the backend and infrastructure teams, to migrate to the cloud, let them discover and assess infrastructure options. They don't have to follow the existing processes; they are there to discover and explore new cloud strategies and the cloud options that are available and suitable for the first bounded context they would like to extract.
Who will be member of which teams? Who decides?
There are various ways.
It could be top down, from management. It could also be a self-selecting process, where the members of your current teams decide which teams they would like to become members of. There are different levels involved, and you can also calibrate when you form a team or reteam: calibrate on different aspects, like how you would like to work together and what you like to do, e.g. pair programming or mob programming.
What is the mission of your team? How do you introduce new team members so that they can onboard very easily? Hold different team calibration sessions that help you bring the journey forward towards the Team Topologies team types.
There are different aspects along which we can gradually and incrementally transform an existing team into a Team Topologies team type.
With the highest level of self-selection, the team members select themselves. Alternatively, management decides, with input from the teams.
How do you leverage contractors delivering components for the stream-aligned teams?
Team Topologies says that you should aim for long-lived, stable teams, but this does not mean that they need to be static. Team members can switch over time, including freelancers or contractors from outside.
Heidi Helfand also recommends enabling team members to switch teams, because that is one of the opportunities to retain your talent in-house. Let people switch teams for personal growth: if they can't grow within your organization, they will find growth opportunities outside of it.
It's the team structure that's long-lived, not the people on the team.
We talked about cognitive load in context of Wardley Maps. We talked about platform and streamlined teams. We talked about standard and standardization. We talked about Domain-Driven Design and then diving deeper into some of the components once you've identified some areas that you want to break out.
What kind of benefits does a Wardley Map bring?
It helps you visualize potential instability and its associated risk.
For example, you may have a value chain with volatile components: the bounded contexts of your core domain are volatile because they change a lot and embody quite a high level of complexity, since they are the ones providing competitive advantage.
If you build these volatile components on top of mature and stable components, that reflects a stable system. But if you switch it around, with stable components built on top of volatile components, that is a potential candidate for instability and associated risk: a component that is expected to be stable is built on volatile components that keep introducing changes, and you have to keep the stable component up to date, or keep it stable. That shifts your focus to handling the sources of risk.
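Once components are annotated with an evolution stage, this "stable built on volatile" pattern can be detected mechanically. A minimal sketch; the component names and data layout are assumptions for illustration:

```python
STAGE_ORDER = {"genesis": 0, "custom_built": 1, "product": 2, "commodity": 3}

def instability_risks(stages, dependencies):
    """Flag dependency edges where a more evolved (stable) component
    depends on a less evolved (volatile) one.

    `stages` maps component name -> evolution stage;
    `dependencies` maps component name -> list of dependency names.
    """
    risks = []
    for name, deps in dependencies.items():
        for dep in deps:
            if STAGE_ORDER[stages[name]] > STAGE_ORDER[stages[dep]]:
                risks.append((name, dep))
    return risks

stages = {"billing": "product", "recommender": "genesis"}
deps = {"billing": ["recommender"]}
print(instability_risks(stages, deps))  # [('billing', 'recommender')]
```

A flagged edge is not automatically wrong, as the next answer notes; it may be a deliberate bet on a Genesis technology. The check only creates awareness.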
What are some of the patterns seen when parts of the system are unstable?
It creates awareness that we are building stable components on top of volatile components, and that could be a potential problem.
Maybe it's on purpose, for example to discover a new technology that is in Genesis and custom built. It also surfaces efficiency gaps: there may be a component in the market that is more evolved, residing, for example, in commodity and utility. This gives us an indication, a hint, that we could be less efficient, because we are building on less efficient components residing on the left part of our Wardley Map.
When we talk about going from Genesis to commodity, there is a whole spectrum of things to consider from left to right. How do you talk to people about that?
It's about the characteristics and general properties of the components on the Wardley Map, and it depends on market perception and user perception. Consider failure occurring in a stable product versus a brand-new one: failure in the stable product would be surprising, while in the new one it would be expected.
enable or to provide that team members can switch teams because that is one of the opportunities to retain your talents in house for personal growth switch teams because if they can t grow within your organization they will find growth opportunities outside of your organization it s the team structure that s long live not the people on the team we talked about cognitive load in context of wardley maps we talked about platform and streamlined teams we talked about standard and standardization we talked about domain driven design and then diving deeper into some of the components once you ve identified some areas that you want to break out what kind of benefits brings wardley map it helps you to visualize potential instability in an associated risk for example if you have a value chain where you have volatile components for example bounded context of your core domain are volatile because they re changing a lot and they have embodied quite high level of complexities because it s the one that provides competitive advantage and so if you build these volatile components on top of mature and stable components that is reflecting a stable system but if you switch it around if you have stable components that build up on volatile components then it is a potential candidate for instabilities and associated risk because you have a stable component which is expected to be stable and it s built up on volatile components and all these introduces new changes and you have to keep the stable component up to date or you have to keep the stable component stable that shifts your focus on handling the source of risks what are some the patterns seen when parts of the system are unstable it creates awareness that we are building stable components on top of volatile components and that could be a potential problem maybe it s on purpose maybe to discover a new technologies and that new technology is in genesis and is custom build bring in are the efficiency gaps like there is a component in 
the market that is more evolved in reciting for example in commodity and utility this gives us an indication a hint that we could be less efficient because we are building on less efficient components that are then reciting on the left part of your wardley map when we talk about from genesis to commodity it s like a whole spectrum of things to consider on that left to right how do you talk to people about that it s more about the characteristics and general properties of the components of the wardley map it depends on the market perception and the user perception failure occurring in a stable brand and new product where failure in stable would be surprising and on the other hand it would be expected in the new
| 1
|
22,244
| 30,795,991,140
|
IssuesEvent
|
2023-07-31 19:55:50
|
cncf/tag-security
|
https://api.github.com/repos/cncf/tag-security
|
closed
|
[Suggestion] Perform hands-on security testing during security assessments
|
assessment-process suggestion inactive
|
### Description
I suggest that hands-on *light* "pentesting" be performed during SIG-Security assessments. As an outsider who joined SIG-Security, this is what I had assumed was happening during a SIG-Security Assessment.
Discussed in SIG-Security call on June 10, 2020.
### Impact
This will allow the assessment team to better understand the security posture of the project, identify areas of interest, gauge necessity for formal security assessment recommendation, etc. Ultimately, this culminates in the team's ability to better inform the TOC.
### Scope
This task should consume no more time than a usual SIG-Security assessment, and could be run in parallel or during the latter phase of the assessment. Adding this "feature" should not impact assessment timelines. This is not a formal security assessment, and we should provide no additional guarantees.
### Requirements
Before this could become a codified process, we would need a pool of at least three security engineers willing to contribute to the SIG-Security assessment through performing hands-on security reviews.
|
1.0
|
[Suggestion] Perform hands-on security testing during security assessments - ### Description
I suggest that hands-on *light* "pentesting" be performed during SIG-Security assessments. As an outsider who joined SIG-Security, this is what I had assumed was happening during a SIG-Security Assessment.
Discussed in SIG-Security call on June 10, 2020.
### Impact
This will allow the assessment team to better understand the security posture of the project, identify areas of interest, gauge necessity for formal security assessment recommendation, etc. Ultimately, this culminates in the team's ability to better inform the TOC.
### Scope
This task should consume no more time than a usual SIG-Security assessment, and could be run in parallel or during the latter phase of the assessment. Adding this "feature" should not impact assessment timelines. This is not a formal security assessment, and we should provide no additional guarantees.
### Requirements
Before this could become a codified process, we would need a pool of at least three security engineers willing to contribute to the SIG-Security assessment through performing hands-on security reviews.
|
process
|
perform hands on security testing during security assessments description i suggest that hands on light pentesting be performed during sig security assessments as an outsider who joined sig security this is what i had assumed was happening during a sig security assessment discussed in sig security call on june impact this will allow the assessment team to better understand the security posture of the project identify areas of interest gauge necessity for formal security assessment recommendation etc ultimately this culminates in the team s ability to better inform the toc scope this task should consume no more time than a usual sig security assessment and could be run in parallel or during the latter phase of the assessment adding this feature should not impact assessment timelines this is not a formal security assessment and we should provide no additional guarantees requirements before this could become a codified process we would need a pool of at least three security engineers willing to contribute to the sig security assessment through performing hands on security reviews
| 1
|
7,310
| 10,449,326,315
|
IssuesEvent
|
2019-09-19 08:10:16
|
stekylsha/CISC210Lab
|
https://api.github.com/repos/stekylsha/CISC210Lab
|
opened
|
Issue Tracking
|
Software Process
|
### Story
As a software developer
I want to issue management
So that I will be able to track defects and enhancement requests.
### Acceptance Criteria
Demonstrate:
- Understanding of issue tracking
- Why? How?
- Using Github ...
- create new bug issue
- create new enhancement request issue
- Coordination between issues, stories, and SCM
|
1.0
|
Issue Tracking - ### Story
As a software developer
I want to issue management
So that I will be able to track defects and enhancement requests.
### Acceptance Criteria
Demonstrate:
- Understanding of issue tracking
- Why? How?
- Using Github ...
- create new bug issue
- create new enhancement request issue
- Coordination between issues, stories, and SCM
|
process
|
issue tracking story as a software developer i want to issue management so that i will be able to track defects and enhancement requests acceptance criteria demonstrate understanding of issue tracking why how using github create new bug issue create new enhancement request issue coordination between issues stories and scm
| 1
|
29,083
| 13,940,226,802
|
IssuesEvent
|
2020-10-22 17:35:43
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
opened
|
Regression in bench_mouse_region_grid_scroll.html.drawFrameDuration.averagebench_mouse_region_grid_scroll.html.totalUiFrame.average
|
P0 engine perf: speed platform-web severe: performance severe: regression team: benchmark
|
See https://flutter-flutter-perf.skia.org/e/?begin=1603169996&end=1603293374&keys=X5379bfde3fab4e31b2b265892433822d&num_commits=50&request_type=1&xbaroffset=22107.
Pretty clear signal at this engine roll. https://github.com/flutter/flutter/pull/68651
cc @yjbanov, too.
|
True
|
Regression in bench_mouse_region_grid_scroll.html.drawFrameDuration.averagebench_mouse_region_grid_scroll.html.totalUiFrame.average - See https://flutter-flutter-perf.skia.org/e/?begin=1603169996&end=1603293374&keys=X5379bfde3fab4e31b2b265892433822d&num_commits=50&request_type=1&xbaroffset=22107.
Pretty clear signal at this engine roll. https://github.com/flutter/flutter/pull/68651
cc @yjbanov, too.
|
non_process
|
regression in bench mouse region grid scroll html drawframeduration averagebench mouse region grid scroll html totaluiframe average see pretty clear signal at this engine roll cc yjbanov too
| 0
|
810,235
| 30,232,212,676
|
IssuesEvent
|
2023-07-06 07:47:43
|
wp-media/wp-rocket
|
https://api.github.com/repos/wp-media/wp-rocket
|
closed
|
Product sorting in WooCommerce backend does not work when WP Rocket is active
|
type: enhancement 3rd party compatibility priority: high effort: [XS]
|
**Before submitting an issue please check that you've completed the following steps:**
- Made sure you're on the latest version - Y
- Used the search feature to ensure that the bug hasn't been reported before - Y
**Describe the bug**
It's not possible to **sort** products within WooCommerce products page. Instead, you can see `524 error` in the console.
**To Reproduce**
Steps to reproduce the behavior:
1. In a site with WooCommerce, WP Rocket and hosted on Cloudways (not sure what other requirements may be)
2. Go to WooCommerce product page
3. Click on a product's checkbox to try to sort it
4. The checkbox will show a loading icon that never goes away, and sorting won't be possible
**Expected behavior**
Product sorting should work as expected
**Screenshots**
Product sorting error with the loading icon stuck: https://capture.dropbox.com/drH0ISsNLOK4zhCb (@markonikolic985 screenshot)
**Additional context**
From @engahmeds3ed:
> the problem starts in WooCommerce exactly [here](https://github.com/woocommerce/woocommerce/blob/trunk/plugins/woocommerce/includes/class-wc-ajax.php#L2073)
> they get all posts (in this site the number of those posts are +500) with their menu order and loop on them and after updating their menu_order , they clean post cache [here](https://github.com/woocommerce/woocommerce/blob/trunk/plugins/woocommerce/includes/class-wc-ajax.php#LL2090C5-L2090C29) and then set the main product menu_order and also clean post cache [here](https://github.com/woocommerce/woocommerce/blob/trunk/plugins/woocommerce/includes/class-wc-ajax.php#L2111)
> so this is taking lots of time to update and clear post cache.
> My temporary fix is to disable cleaning post cache inside this method only.
>
```
add_action( 'wp_ajax_woocommerce_product_ordering', function(){
global $_wp_suspend_cache_invalidation;
$_wp_suspend_cache_invalidation = true;
}, 9 );
add_action( 'wp_ajax_woocommerce_product_ordering', function(){
global $_wp_suspend_cache_invalidation;
$_wp_suspend_cache_invalidation = false;
}, 11 );
```
> you can know more about the feature here:
> https://github.com/woocommerce/woocommerce/issues/31389
Also:
> both sites are on cloudways
Slack Dev-team escalation thread: https://wp-media.slack.com/archives/C056ZJMHG0P/p1685433858614039
Tickets: https://secure.helpscout.net/conversation/2237699192/419020?folderId=424682 and https://secure.helpscout.net/conversation/2254739596/421844/
**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
|
1.0
|
Product sorting in WooCommerce backend does not work when WP Rocket is active - **Before submitting an issue please check that you've completed the following steps:**
- Made sure you're on the latest version - Y
- Used the search feature to ensure that the bug hasn't been reported before - Y
**Describe the bug**
It's not possible to **sort** products within WooCommerce products page. Instead, you can see `524 error` in the console.
**To Reproduce**
Steps to reproduce the behavior:
1. In a site with WooCommerce, WP Rocket and hosted on Cloudways (not sure what other requirements may be)
2. Go to WooCommerce product page
3. Click on a product's checkbox to try to sort it
4. The checkbox will show a loading icon that never goes away, and sorting won't be possible
**Expected behavior**
Product sorting should work as expected
**Screenshots**
Product sorting error with the loading icon stuck: https://capture.dropbox.com/drH0ISsNLOK4zhCb (@markonikolic985 screenshot)
**Additional context**
From @engahmeds3ed:
> the problem starts in WooCommerce exactly [here](https://github.com/woocommerce/woocommerce/blob/trunk/plugins/woocommerce/includes/class-wc-ajax.php#L2073)
> they get all posts (in this site the number of those posts are +500) with their menu order and loop on them and after updating their menu_order , they clean post cache [here](https://github.com/woocommerce/woocommerce/blob/trunk/plugins/woocommerce/includes/class-wc-ajax.php#LL2090C5-L2090C29) and then set the main product menu_order and also clean post cache [here](https://github.com/woocommerce/woocommerce/blob/trunk/plugins/woocommerce/includes/class-wc-ajax.php#L2111)
> so this is taking lots of time to update and clear post cache.
> My temporary fix is to disable cleaning post cache inside this method only.
>
```
add_action( 'wp_ajax_woocommerce_product_ordering', function(){
global $_wp_suspend_cache_invalidation;
$_wp_suspend_cache_invalidation = true;
}, 9 );
add_action( 'wp_ajax_woocommerce_product_ordering', function(){
global $_wp_suspend_cache_invalidation;
$_wp_suspend_cache_invalidation = false;
}, 11 );
```
> you can know more about the feature here:
> https://github.com/woocommerce/woocommerce/issues/31389
Also:
> both sites are on cloudways
Slack Dev-team escalation thread: https://wp-media.slack.com/archives/C056ZJMHG0P/p1685433858614039
Tickets: https://secure.helpscout.net/conversation/2237699192/419020?folderId=424682 and https://secure.helpscout.net/conversation/2254739596/421844/
**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
|
non_process
|
product sorting in woocommerce backend does not work when wp rocket is active before submitting an issue please check that you ve completed the following steps made sure you re on the latest version y used the search feature to ensure that the bug hasn t been reported before y describe the bug it s not possible to sort products within woocommerce products page instead you can see error in the console to reproduce steps to reproduce the behavior in a site with woocommerce wp rocket and hosted on cloudways not sure what other requirements may be go to woocommerce product page click on a product s checkbox to try to sort it the checkbox will show a loading icon that never goes away and sorting won t be possible expected behavior product sorting should work as expected screenshots product sorting error with the loading icon stuck screenshot additional context from the problem starts in woocommerce exactly they get all posts in this site the number of those posts are with their menu order and loop on them and after updating their menu order they clean post cache and then set the main product menu order and also clean post cache so this is taking lots of time to update and clear post cache my temporary fix is to disable cleaning post cache inside this method only add action wp ajax woocommerce product ordering function global wp suspend cache invalidation wp suspend cache invalidation true add action wp ajax woocommerce product ordering function global wp suspend cache invalidation wp suspend cache invalidation false you can know more about the feature here also both sites are on cloudways slack dev team escalation thread tickets and backlog grooming for wp media dev team use only reproduce the problem identify the root cause scope a solution estimate the effort
| 0
|
106,297
| 16,673,278,753
|
IssuesEvent
|
2021-06-07 13:31:11
|
VivekBuzruk/Hygieia
|
https://api.github.com/repos/VivekBuzruk/Hygieia
|
closed
|
CVE-2018-1107 (Medium) detected in is-my-json-valid-2.13.1.tgz - autoclosed
|
security vulnerability
|
## CVE-2018-1107 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>is-my-json-valid-2.13.1.tgz</b></p></summary>
<p>A JSONSchema validator that uses code generation to be extremely fast</p>
<p>Library home page: <a href="https://registry.npmjs.org/is-my-json-valid/-/is-my-json-valid-2.13.1.tgz">https://registry.npmjs.org/is-my-json-valid/-/is-my-json-valid-2.13.1.tgz</a></p>
<p>Path to dependency file: Hygieia/UI-protractor-tests/package.json</p>
<p>Path to vulnerable library: Hygieia/UI-protractor-tests/node_modules/npm/node_modules/request/node_modules/har-validator/node_modules/is-my-json-valid/package.json</p>
<p>
Dependency Hierarchy:
- npm-failsafe-0.1.1.tgz (Root Library)
- npm-2.15.12.tgz
- request-2.74.0.tgz
- har-validator-2.0.6.tgz
- :x: **is-my-json-valid-2.13.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/VivekBuzruk/Hygieia/commit/3c4f119e4343cf7fa276bb4756361b926902248e">3c4f119e4343cf7fa276bb4756361b926902248e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
It was discovered that the is-my-json-valid JavaScript library used an inefficient regular expression to validate JSON fields defined to have email format. A specially crafted JSON file could cause it to consume an excessive amount of CPU time when validated.
<p>Publish Date: 2021-03-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1107>CVE-2018-1107</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1546357">https://bugzilla.redhat.com/show_bug.cgi?id=1546357</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 1.4.2,2.17.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-1107 (Medium) detected in is-my-json-valid-2.13.1.tgz - autoclosed - ## CVE-2018-1107 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>is-my-json-valid-2.13.1.tgz</b></p></summary>
<p>A JSONSchema validator that uses code generation to be extremely fast</p>
<p>Library home page: <a href="https://registry.npmjs.org/is-my-json-valid/-/is-my-json-valid-2.13.1.tgz">https://registry.npmjs.org/is-my-json-valid/-/is-my-json-valid-2.13.1.tgz</a></p>
<p>Path to dependency file: Hygieia/UI-protractor-tests/package.json</p>
<p>Path to vulnerable library: Hygieia/UI-protractor-tests/node_modules/npm/node_modules/request/node_modules/har-validator/node_modules/is-my-json-valid/package.json</p>
<p>
Dependency Hierarchy:
- npm-failsafe-0.1.1.tgz (Root Library)
- npm-2.15.12.tgz
- request-2.74.0.tgz
- har-validator-2.0.6.tgz
- :x: **is-my-json-valid-2.13.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/VivekBuzruk/Hygieia/commit/3c4f119e4343cf7fa276bb4756361b926902248e">3c4f119e4343cf7fa276bb4756361b926902248e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
It was discovered that the is-my-json-valid JavaScript library used an inefficient regular expression to validate JSON fields defined to have email format. A specially crafted JSON file could cause it to consume an excessive amount of CPU time when validated.
<p>Publish Date: 2021-03-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1107>CVE-2018-1107</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1546357">https://bugzilla.redhat.com/show_bug.cgi?id=1546357</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 1.4.2,2.17.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in is my json valid tgz autoclosed cve medium severity vulnerability vulnerable library is my json valid tgz a jsonschema validator that uses code generation to be extremely fast library home page a href path to dependency file hygieia ui protractor tests package json path to vulnerable library hygieia ui protractor tests node modules npm node modules request node modules har validator node modules is my json valid package json dependency hierarchy npm failsafe tgz root library npm tgz request tgz har validator tgz x is my json valid tgz vulnerable library found in head commit a href found in base branch master vulnerability details it was discovered that the is my json valid javascript library used an inefficient regular expression to validate json fields defined to have email format a specially crafted json file could cause it to consume an excessive amount of cpu time when validated publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
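As an aside on the vulnerability class described in the CVE-2018-1107 row above (an inefficient email-format regex that consumes excessive CPU on crafted input, i.e. ReDoS via catastrophic backtracking), the following minimal Python sketch illustrates the mechanism with a toy nested-quantifier pattern. The pattern here is an assumption chosen for illustration only; it is not the actual is-my-json-valid email regex.

```python
import re
import time

# Toy stand-in for the ReDoS failure mode: on a non-matching input, the
# engine tries exponentially many ways to split the string between the
# inner and outer `+` before finally giving up. NOT the real library regex.
TOY_PATTERN = re.compile(r"^(a+)+$")

def timed_match(pattern, text):
    """Attempt a match and report (matched, elapsed_seconds)."""
    start = time.perf_counter()
    matched = pattern.match(text) is not None
    return matched, time.perf_counter() - start

# A matching input is checked in roughly linear time; the crafted
# non-matching input of the same length forces ~2**18 backtracking steps.
ok, t_ok = timed_match(TOY_PATTERN, "a" * 18)
bad, t_bad = timed_match(TOY_PATTERN, "a" * 18 + "b")
print(f"benign: {ok} in {t_ok:.6f}s, crafted: {bad} in {t_bad:.6f}s")
```

This is why the suggested fix in that row is simply a version upgrade: the patched library replaces the pathological pattern rather than the validation logic.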